[{"topic": "Metagenomics", "summary": "Metagenomics is the study of genetic material recovered directly from environmental or clinical samples by a method called sequencing. The broad field may also be referred to as environmental genomics, ecogenomics, community genomics or microbiomics.\nWhile traditional microbiology and microbial genome sequencing and genomics rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods.Because of its ability to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world. As the price of DNA sequencing continues to fall, metagenomics now allows microbial ecology to be investigated at a much greater scale and detail than before. Recent studies use either \"shotgun\" or PCR directed sequencing to get largely unbiased samples of all genes from all the members of the sampled communities.", "content": "\n\n\n== Etymology ==\nThe term \"metagenomics\" was first used by Jo Handelsman, Robert M. Goodman, Michelle R. Rondon, Jon Clardy, and Sean F. Brady, and first appeared in publication in 1998. The term metagenome referenced the idea that a collection of genes sequenced from the environment could be analyzed in a way analogous to the study of a single genome. In 2005, Kevin Chen and Lior Pachter (researchers at the University of California, Berkeley) defined metagenomics as \"the application of modern genomics technique without the need for isolation and lab cultivation of individual species\".\n\n\n== History ==\nConventional sequencing begins with a culture of identical cells as a source of DNA. However, early metagenomic studies revealed that there are probably large groups of microorganisms in many environments that cannot be cultured and thus cannot be sequenced. These early studies focused on 16S ribosomal RNA (rRNA) sequences which are relatively short, often conserved within a species, and generally different between species. Many 16S rRNA sequences have been found which do not belong to any known cultured species, indicating that there are numerous non-isolated organisms. These surveys of ribosomal RNA genes taken directly from the environment revealed that cultivation based methods find less than 1% of the bacterial and archaeal species in a sample. Much of the interest in metagenomics comes from these discoveries that showed that the vast majority of microorganisms had previously gone unnoticed.\nIn the 1980s early molecular work in the field was conducted by Norman R. Pace and colleagues, who used PCR to explore the diversity of ribosomal RNA sequences. The insights gained from these breakthrough studies led Pace to propose the idea of cloning DNA directly from environmental samples as early as 1985. This led to the first report of isolating and cloning bulk DNA from an environmental sample, published by Pace and colleagues in 1991 while Pace was in the Department of Biology at Indiana University. Considerable efforts ensured that these were not PCR false positives and supported the existence of a complex community of unexplored species. 
Although this methodology was limited to exploring highly conserved, non-protein coding genes, it did support early microbial morphology-based observations that diversity was far more complex than was known by culturing methods. Soon after that, in 1995, Healy reported the metagenomic isolation of functional genes from \"zoolibraries\" constructed from a complex culture of environmental organisms grown in the laboratory on dried grasses. After leaving the Pace laboratory, Edward DeLong continued in the field and has published work that has largely laid the groundwork for environmental phylogenies based on signature 16S sequences, beginning with his group's construction of libraries from marine samples. In 2002, Mya Breitbart, Forest Rohwer, and colleagues used environmental shotgun sequencing (see below) to show that 200 liters of seawater contains over 5000 different viruses. Subsequent studies showed that there are more than a thousand viral species in human stool and possibly a million different viruses per kilogram of marine sediment, including many bacteriophages. Essentially all of the viruses in these studies were new species. In 2004, Gene Tyson, Jill Banfield, and colleagues at the University of California, Berkeley and the Joint Genome Institute sequenced DNA extracted from an acid mine drainage system. This effort resulted in the complete, or nearly complete, genomes for a handful of bacteria and archaea that had previously resisted attempts to culture them. Beginning in 2003, Craig Venter, leader of the privately funded parallel of the Human Genome Project, has led the Global Ocean Sampling Expedition (GOS), circumnavigating the globe and collecting metagenomic samples throughout the journey. All of these samples were sequenced using shotgun sequencing, in hopes that new genomes (and therefore new organisms) would be identified. The pilot project, conducted in the Sargasso Sea, found DNA from nearly 2000 different species, including 148 types of bacteria never before seen. Venter thoroughly explored the West Coast of the United States, and completed a two-year expedition to explore the Baltic, Mediterranean and Black Seas. Analysis of the metagenomic data collected during this journey revealed two groups of organisms, one composed of taxa adapted to environmental conditions of 'feast or famine', and a second composed of relatively fewer but more abundantly and widely distributed taxa primarily composed of plankton. In 2005, Stephan C. Schuster at Penn State University and colleagues published the first sequences of an environmental sample generated with high-throughput sequencing, in this case massively parallel pyrosequencing developed by 454 Life Sciences. Another early paper in this area was published in 2006 by Robert Edwards, Forest Rohwer, and colleagues at San Diego State University.\n\n\n== Sequencing ==\n\nRecovery of DNA sequences longer than a few thousand base pairs from environmental samples was very difficult until recent advances in molecular biological techniques allowed the construction of libraries in bacterial artificial chromosomes (BACs), which provided better vectors for molecular cloning.\n\n\n=== Shotgun metagenomics ===\nAdvances in bioinformatics, refinements of DNA amplification, and the proliferation of computational power have greatly aided the analysis of DNA sequences recovered from environmental samples, allowing the adaptation of shotgun sequencing to metagenomic samples (known also as whole metagenome shotgun or WMGS sequencing). 
The approach, used to sequence many cultured microorganisms and the human genome, randomly shears DNA, sequences the many resulting short fragments, and reconstructs them into a consensus sequence. Shotgun sequencing reveals genes present in environmental samples. Historically, clone libraries were used to facilitate this sequencing. However, with advances in high throughput sequencing technologies, the cloning step is no longer necessary and greater yields of sequencing data can be obtained without this labour-intensive bottleneck step. Shotgun metagenomics provides information both about which organisms are present and what metabolic processes are possible in the community. Because the collection of DNA from an environment is largely uncontrolled, the most abundant organisms in an environmental sample are most highly represented in the resulting sequence data. To achieve the high coverage needed to fully resolve the genomes of under-represented community members, large samples, often prohibitively large, are needed. On the other hand, the random nature of shotgun sequencing ensures that many of these organisms, which would otherwise go unnoticed using traditional culturing techniques, will be represented by at least some small sequence segments.\n\n\n=== High-throughput sequencing ===\nAn advantage of high-throughput sequencing is that this technique does not require cloning the DNA before sequencing, removing one of the main biases and bottlenecks in environmental sampling. The first metagenomic studies conducted using high-throughput sequencing used massively parallel 454 pyrosequencing. Three other technologies commonly applied to environmental sampling are the Ion Torrent Personal Genome Machine, the Illumina MiSeq or HiSeq, and the Applied Biosystems SOLiD system. These techniques for sequencing DNA generate shorter fragments than Sanger sequencing; the Ion Torrent PGM system and 454 pyrosequencing typically produce ~400 bp reads, Illumina MiSeq produces 400\u2013700 bp reads (depending on whether paired end options are used), and SOLiD produces 25\u201375 bp reads. Historically, these read lengths were significantly shorter than the typical Sanger sequencing read length of ~750 bp; however, Illumina technology is quickly approaching this benchmark. This limitation is compensated for by the much larger number of sequence reads. In 2009, pyrosequenced metagenomes generated 200\u2013500 megabases, and Illumina platforms generated around 20\u201350 gigabases, but these outputs have increased by orders of magnitude in recent years. An emerging approach combines shotgun sequencing and chromosome conformation capture (Hi-C), which measures the proximity of any two DNA sequences within the same cell, to guide microbial genome assembly. Long-read sequencing technologies, including the PacBio RSII and PacBio Sequel from Pacific Biosciences and the Nanopore MinION, GridION, and PromethION from Oxford Nanopore Technologies, are another option for obtaining long shotgun sequencing reads, which should ease the assembly process.\n\n\n== Bioinformatics ==\n\nThe data generated by metagenomics experiments are both enormous and inherently noisy, containing fragmented data representing as many as 10,000 species. The sequencing of the cow rumen metagenome generated 279 gigabases, or 279 billion base pairs of nucleotide sequence data, while the human gut microbiome gene catalog identified 3.3 million genes assembled from 567.7 gigabases of sequence data. 
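To make concrete the point above that shotgun read data are dominated by the most abundant community members, the sketch below estimates expected per-genome coverage from read count, read length, genome size, and relative abundance. All numbers and names are hypothetical and are not drawn from any study cited here.

```python
# Toy sketch: expected shotgun coverage per genome scales with relative abundance,
# so under-represented community members need far more total sequencing to resolve.
# All values below are assumed for illustration.

def expected_coverage(total_reads, read_len, genome_size, rel_abundance):
    """Lander-Waterman-style estimate: reads are drawn from each genome roughly
    in proportion to its relative abundance in the community."""
    reads_from_genome = total_reads * rel_abundance
    return reads_from_genome * read_len / genome_size

# assumed community of three members, each with a 4 Mb genome
community = {"abundant_member": 0.60, "common_member": 0.35, "rare_member": 0.05}
for name, fraction in community.items():
    cov = expected_coverage(total_reads=10_000_000, read_len=150,
                            genome_size=4_000_000, rel_abundance=fraction)
    print(f"{name}: ~{cov:.0f}x expected coverage")
```

With these assumed numbers the rare member receives only around 19x coverage while the abundant one exceeds 200x, illustrating why fully resolving rare genomes can require prohibitively large samples.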
Collecting, curating, and extracting useful biological information from datasets of this size represent significant computational challenges for researchers.\n\n\n=== Sequence pre-filtering ===\nThe first step of metagenomic data analysis requires the execution of certain pre-filtering steps, including the removal of redundant, low-quality sequences and sequences of probable eukaryotic origin (especially in metagenomes of human origin). The methods available for the removal of contaminating eukaryotic genomic DNA sequences include Eu-Detect and DeConseq.\n\n\n=== Assembly ===\n\nDNA sequence data from genomic and metagenomic projects are essentially the same, but genomic sequence data offers higher coverage while metagenomic data is usually highly non-redundant. Furthermore, the increased use of second-generation sequencing technologies with short read lengths means that much of future metagenomic data will be error-prone. Taken in combination, these factors make the assembly of metagenomic sequence reads into genomes difficult and unreliable. Misassemblies are caused by the presence of repetitive DNA sequences that make assembly especially difficult because of the difference in the relative abundance of species present in the sample. Misassemblies can also involve the combination of sequences from more than one species into chimeric contigs.There are several assembly programs, most of which can use information from paired-end tags in order to improve the accuracy of assemblies. Some programs, such as Phrap or Celera Assembler, were designed to be used to assemble single genomes but nevertheless produce good results when assembling metagenomic data sets. Other programs, such as Velvet assembler, have been optimized for the shorter reads produced by second-generation sequencing through the use of de Bruijn graphs. The use of reference genomes allows researchers to improve the assembly of the most abundant microbial species, but this approach is limited by the small subset of microbial phyla for which sequenced genomes are available. After an assembly is created, an additional challenge is \"metagenomic deconvolution\", or determining which sequences come from which species in the sample.\n\n\n=== Gene prediction ===\n\nMetagenomic analysis pipelines use two approaches in the annotation of coding regions in the assembled contigs. The first approach is to identify genes based upon homology with genes that are already publicly available in sequence databases, usually by BLAST searches. This type of approach is implemented in the program MEGAN4. The second, ab initio, uses intrinsic features of the sequence to predict coding regions based upon gene training sets from related organisms. This is the approach taken by programs such as GeneMark and GLIMMER. The main advantage of ab initio prediction is that it enables the detection of coding regions that lack homologs in the sequence databases; however, it is most accurate when there are large regions of contiguous genomic DNA available for comparison.\n\n\n=== Species diversity ===\n\nGene annotations provide the \"what\", while measurements of species diversity provide the \"who\". In order to connect community composition and function in metagenomes, sequences must be binned. Binning is the process of associating a particular sequence with an organism. In similarity-based binning, methods such as BLAST are used to rapidly search for phylogenetic markers or otherwise similar sequences in existing public databases. 
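Similarity-based binning as just described typically resolves reads whose database hits span multiple taxa with a lowest-common-ancestor (LCA) rule, the approach popularized by MEGAN (see Data integration below). The following is a minimal sketch using a tiny, made-up taxonomy; the names and function here are illustrative only and do not reproduce any tool's actual implementation.

```python
# Minimal sketch of LCA read assignment for similarity-based binning:
# a read whose hits span several species is placed on the deepest taxon
# common to all of its hits. The taxonomy below is a toy example.

PARENT = {
    "E_coli": "Enterobacteriaceae", "Salmonella": "Enterobacteriaceae",
    "Enterobacteriaceae": "Gammaproteobacteria", "Gammaproteobacteria": "Bacteria",
    "Bacillus": "Firmicutes", "Firmicutes": "Bacteria", "Bacteria": "root",
}

def lineage(taxon):
    """Return the path from a taxon up to the root."""
    path = [taxon]
    while path[-1] != "root":
        path.append(PARENT[path[-1]])
    return path

def lca(hit_taxa):
    """Deepest taxon shared by the lineages of all hits."""
    paths = [lineage(t) for t in hit_taxa]
    shared = set(paths[0]).intersection(*map(set, paths[1:]))
    # the deepest shared node is the one closest to the start of the first lineage
    return min(shared, key=paths[0].index)

print(lca(["E_coli", "Salmonella"]))  # -> Enterobacteriaceae
print(lca(["E_coli", "Bacillus"]))    # -> Bacteria
```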
Similarity-based binning of this kind is implemented in MEGAN. Another tool, PhymmBL, uses interpolated Markov models to assign reads. MetaPhlAn and AMPHORA are methods based on unique clade-specific markers for estimating organismal relative abundances with improved computational performances. Other tools, like mOTUs and MetaPhyler, use universal marker genes to profile prokaryotic species. With the mOTUs profiler, it is possible to profile species without a reference genome, improving the estimation of microbial community diversity. Recent methods, such as SLIMM, use the read coverage landscape of individual reference genomes to minimize false-positive hits and obtain reliable relative abundances. In composition-based binning, methods use intrinsic features of the sequence, such as oligonucleotide frequencies or codon usage bias. Once sequences are binned, it is possible to carry out comparative analysis of diversity and richness.\n\n\n=== Data integration ===\nThe massive amount of exponentially growing sequence data is a daunting challenge that is complicated by the complexity of the metadata associated with metagenomic projects. Metadata includes detailed information about the three-dimensional (including depth, or height) geography and environmental features of the sample, physical data about the sample site, and the methodology of the sampling. This information is necessary both to ensure replicability and to enable downstream analysis. Because of its importance, metadata and collaborative data review and curation require standardized data formats located in specialized databases, such as the Genomes OnLine Database (GOLD). Several tools have been developed to integrate metadata and sequence data, allowing downstream comparative analyses of different datasets using a number of ecological indices. In 2007, Folker Meyer and Robert Edwards and a team at Argonne National Laboratory and the University of Chicago released the Metagenomics Rapid Annotation using Subsystem Technology server (MG-RAST), a community resource for metagenome data set analysis. As of June 2012, over 14.8 terabases (14x10^12 bases) of DNA have been analyzed, with more than 10,000 public data sets freely available for comparison within MG-RAST. Over 8,000 users have now submitted a total of 50,000 metagenomes to MG-RAST. The Integrated Microbial Genomes/Metagenomes (IMG/M) system also provides a collection of tools for functional analysis of microbial communities based on their metagenome sequence, based upon reference isolate genomes included from the Integrated Microbial Genomes (IMG) system and the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project. One of the first standalone tools for analysing high-throughput metagenome shotgun data was MEGAN (MEta Genome ANalyzer). A first version of the program was used in 2005 to analyse the metagenomic context of DNA sequences obtained from a mammoth bone. Based on a BLAST comparison against a reference database, this tool performs both taxonomic and functional binning, by placing the reads onto the nodes of the NCBI taxonomy using a simple lowest common ancestor (LCA) algorithm or onto the nodes of the SEED or KEGG classifications, respectively. With the advent of fast and inexpensive sequencing instruments, the growth of databases of DNA sequences is now exponential (e.g., the NCBI GenBank database). 
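Returning to the composition-based binning mentioned under Species diversity above, a commonly used intrinsic feature is a contig's tetranucleotide-frequency profile, since fragments from the same genome tend to share similar oligonucleotide usage. The sketch below uses toy sequences and a plain cosine similarity; it is illustrative only and is not the method of any particular binning tool.

```python
# Sketch of composition-based binning features: a tetranucleotide-frequency
# vector per contig, compared with cosine similarity. Sequences are toy examples.
from itertools import product
from math import sqrt

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetra_profile(seq):
    """Normalized tetranucleotide-frequency vector for a sequence."""
    counts = {k: 0 for k in KMERS}
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:  # skip windows containing non-ACGT characters
            counts[kmer] += 1
    total = sum(counts.values()) or 1
    return [counts[k] / total for k in KMERS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / (norm or 1)

contig1 = "ATGCGCGTATAGCGCGTTAGCGCGATCGCGTA" * 4  # toy contigs
contig2 = "ATGCGCGTTTAGCGCGATAGCGCGTTCGCGTA" * 4
print(f"profile similarity: {cosine(tetra_profile(contig1), tetra_profile(contig2)):.3f}")
```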
Faster and more efficient tools are needed to keep pace with this growth in high-throughput sequencing data, because BLAST-based approaches such as MG-RAST or MEGAN are slow when annotating large samples (e.g., several hours to process a small or medium-sized dataset). Thus, ultra-fast classifiers have recently emerged, thanks to more affordable, powerful servers. These tools can perform taxonomic annotation at extremely high speed; for example, according to CLARK's authors, it can accurately classify \"32 million metagenomic short reads per minute\". At such a speed, a very large dataset of a billion short reads can be processed in about 30 minutes.\nWith the increasing availability of samples containing ancient DNA and due to the uncertainty associated with the nature of those samples (ancient DNA damage), a fast tool capable of producing conservative similarity estimates has been made available. According to FALCON's authors, it can use relaxed thresholds and edit distances without affecting the memory and speed performance.\n\n\n=== Comparative metagenomics ===\nComparative analyses between metagenomes can provide additional insight into the function of complex microbial communities and their role in host health. Pairwise or multiple comparisons between metagenomes can be made at the level of sequence composition (comparing GC-content or genome size), taxonomic diversity, or functional complement. Comparisons of population structure and phylogenetic diversity can be made on the basis of 16S rRNA and other phylogenetic marker genes, or\u2014in the case of low-diversity communities\u2014by genome reconstruction from the metagenomic dataset. Functional comparisons between metagenomes may be made by comparing sequences against reference databases such as COG or KEGG, and tabulating the abundance by category and evaluating any differences for statistical significance. This gene-centric approach emphasizes the functional complement of the community as a whole rather than taxonomic groups, and shows that the functional complements are analogous under similar environmental conditions. Consequently, metadata on the environmental context of the metagenomic sample is especially important in comparative analyses, as it provides researchers with the ability to study the effect of habitat upon community structure and function. Additionally, several studies have also utilized oligonucleotide usage patterns to identify the differences across diverse microbial communities. Examples of such methodologies include the dinucleotide relative abundance approach by Willner et al. and the HabiSign approach of Ghosh et al. This latter study also indicated that differences in tetranucleotide usage patterns can be used to identify genes (or metagenomic reads) originating from specific habitats. Some methods, such as TriageTools or Compareads, also detect similar reads between two read sets. The similarity measure they apply to reads is based on the number of identical words of length k shared by pairs of reads.\nA key goal in comparative metagenomics is to identify microbial group(s) which are responsible for conferring specific characteristics to a given environment. However, due to issues with sequencing technologies, artifacts need to be accounted for, as is done in metagenomeSeq. Others have characterized inter-microbial interactions between the resident microbial groups. A GUI-based comparative metagenomic analysis application called Community-Analyzer has been developed by Kuntal et al. 
\n which implements a correlation-based graph layout algorithm that not only facilitates a quick visualization of the differences in the analyzed microbial communities (in terms of their taxonomic composition), but also provides insights into the inherent inter-microbial interactions occurring therein. Notably, this layout algorithm also enables grouping of the metagenomes based on the probable inter-microbial interaction patterns rather than simply comparing abundance values of various taxonomic groups. In addition, the tool implements several interactive GUI-based functionalities that enable users to perform standard comparative analyses across microbiomes.\n\n\n== Data analysis ==\n\n\n=== Community metabolism ===\nIn many bacterial communities, natural or engineered (such as bioreactors), there is significant division of labor in metabolism (syntrophy), during which the waste products of some organisms are metabolites for others. In one such system, the methanogenic bioreactor, functional stability requires the presence of several syntrophic species (Syntrophobacterales and Synergistia) working together in order to turn raw resources into fully metabolized waste (methane). Using comparative gene studies and expression experiments with microarrays or proteomics researchers can piece together a metabolic network that goes beyond species boundaries. Such studies require detailed knowledge about which versions of which proteins are coded by which species and even by which strains of which species. Therefore, community genomic information is another fundamental tool (with metabolomics and proteomics) in the quest to determine how metabolites are transferred and transformed by a community.\n\n\n=== Metatranscriptomics ===\n\nMetagenomics allows researchers to access the functional and metabolic diversity of microbial communities, but it cannot show which of these processes are active. The extraction and analysis of metagenomic mRNA (the metatranscriptome) provides information on the regulation and expression profiles of complex communities. Because of the technical difficulties (the short half-life of mRNA, for example) in the collection of environmental RNA there have been relatively few in situ metatranscriptomic studies of microbial communities to date. While originally limited to microarray technology, metatranscriptomics studies have made use of transcriptomics technologies to measure whole-genome expression and quantification of a microbial community, first employed in analysis of ammonia oxidation in soils.\n\n\n=== Viruses ===\n\nMetagenomic sequencing is particularly useful in the study of viral communities. As viruses lack a shared universal phylogenetic marker (as 16S RNA for bacteria and archaea, and 18S RNA for eukarya), the only way to access the genetic diversity of the viral community from an environmental sample is through metagenomics. Viral metagenomes (also called viromes) should thus provide more and more information about viral diversity and evolution. For example, a metagenomic pipeline called Giant Virus Finder showed the first evidence of existence of giant viruses in a saline desert and in Antarctic dry valleys.\n\n\n== Applications ==\nMetagenomics has the potential to advance knowledge in a wide variety of fields. 
It can also be applied to solve practical challenges in medicine, engineering, agriculture, sustainability and ecology.\n\n\n=== Agriculture ===\nThe soils in which plants grow are inhabited by microbial communities, with one gram of soil containing around 10^9\u201310^10 microbial cells, which comprise about one gigabase of sequence information. The microbial communities which inhabit soils are some of the most complex known to science, and remain poorly understood despite their economic importance. Microbial consortia perform a wide variety of ecosystem services necessary for plant growth, including fixation of atmospheric nitrogen, nutrient cycling, disease suppression, and sequestration of iron and other metals. Functional metagenomics strategies are being used to explore the interactions between plants and microbes through cultivation-independent study of these microbial communities. By allowing insights into the role of previously uncultivated or rare community members in nutrient cycling and the promotion of plant growth, metagenomic approaches can contribute to improved disease detection in crops and livestock and the adaptation of enhanced farming practices which improve crop health by harnessing the relationship between microbes and plants.\n\n\n=== Biofuel ===\n\nBiofuels are fuels derived from biomass conversion, as in the conversion of cellulose contained in corn stalks, switchgrass, and other biomass into cellulosic ethanol. This process is dependent upon microbial consortia (associations) that transform the cellulose into sugars, followed by the fermentation of the sugars into ethanol. Microbes also produce a variety of sources of bioenergy including methane and hydrogen. The efficient industrial-scale deconstruction of biomass requires novel enzymes with higher productivity and lower cost. Metagenomic approaches to the analysis of complex microbial communities allow the targeted screening of enzymes with industrial applications in biofuel production, such as glycoside hydrolases. Furthermore, knowledge of how these microbial communities function is required to control them, and metagenomics is a key tool in their understanding. Metagenomic approaches allow comparative analyses between convergent microbial systems like biogas fermenters or insect herbivores such as the fungus garden of the leafcutter ants.\n\n\n=== Biotechnology ===\nMicrobial communities produce a vast array of biologically active chemicals that are used in competition and communication. Many of the drugs in use today were originally uncovered in microbes; recent progress in mining the rich genetic resource of non-culturable microbes has led to the discovery of new genes, enzymes, and natural products. The application of metagenomics has allowed the development of commodity and fine chemicals, agrochemicals and pharmaceuticals, where the benefit of enzyme-catalyzed chiral synthesis is increasingly recognized. Two types of analysis are used in the bioprospecting of metagenomic data: function-driven screening for an expressed trait, and sequence-driven screening for DNA sequences of interest. Function-driven analysis seeks to identify clones expressing a desired trait or useful activity, followed by biochemical characterization and sequence analysis. This approach is limited by the availability of a suitable screen and the requirement that the desired trait be expressed in the host cell. Moreover, the low rate of discovery (less than one per 1,000 clones screened) and its labor-intensive nature further limit this approach. 
In contrast, sequence-driven analysis uses conserved DNA sequences to design PCR primers to screen clones for the sequence of interest. In comparison to cloning-based approaches, using a sequence-only approach further reduces the amount of bench work required. The application of massively parallel sequencing also greatly increases the amount of sequence data generated, which requires high-throughput bioinformatic analysis pipelines. The sequence-driven approach to screening is limited by the breadth and accuracy of gene functions present in public sequence databases. In practice, experiments make use of a combination of both functional and sequence-based approaches based upon the function of interest, the complexity of the sample to be screened, and other factors. A successful example of using metagenomics as a biotechnology for drug discovery is the malacidin antibiotics.\n\n\n=== Ecology ===\n\nMetagenomics can provide valuable insights into the functional ecology of environmental communities. Metagenomic analysis of the bacterial consortia found in the defecations of Australian sea lions suggests that nutrient-rich sea lion faeces may be an important nutrient source for coastal ecosystems. This is because the bacteria that are expelled simultaneously with the defecations are adept at breaking down the nutrients in the faeces into a bioavailable form that can be taken up into the food chain. DNA sequencing can also be used more broadly to identify species present in a body of water, debris filtered from the air, a sample of dirt, or an animal's faeces, and even detect diet items from blood meals. This can establish the range of invasive species and endangered species, and track seasonal populations.\n\n\n=== Environmental remediation ===\n\nMetagenomics can improve strategies for monitoring the impact of pollutants on ecosystems and for cleaning up contaminated environments. Increased understanding of how microbial communities cope with pollutants improves assessments of the potential of contaminated sites to recover from pollution and increases the chances that bioaugmentation or biostimulation trials will succeed.\n\n\n=== Gut microbe characterization ===\nMicrobial communities play a key role in preserving human health, but their composition and the mechanism by which they do so remain mysterious. Metagenomic sequencing is being used to characterize the microbial communities from 15\u201318 body sites from at least 250 individuals. This is part of the Human Microbiome initiative, with primary goals to determine if there is a core human microbiome, to understand the changes in the human microbiome that can be correlated with human health, and to develop new technological and bioinformatics tools to support these goals. Another medical study, part of the MetaHit (Metagenomics of the Human Intestinal Tract) project, examined 124 individuals from Denmark and Spain, comprising healthy, overweight, and inflammatory bowel disease patients. The study attempted to categorize the depth and phylogenetic diversity of gastrointestinal bacteria. Using Illumina GA sequence data and SOAPdenovo, a de Bruijn graph-based tool specifically designed for assembling short reads, they were able to generate 6.58 million contigs greater than 500 bp for a total contig length of 10.3 Gb and an N50 length of 2.2 kb. The study demonstrated that two bacterial divisions, Bacteroidetes and Firmicutes, constitute over 90% of the known phylogenetic categories that dominate distal gut bacteria. 
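The N50 value quoted above for the MetaHit assembly summarizes the contig-length distribution: it is the length such that contigs of that length or longer contain at least half of the total assembled bases. A minimal sketch of the computation, using made-up contig lengths rather than the study's data:

```python
# Sketch of the N50 assembly statistic: sort contigs from longest to shortest
# and return the length at which the running total first reaches half of the
# total assembled bases. Contig lengths below are illustrative only.

def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([5000, 3000, 2200, 1500, 800, 500]))  # -> 3000
```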
Using the relative gene frequencies found within the gut, these researchers identified 1,244 metagenomic clusters that are critically important for the health of the intestinal tract. There are two types of functions in these clusters: housekeeping functions and those specific to the intestine. The housekeeping gene clusters are required in all bacteria and are often major players in the main metabolic pathways, including central carbon metabolism and amino acid synthesis. The gut-specific functions include adhesion to host proteins and the harvesting of sugars from globoseries glycolipids. Patients with inflammatory bowel disease were shown to exhibit 25% fewer genes and lower bacterial diversity than individuals not suffering from the disease, indicating that changes in patients' gut biome diversity may be associated with this condition. While these studies highlight some potentially valuable medical applications, only 31\u201348.8% of the reads could be aligned to 194 public human gut bacterial genomes and 7.6\u201321.2% to bacterial genomes available in GenBank, which indicates that there is still far more research necessary to capture novel bacterial genomes. In the Human Microbiome Project (HMP), gut microbial communities were assayed using high-throughput DNA sequencing. HMP showed that, unlike individual microbial species, many metabolic processes were present among all body habitats with varying frequencies. Microbial communities of 649 metagenomes drawn from seven primary body sites on 102 individuals were studied as part of the Human Microbiome Project. The metagenomic analysis revealed variations in niche-specific abundance among 168 functional modules and 196 metabolic pathways within the microbiome. These included glycosaminoglycan degradation in the gut, as well as phosphate and amino acid transport linked to host phenotype (vaginal pH) in the posterior fornix. The HMP has brought to light the utility of metagenomics in diagnostics and evidence-based medicine. Thus, metagenomics is a powerful tool to address many of the pressing issues in the field of personalized medicine. In animals, metagenomics can be used to profile gut microbiomes and enable detection of antibiotic-resistant bacteria. This can have implications in monitoring the spread of diseases from wildlife to farmed animals and humans.\n\n\n=== Infectious disease diagnosis ===\nDifferentiating between infectious and non-infectious illness, and identifying the underlying etiology of infection, can be challenging. For example, more than half of cases of encephalitis remain undiagnosed, despite extensive testing using state-of-the-art clinical laboratory methods. Clinical metagenomic sequencing shows promise as a sensitive and rapid method to diagnose infection by comparing genetic material found in a patient's sample to databases of all known microscopic human pathogens and thousands of other bacterial, viral, fungal, and parasitic organisms, as well as databases of antimicrobial resistance gene sequences with associated clinical phenotypes.\n\n\n=== Arbovirus surveillance ===\nMetagenomics has been an invaluable tool to help characterise the diversity and ecology of pathogens that are vectored by hematophagous (blood-feeding) insects such as mosquitoes and ticks. 
Metagenomics is routinely used by public health officials and organisations for the surveillance of arboviruses.\n\n\n== See also ==\nBinning\nEpidemiology and sewage\nMetaproteomics\nMicrobial ecology\nPathogenomics\n\n\n== References ==\n\n\n== External links ==\nFocus on Metagenomics at Nature Reviews Microbiology journal website\nThe \u201cCritical Assessment of Metagenome Interpretation\u201d (CAMI) initiative to evaluate methods in metagenomics", "content_traditional": "patients irritable bowel syndrome shown exhibit 25 fewer genes lower bacterial diversity individuals suffering irritable bowel syndrome indicating changes patients gut biome diversity may associated conditionwhile studies highlight potentially valuable medical applications 31\u2013488 reads could aligned 194 public human gut bacterial genomes 76\u2013212 bacterial genomes available genbank indicates still far research necessary capture novel bacterial genomesin human microbiome project hmp gut microbial communities assayed using highthroughput dna sequencing. using illumina ga sequence data soapdenovo de bruijn graphbased tool specifically designed assembly short reads able generate 658 million contigs greater 500 bp total contig length 103 gb n50 length 22 kbthe study demonstrated two bacterial divisions bacteroidetes firmicutes constitute 90 known phylogenetic categories dominate distal gut bacteria. leaving pace laboratory edward delong continued field published work largely laid groundwork environmental phylogenies based signature 16s sequences beginning groups construction libraries marine samplesin 2002 mya breitbart forest rohwer colleagues used environmental shotgun sequencing see show 200 liters seawater contains 5000 different viruses. analysis metagenomic data collected journey revealed two groups organisms one composed taxa adapted environmental conditions feast famine second composed relatively fewer abundantly widely distributed taxa primarily composed planktonin 2005 stephan c schuster penn state university colleagues published first sequences environmental sample generated highthroughput sequencing case massively parallel pyrosequencing developed 454 life sciences. bacteria expelled simultaneously defecations adept breaking nutrients faeces bioavailable form taken food chaindna sequencing also used broadly identify species present body water debris filtered air sample dirt animals faeces even detect diet items blood meals. 2009 pyrosequenced metagenomes generate 200\u2013500 megabases illumina platforms generate around 20\u201350 gigabases outputs increased orders magnitude recent yearsan emerging approach combines shotgun sequencing chromosome conformation capture hic measures proximity two dna sequences within cell guide microbial genome assembly. effort resulted complete nearly complete genomes handful bacteria archaea previously resisted attempts culture thembeginning 2003 craig venter leader privately funded parallel human genome project led global ocean sampling expedition gos circumnavigating globe collecting metagenomic samples throughout journey. part human microbiome initiative primary goals determine core human microbiome understand changes human microbiome correlated human health develop new technological bioinformatics tools support goalsanother medical study part metahit metagenomics human intestinal tract project consisted 124 individuals denmark spain consisting healthy overweight irritable bowel disease patients. 
based blast comparison reference database tool performs taxonomic functional binning placing reads onto nodes ncbi taxonomy using simple lowest common ancestor lca algorithm onto nodes seed kegg classifications respectivelywith advent fast inexpensive sequencing instruments growth databases dna sequences exponential eg ncbi genbank database. consequently metadata environmental context metagenomic sample especially important comparative analyses provides researchers ability study effect habitat upon community structure functionadditionally several studies also utilized oligonucleotide usage patterns identify differences across diverse microbial communities. allowing insights role previously uncultivated rare community members nutrient cycling promotion plant growth metagenomic approaches contribute improved disease detection crops livestock adaptation enhanced farming practices improve crop health harnessing relationship microbes plants. integrated microbial genomesmetagenomes imgm system also provides collection tools functional analysis microbial communities based metagenome sequence based upon reference isolate genomes included integrated microbial genomes img system genomic encyclopedia bacteria archaea geba projectone first standalone tools analysing highthroughput metagenome shotgun data megan meta genome analyzer. clinical metagenomic sequencing shows promise sensitive rapid method diagnose infection comparing genetic material found patients sample databases known microscopic human pathogens thousands bacterial viral fungal parasitic organisms databases antimicrobial resistances gene sequences associated clinical phenotypes. importance metadata collaborative data review curation require standardized data formats located specialized databases genomes online database goldseveral tools developed integrate metadata sequence data allowing downstream comparative analyses different datasets using number ecological indices. application metagenomics allowed development commodity fine chemicals agrochemicals pharmaceuticals benefit enzymecatalyzed chiral synthesis increasingly recognizedtwo types analysis used bioprospecting metagenomic data functiondriven screening expressed trait sequencedriven screening dna sequences interest. faster efficient tools needed keep pace highthroughput sequencing blastbased approaches mgrast megan run slowly annotate large samples eg several hours process smallmedium size datasetsample. sequencing recovery dna sequences longer thousand base pairs environmental samples difficult recent advances molecular biological techniques allowed construction libraries bacterial artificial chromosomes bacs provided better vectors molecular cloning. implements correlationbased graph layout algorithm facilitates quick visualization differences analyzed microbial communities terms taxonomic composition also provides insights inherent intermicrobial interactions occurring therein. techniques sequencing dna generate shorter fragments sanger sequencing ion torrent pgm system 454 pyrosequencing typically produces 400 bp reads illumina miseq produces 400700bp reads depending whether paired end options used solid produce 25\u201375 bp reads. misassemblies also involve combination sequences one species chimeric contigsthere several assembly programs use information pairedend tags order improve accuracy assemblies. hand random nature shotgun sequencing ensures many organisms would otherwise go unnoticed using traditional culturing techniques represented least small sequence segments. 
see also binning epidemiology sewage metaproteomics microbial ecology pathogenomics references external links focus metagenomics nature reviews microbiology journal website \u201c critical assessment metagenome interpretation \u201d cami initiative evaluate methods metagenomics 2007 folker meyer robert edwards team argonne national laboratory university chicago released metagenomics rapid annotation using subsystem technology server mgrast community resource metagenome data set analysis. shotgun metagenomics advances bioinformatics refinements dna amplification proliferation computational power greatly aided analysis dna sequences recovered environmental samples allowing adaptation shotgun sequencing metagenomic samples known also whole metagenome shotgun wmgs sequencing. main advantage ab initio prediction enables detection coding regions lack homologs sequence databases however accurate large regions contiguous genomic dna available comparison.", "custom_approach": "Another early paper in this area appeared in 2006 by Robert Edwards, Forest Rohwer, and colleagues at San Diego State University.Recovery of DNA sequences longer than a few thousand base pairs from environmental samples was very difficult until recent advances in molecular biological techniques allowed the construction of libraries in bacterial artificial chromosomes (BACs), which provided better vectors for molecular cloning.Advances in bioinformatics, refinements of DNA amplification, and the proliferation of computational power have greatly aided the analysis of DNA sequences recovered from environmental samples, allowing the adaptation of shotgun sequencing to metagenomic samples (known also as whole metagenome shotgun or WMGS sequencing). Patients with irritable bowel syndrome were shown to exhibit 25% fewer genes and lower bacterial diversity than individuals not suffering from irritable bowel syndrome indicating that changes in patients' gut biome diversity may be associated with this condition.While these studies highlight some potentially valuable medical applications, only 31\u201348.8% of the reads could be aligned to 194 public human gut bacterial genomes and 7.6\u201321.2% to bacterial genomes available in GenBank which indicates that there is still far more research necessary to capture novel bacterial genomes.In the Human Microbiome Project (HMP), gut microbial communities were assayed using high-throughput DNA sequencing. Clinical metagenomic sequencing shows promise as a sensitive and rapid method to diagnose infection by comparing genetic material found in a patient's sample to databases of all known microscopic human pathogens and thousands of other bacterial, viral, fungal, and parasitic organisms and databases on antimicrobial resistances gene sequences with associated clinical phenotypes.Metagenomics has been an invaluable tool to help characterise the diversity and ecology of pathogens that are vectored by hematophagous (blood-feeding) insects such as mosquitoes and ticks. Using Illumina GA sequence data and SOAPdenovo, a de Bruijn graph-based tool specifically designed for assembly short reads, they were able to generate 6.58 million contigs greater than 500 bp for a total contig length of 10.3 Gb and a N50 length of 2.2 kb.The study demonstrated that two bacterial divisions, Bacteroidetes and Firmicutes, constitute over 90% of the known phylogenetic categories that dominate distal gut bacteria. 
By allowing insights into the role of previously uncultivated or rare community members in nutrient cycling and the promotion of plant growth, metagenomic approaches can contribute to improved disease detection in crops and livestock and the adaptation of enhanced farming practices which improve crop health by harnessing the relationship between microbes and plants.Biofuels are fuels derived from biomass conversion, as in the conversion of cellulose contained in corn stalks, switchgrass, and other biomass into cellulosic ethanol. After leaving the Pace laboratory, Edward DeLong continued in the field and has published work that has largely laid the groundwork for environmental phylogenies based on signature 16S sequences, beginning with his group's construction of libraries from marine samples.In 2002, Mya Breitbart, Forest Rohwer, and colleagues used environmental shotgun sequencing (see below) to show that 200 liters of seawater contains over 5000 different viruses. Analysis of the metagenomic data collected during this journey revealed two groups of organisms, one composed of taxa adapted to environmental conditions of 'feast or famine', and a second composed of relatively fewer but more abundantly and widely distributed taxa primarily composed of plankton.In 2005 Stephan C. Schuster at Penn State University and colleagues published the first sequences of an environmental sample generated with high-throughput sequencing, in this case massively parallel pyrosequencing developed by 454 Life Sciences. On the other hand, the random nature of shotgun sequencing ensures that many of these organisms, which would otherwise go unnoticed using traditional culturing techniques, will be represented by at least some small sequence segments.An advantage to high throughput sequencing is that this technique does not require cloning the DNA before sequencing, removing one of the main biases and bottlenecks in environmental sampling. This is because the bacteria that are expelled simultaneously with the defecations are adept at breaking down the nutrients in the faeces into a bioavailable form that can be taken up into the food chain.DNA sequencing can also be used more broadly to identify species present in a body of water, debris filtered from the air, sample of dirt, or animal's faeces, and even detect diet items from blood meals. In 2009, pyrosequenced metagenomes generate 200\u2013500 megabases, and Illumina platforms generate around 20\u201350 gigabases, but these outputs have increased by orders of magnitude in recent years.An emerging approach combines shotgun sequencing and chromosome conformation capture (Hi-C), which measures the proximity of any two DNA sequences within the same cell, to guide microbial genome assembly. In addition, the tool implements several interactive GUI-based functionalities that enable users to perform standard comparative analyses across microbiomes.In many bacterial communities, natural or engineered (such as bioreactors), there is significant division of labor in metabolism (syntrophy), during which the waste products of some organisms are metabolites for others. This effort resulted in the complete, or nearly complete, genomes for a handful of bacteria and archaea that had previously resisted attempts to culture them.Beginning in 2003, Craig Venter, leader of the privately funded parallel of the Human Genome Project, has led the Global Ocean Sampling Expedition (GOS), circumnavigating the globe and collecting metagenomic samples throughout the journey. 
This is part of the Human Microbiome initiative with primary goals to determine if there is a core human microbiome, to understand the changes in the human microbiome that can be correlated with human health, and to develop new technological and bioinformatics tools to support these goals.Another medical study as part of the MetaHit (Metagenomics of the Human Intestinal Tract) project consisted of 124 individuals from Denmark and Spain consisting of healthy, overweight, and irritable bowel disease patients. Increased understanding of how microbial communities cope with pollutants improves assessments of the potential of contaminated sites to recover from pollution and increases the chances of bioaugmentation or biostimulation trials to succeed.Microbial communities play a key role in preserving human health, but their composition and the mechanism by which they do so remains mysterious. Based on a BLAST comparison against a reference database, this tool performs both taxonomic and functional binning, by placing the reads onto the nodes of the NCBI taxonomy using a simple lowest common ancestor (LCA) algorithm or onto the nodes of the SEED or KEGG classifications, respectively.With the advent of fast and inexpensive sequencing instruments, the growth of databases of DNA sequences is now exponential (e.g., the NCBI GenBank database ). Long read sequencing technologies, including PacBio RSII and PacBio Sequel by Pacific Biosciences, and Nanopore MinION, GridION, PromethION by Oxford Nanopore Technologies, is another choice to get long shotgun sequencing reads that should make ease in assembling process.The data generated by metagenomics experiments are both enormous and inherently noisy, containing fragmented data representing as many as 10,000 species. Consequently, metadata on the environmental context of the metagenomic sample is especially important in comparative analyses, as it provides researchers with the ability to study the effect of habitat upon community structure and function.Additionally, several studies have also utilized oligonucleotide usage patterns to identify the differences across diverse microbial communities. The Integrated Microbial Genomes/Metagenomes (IMG/M) system also provides a collection of tools for functional analysis of microbial communities based on their metagenome sequence, based upon reference isolate genomes included from the Integrated Microbial Genomes (IMG) system and the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project.One of the first standalone tools for analysing high-throughput metagenome shotgun data was MEGAN (MEta Genome ANalyzer). Collecting, curating, and extracting useful biological information from datasets of this size represent significant computational challenges for researchers.The first step of metagenomic data analysis requires the execution of certain pre-filtering steps, including the removal of redundant, low-quality sequences and sequences of probable eukaryotic origin (especially in metagenomes of human origin). 
Therefore, community genomic information is another fundamental tool (with metabolomics and proteomics) in the quest to determine how metabolites are transferred and transformed by a community.Metagenomics allows researchers to access the functional and metabolic diversity of microbial communities, but it cannot show which of these processes are active.", "combined_approach": "another early paper area appeared 2006 robert edwards forest rohwer colleagues san diego state universityrecovery dna sequences longer thousand base pairs environmental samples difficult recent advances molecular biological techniques allowed construction libraries bacterial artificial chromosomes bacs provided better vectors molecular cloningadvances bioinformatics refinements dna amplification proliferation computational power greatly aided analysis dna sequences recovered environmental samples allowing adaptation shotgun sequencing metagenomic samples known also whole metagenome shotgun wmgs sequencing. patients irritable bowel syndrome shown exhibit 25 fewer genes lower bacterial diversity individuals suffering irritable bowel syndrome indicating changes patients gut biome diversity may associated conditionwhile studies highlight potentially valuable medical applications 31\u2013488 reads could aligned 194 public human gut bacterial genomes 76\u2013212 bacterial genomes available genbank indicates still far research necessary capture novel bacterial genomesin human microbiome project hmp gut microbial communities assayed using highthroughput dna sequencing. clinical metagenomic sequencing shows promise sensitive rapid method diagnose infection comparing genetic material found patients sample databases known microscopic human pathogens thousands bacterial viral fungal parasitic organisms databases antimicrobial resistances gene sequences associated clinical phenotypesmetagenomics invaluable tool help characterise diversity ecology pathogens vectored hematophagous bloodfeeding insects mosquitoes ticks. using illumina ga sequence data soapdenovo de bruijn graphbased tool specifically designed assembly short reads able generate 658 million contigs greater 500 bp total contig length 103 gb n50 length 22 kbthe study demonstrated two bacterial divisions bacteroidetes firmicutes constitute 90 known phylogenetic categories dominate distal gut bacteria. allowing insights role previously uncultivated rare community members nutrient cycling promotion plant growth metagenomic approaches contribute improved disease detection crops livestock adaptation enhanced farming practices improve crop health harnessing relationship microbes plantsbiofuels fuels derived biomass conversion conversion cellulose contained corn stalks switchgrass biomass cellulosic ethanol. leaving pace laboratory edward delong continued field published work largely laid groundwork environmental phylogenies based signature 16s sequences beginning groups construction libraries marine samplesin 2002 mya breitbart forest rohwer colleagues used environmental shotgun sequencing see show 200 liters seawater contains 5000 different viruses. 
analysis metagenomic data collected journey revealed two groups organisms one composed taxa adapted environmental conditions feast famine second composed relatively fewer abundantly widely distributed taxa primarily composed planktonin 2005 stephan c schuster penn state university colleagues published first sequences environmental sample generated highthroughput sequencing case massively parallel pyrosequencing developed 454 life sciences. hand random nature shotgun sequencing ensures many organisms would otherwise go unnoticed using traditional culturing techniques represented least small sequence segmentsan advantage high throughput sequencing technique require cloning dna sequencing removing one main biases bottlenecks environmental sampling. bacteria expelled simultaneously defecations adept breaking nutrients faeces bioavailable form taken food chaindna sequencing also used broadly identify species present body water debris filtered air sample dirt animals faeces even detect diet items blood meals. 2009 pyrosequenced metagenomes generate 200\u2013500 megabases illumina platforms generate around 20\u201350 gigabases outputs increased orders magnitude recent yearsan emerging approach combines shotgun sequencing chromosome conformation capture hic measures proximity two dna sequences within cell guide microbial genome assembly. addition tool implements several interactive guibased functionalities enable users perform standard comparative analyses across microbiomesin many bacterial communities natural engineered bioreactors significant division labor metabolism syntrophy waste products organisms metabolites others. effort resulted complete nearly complete genomes handful bacteria archaea previously resisted attempts culture thembeginning 2003 craig venter leader privately funded parallel human genome project led global ocean sampling expedition gos circumnavigating globe collecting metagenomic samples throughout journey. part human microbiome initiative primary goals determine core human microbiome understand changes human microbiome correlated human health develop new technological bioinformatics tools support goalsanother medical study part metahit metagenomics human intestinal tract project consisted 124 individuals denmark spain consisting healthy overweight irritable bowel disease patients. increased understanding microbial communities cope pollutants improves assessments potential contaminated sites recover pollution increases chances bioaugmentation biostimulation trials succeedmicrobial communities play key role preserving human health composition mechanism remains mysterious. based blast comparison reference database tool performs taxonomic functional binning placing reads onto nodes ncbi taxonomy using simple lowest common ancestor lca algorithm onto nodes seed kegg classifications respectivelywith advent fast inexpensive sequencing instruments growth databases dna sequences exponential eg ncbi genbank database. long read sequencing technologies including pacbio rsii pacbio sequel pacific biosciences nanopore minion gridion promethion oxford nanopore technologies another choice get long shotgun sequencing reads make ease assembling processthe data generated metagenomics experiments enormous inherently noisy containing fragmented data representing many 10000 species. 
consequently metadata environmental context metagenomic sample especially important comparative analyses provides researchers ability study effect habitat upon community structure functionadditionally several studies also utilized oligonucleotide usage patterns identify differences across diverse microbial communities. integrated microbial genomesmetagenomes imgm system also provides collection tools functional analysis microbial communities based metagenome sequence based upon reference isolate genomes included integrated microbial genomes img system genomic encyclopedia bacteria archaea geba projectone first standalone tools analysing highthroughput metagenome shotgun data megan meta genome analyzer. collecting curating extracting useful biological information datasets size represent significant computational challenges researchersthe first step metagenomic data analysis requires execution certain prefiltering steps including removal redundant lowquality sequences sequences probable eukaryotic origin especially metagenomes human origin. therefore community genomic information another fundamental tool metabolomics proteomics quest determine metabolites transferred transformed communitymetagenomics allows researchers access functional metabolic diversity microbial communities show processes active."}, {"topic": "Heredity", "summary": "Heredity, also called inheritance or biological inheritance, is the passing on of traits from parents to their offspring; either through asexual reproduction or sexual reproduction, the offspring cells or organisms acquire the genetic information of their parents. Through heredity, variations between individuals can accumulate and cause species to evolve by natural selection. The study of heredity in biology is genetics.", "content": "\n\n\n== Overview ==\n\nIn humans, eye color is an example of an inherited characteristic: an individual might inherit the \"brown-eye trait\" from one of the parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype.The complete set of observable traits of the structure and behavior of an organism is called its phenotype. These traits arise from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype: a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.Heritable traits are known to be passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer that incorporates four types of bases, which are interchangeable. The Nucleic acid sequence (the sequence of bases along a particular DNA molecule) specifies the genetic information: this is comparable to a sequence of letters spelling out a passage of text. Before a cell divides through mitosis, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. A portion of a DNA molecule that specifies a single functional unit is called a gene; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. 
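The text-like view of DNA described above lends itself to a small sketch: a sequence can be held as a string over the four bases, a locus is simply a position (or slice) in that string, and a changed base at one position produces a variant form of the sequence (an allele, in the terminology used below) that may or may not alter the encoded trait. The sequences here are invented purely for illustration:

```python
# Illustrative only: an invented short sequence standing in for one locus.
reference_allele = "ATGGCTTAC"

def point_mutation(sequence, position, new_base):
    """Return a variant sequence with a single base substituted at the given position."""
    return sequence[:position] + new_base + sequence[position + 1:]

mutant_allele = point_mutation(reference_allele, 4, "A")
print(reference_allele, "->", mutant_allele)  # ATGGCTTAC -> ATGGATTAC
```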
Organisms inherit genetic material from their parents in the form of homologous chromosomes, containing a unique combination of DNA sequences that code for genes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a particular locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes within and among organisms. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization.Recent findings have confirmed important examples of heritable changes that cannot be explained by direct agency of the DNA molecule. These phenomena are classed as epigenetic inheritance systems that are causally or independently evolving over genes. Research into modes and mechanisms of epigenetic inheritance is still in its scientific infancy, but this area of research has attracted much recent activity as it broadens the scope of heritability and evolutionary biology in general. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference, and the three dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effect that modifies and feeds back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits, group heritability, and symbiogenesis. These examples of heritability that operate above the gene are covered broadly under the title of multilevel or hierarchical selection, which has been a subject of intense debate in the history of evolutionary science.\n\n\n== Relation to theory of evolution ==\n\nWhen Charles Darwin proposed his theory of evolution in 1859, one of its major problems was the lack of an underlying mechanism for heredity. Darwin believed in a mix of blending inheritance and the inheritance of acquired traits (pangenesis). Blending inheritance would lead to uniformity across populations in only a few generations and then would remove variation from a population on which natural selection could act. This led to Darwin adopting some Lamarckian ideas in later editions of On the Origin of Species and his later biological works. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) 
rather than suggesting mechanisms.Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits.The inheritance of acquired traits was shown to have little basis in the 1880s when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails.\n\n\n== History ==\n\nScientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that \"seeds\" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a \"nurse for the young life sown within her\".Ancient understandings of heredity transitioned to two debated doctrines in the 18th century. The Doctrine of Epigenesis and the Doctrine of Preformation were two distinct views of the understanding of heredity. The Doctrine of Epigenesis, originated by Aristotle, claimed that an embryo continually develops. The modifications of the parent's traits are passed off to an embryo during its lifetime. The foundation of this doctrine was based on the theory of inheritance of acquired traits. In direct opposition, the Doctrine of Preformation claimed that \"like generates like\" where the germ would evolve to yield offspring similar to the parents. The Preformationist view believed procreation was an act of revealing what had been created long before. However, this was disputed by the creation of the cell theory in the 19th century, where the fundamental unit of life is the cell, and not some preformed parts of an organism. Various hereditary mechanisms, including blending inheritance were also envisaged without being properly tested or quantified, and were later disputed. Nevertheless, people were able to develop domestic breeds of animals as well as crops through artificial selection. The inheritance of acquired traits also formed a part of early Lamarckian ideas on evolution.During the 18th century, Dutch microscopist Antonie van Leeuwenhoek (1632\u20131723) discovered \"animalcules\" in the sperm of humans and other animals. Some scientists speculated they saw a \"little man\" (homunculus) inside each sperm. These scientists formed a school of thought known as the \"spermists\". They contended the only contributions of the female to the next generation were the womb in which the homunculus grew, and prenatal influences of the womb. An opposing school of thought, the ovists, believed that the future human was in the egg, and that sperm merely stimulated the growth of the egg. Ovists thought women carried eggs containing boy and girl children, and that the gender of the offspring was determined well before conception.An early research initiative emerged in 1878 when Alpheus Hyatt led an investigation to study the laws of heredity through compiling data on family phenotypes (nose size, ear shape, etc.) and expression of pathological conditions and abnormal characteristics, particularly with respect to the age of appearance. 
One of the projects aims was to tabulate data to better understand why certain traits are consistently expressed while others are highly irregular.\n\n\n=== Gregor Mendel: father of genetics ===\n\nThe idea of particulate inheritance of genes can be attributed to the Moravian monk Gregor Mendel who published his work on pea plants in 1865. However, his work was not widely known and was rediscovered in 1901. It was initially assumed that Mendelian inheritance only accounted for large (qualitative) differences, such as those seen by Mendel in his pea plants \u2013 and the idea of additive effect of (quantitative) genes was not realised until R.A. Fisher's (1918) paper, \"The Correlation Between Relatives on the Supposition of Mendelian Inheritance\" Mendel's overall contribution gave scientists a useful overview that traits were inheritable. His pea plant demonstration became the foundation of the study of Mendelian Traits. These traits can be traced on a single locus.\n\n\n=== Modern development of genetics and heredity ===\n\nIn the 1930s, work by Fisher and others resulted in a combination of Mendelian and biometric schools into the modern evolutionary synthesis. The modern synthesis bridged the gap between experimental geneticists and naturalists; and between both and palaeontologists, stating that:\nAll evolutionary phenomena can be explained in a way consistent with known genetic mechanisms and the observational evidence of naturalists.\nEvolution is gradual: small genetic changes, recombination ordered by natural selection. Discontinuities amongst species (or other taxa) are explained as originating gradually through geographical separation and extinction (not saltation).\nSelection is overwhelmingly the main mechanism of change; even slight advantages are important when continued. The object of selection is the phenotype in its surrounding environment. The role of genetic drift is equivocal; though strongly supported initially by Dobzhansky, it was downgraded later as results from ecological genetics were obtained.\nThe primacy of population thinking: the genetic diversity carried in natural populations is a key factor in evolution. The strength of natural selection in the wild was greater than expected; the effect of ecological factors such as niche occupation and the significance of barriers to gene flow are all important.The idea that speciation occurs after populations are reproductively isolated has been much debated. In plants, polyploidy must be included in any view of speciation. Formulations such as 'evolution consists primarily of changes in the frequencies of alleles between one generation and another' were proposed rather later. The traditional view is that developmental biology ('evo-devo') played little part in the synthesis, but an account of Gavin de Beer's work by Stephen Jay Gould suggests he may be an exception.Almost all aspects of the synthesis have been challenged at times, with varying degrees of success. There is no doubt, however, that the synthesis was a great landmark in evolutionary biology. It cleared up many confusions, and was directly responsible for stimulating a great deal of research in the post-World War II era.\nTrofim Lysenko however caused a backlash of what is now called Lysenkoism in the Soviet Union when he emphasised Lamarckian ideas on the inheritance of acquired traits. 
This movement affected agricultural research and led to food shortages in the 1960s and seriously affected the USSR.There is growing evidence that there is transgenerational inheritance of epigenetic changes in humans and other animals.\n\n\n=== Common genetic disorders ===\nFragile X syndrome\nSickle cell disease\nPhenylketonuria (PKU)\nHaemophilia\n\n\n== Types ==\n\nThe description of a mode of biological inheritance consists of three main categories:\n\n1. Number of involved loci\nMonogenetic (also called \"simple\") \u2013 one locus\nOligogenic \u2013 few loci\nPolygenetic \u2013 many loci2. Involved chromosomesAutosomal \u2013 loci are not situated on a sex chromosome\nGonosomal \u2013 loci are situated on a sex chromosome\nX-chromosomal \u2013 loci are situated on the X-chromosome (the more common case)\nY-chromosomal \u2013 loci are situated on the Y-chromosome\nMitochondrial \u2013 loci are situated on the mitochondrial DNA3. Correlation genotype\u2013phenotype\nDominant\nIntermediate (also called \"codominant\")\nRecessive\nOverdominant\nUnderdominantThese three categories are part of every exact description of a mode of inheritance in the above order. In addition, more specifications may be added as follows:\n\n4. Coincidental and environmental interactionsPenetrance\nComplete\nIncomplete (percentual number)\nExpressivity\nInvariable\nVariable\nHeritability (in polygenetic and sometimes also in oligogenetic modes of inheritance)\nMaternal or paternal imprinting phenomena (also see epigenetics)5. Sex-linked interactionsSex-linked inheritance (gonosomal loci)\nSex-limited phenotype expression (e.g., cryptorchism)\nInheritance through the maternal line (in case of mitochondrial DNA loci)\nInheritance through the paternal line (in case of Y-chromosomal loci)6. Locus\u2013locus interactions\nEpistasis with other loci (e.g., overdominance)\nGene coupling with other loci (also see crossing over)\nHomozygotous lethal factors\nSemi-lethal factorsDetermination and description of a mode of inheritance is also achieved primarily through statistical analysis of pedigree data. In case the involved loci are known, methods of molecular genetics can also be employed.\n\n\n=== Dominant and recessive alleles ===\nAn allele is said to be dominant if it is always expressed in the appearance of an organism (phenotype) provided that at least one copy of it is present. For example, in peas the allele for green pods, G, is dominant to that for yellow pods, g. Thus pea plants with the pair of alleles either GG (homozygote) or Gg (heterozygote) will have green pods. The allele for yellow pods is recessive. The effects of this allele are only seen when it is present in both chromosomes, gg (homozygote). This derives from Zygosity, the degree to which both copies of a chromosome or gene have the same genetic sequence, in other words, the degree of similarity of the alleles in an organism.\n\n\t\t\n\n\n== See also ==\n\n\n== References ==\n\n\n== External links ==\n\nStanford Encyclopedia of Philosophy entry on Heredity and Heritability\n\"\"Experiments in Plant Hybridization\" (1866), by Johann Gregor Mendel\", by A. Andrei at the Embryo Project Encyclopedia", "content_traditional": "initially assumed mendelian inheritance accounted large qualitative differences seen mendel pea plants \u2013 idea additive effect quantitative genes realised ra fishers 1918 paper correlation relatives supposition mendelian inheritance mendels overall contribution gave scientists useful overview traits inheritable. 
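The pea-pod example above (green-pod allele G dominant over yellow-pod allele g) can be made concrete with a Punnett-square-style cross: each parent passes on one of its two alleles, and the phenotype is green whenever at least one G is present. A minimal sketch; the function names are illustrative:

```python
from itertools import product

def phenotype(genotype):
    """G (green pods) is dominant, so one copy suffices; gg gives yellow pods."""
    return "green pods" if "G" in genotype else "yellow pods"

def cross(parent1, parent2):
    """All combinations of one allele from each parent (a Punnett square)."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

for genotype in cross("Gg", "Gg"):
    print(genotype, "->", phenotype(genotype))
# Gg x Gg yields GG, Gg, Gg, gg: on average three green-pod plants per yellow-pod plant.
```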
ovists thought women carried eggs containing boy girl children gender offspring determined well conceptionan early research initiative emerged 1878 alpheus hyatt led investigation study laws heredity compiling data family phenotypes nose size ear shape etc. however people tan easily others due differences genotype striking example people inherited trait albinism tan sensitive sunburnheritable traits known passed one generation next via dna molecule encodes genetic information. traditional view developmental biology evodevo played little part synthesis account gavin de beers work stephen jay gould suggests may exceptionalmost aspects synthesis challenged times varying degrees success. developmental biologists suggest complex interactions genetic networks communication among cells lead heritable variations may underlie mechanics developmental plasticity canalizationrecent findings confirmed important examples heritable changes explained direct agency dna molecule. strength natural selection wild greater expected effect ecological factors niche occupation significance barriers gene flow importantthe idea speciation occurs populations reproductively isolated much debated. mutation occurs within gene new allele may affect trait gene controls altering phenotype organismhowever simple correspondence allele trait works cases traits complex controlled multiple interacting genes within among organisms. galton found evidence support aspects darwins pangenesis model relied acquired traitsthe inheritance acquired traits shown little basis 1880s august weismann cut tails many generations mice found offspring continued develop tails. history scientists antiquity variety ideas heredity theophrastus proposed male flowers caused female flowers ripen hippocrates speculated seeds produced various body parts transmitted offspring time conception aristotle thought male female fluids mixed conception. dna methylation marking chromatin selfsustaining metabolic loops gene silencing rna interference three dimensional conformation proteins prions areas epigenetic inheritance systems discovered organismic level. inheritance acquired traits also formed part early lamarckian ideas evolutionduring 18th century dutch microscopist antonie van leeuwenhoek 1632\u20131723 discovered animalcules sperm humans animals. research modes mechanisms epigenetic inheritance still scientific infancy area research attracted much recent activity broadens scope heritability evolutionary biology general. darwins primary approach heredity outline appeared work noticing traits expressed explicitly parent time reproduction could inherited certain traits could sexlinked etc. locus \u2013 locus interactions epistasis loci eg overdominance gene coupling loci also see crossing homozygotous lethal factors semilethal factorsdetermination description mode inheritance also achieved primarily statistical analysis pedigree data. aeschylus 458 bc proposed male parent female nurse young life sown within herancient understandings heredity transitioned two debated doctrines 18th century. examples heritability operate gene covered broadly title multilevel hierarchical selection subject intense debate history evolutionary science. modern synthesis bridged gap experimental geneticists naturalists palaeontologists stating evolutionary phenomena explained way consistent known genetic mechanisms observational evidence naturalists. 
coincidental environmental interactionspenetrance complete incomplete percentual number expressivity invariable variable heritability polygenetic sometimes also oligogenetic modes inheritance maternal paternal imprinting phenomena also see epigenetics5 rather suggesting mechanismsdarwins initial model heredity adopted heavily modified cousin francis galton laid framework biometric school heredity. see also references external links stanford encyclopedia philosophy entry heredity heritability experiments plant hybridization 1866 johann gregor mendel andrei embryo project encyclopedia movement affected agricultural research led food shortages 1960s seriously affected ussrthere growing evidence transgenerational inheritance epigenetic changes humans animals. correlation genotype \u2013 phenotype dominant intermediate also called codominant recessive overdominant underdominantthese three categories part every exact description mode inheritance order. dominant recessive alleles allele said dominant always expressed appearance organism phenotype provided least one copy present. derives zygosity degree copies chromosome gene genetic sequence words degree similarity alleles organism. trofim lysenko however caused backlash called lysenkoism soviet union emphasised lamarckian ideas inheritance acquired traits. however disputed creation cell theory 19th century fundamental unit life cell preformed parts organism. role genetic drift equivocal though strongly supported initially dobzhansky downgraded later results ecological genetics obtained. gregor mendel father genetics idea particulate inheritance genes attributed moravian monk gregor mendel published work pea plants 1865. blending inheritance would lead uniformity across populations generations would remove variation population natural selection could act. common genetic disorders fragile x syndrome sickle cell disease phenylketonuria pku haemophilia types description mode biological inheritance consists three main categories 1. relation theory evolution charles darwin proposed theory evolution 1859 one major problems lack underlying mechanism heredity. inherited traits controlled genes complete set genes within organisms genome called genotypethe complete set observable traits structure behavior organism called phenotype. example suntanned skin comes interaction persons genotype sunlight thus suntans passed peoples children. organisms inherit genetic material parents form homologous chromosomes containing unique combination dna sequences code genes. example peas allele green pods g dominant yellow pods g thus pea plants pair alleles either gg homozygote gg heterozygote green pods. cleared many confusions directly responsible stimulating great deal research postworld war ii era.", "custom_approach": "Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits.The inheritance of acquired traits was shown to have little basis in the 1880s when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails.Scientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that \"seeds\" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. 
It was initially assumed that Mendelian inheritance only accounted for large (qualitative) differences, such as those seen by Mendel in his pea plants \u2013 and the idea of additive effect of (quantitative) genes was not realised until R.A. Fisher's (1918) paper, \"The Correlation Between Relatives on the Supposition of Mendelian Inheritance\" Mendel's overall contribution gave scientists a useful overview that traits were inheritable. Ovists thought women carried eggs containing boy and girl children, and that the gender of the offspring was determined well before conception.An early research initiative emerged in 1878 when Alpheus Hyatt led an investigation to study the laws of heredity through compiling data on family phenotypes (nose size, ear shape, etc.) These examples of heritability that operate above the gene are covered broadly under the title of multilevel or hierarchical selection, which has been a subject of intense debate in the history of evolutionary science.When Charles Darwin proposed his theory of evolution in 1859, one of its major problems was the lack of an underlying mechanism for heredity. However, some people tan more easily than others, due to differences in their genotype: a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.Heritable traits are known to be passed from one generation to the next via DNA, a molecule that encodes genetic information. The traditional view is that developmental biology ('evo-devo') played little part in the synthesis, but an account of Gavin de Beer's work by Stephen Jay Gould suggests he may be an exception.Almost all aspects of the synthesis have been challenged at times, with varying degrees of success. This movement affected agricultural research and led to food shortages in the 1960s and seriously affected the USSR.There is growing evidence that there is transgenerational inheritance of epigenetic changes in humans and other animals.Fragile X syndrome Sickle cell disease Phenylketonuria (PKU) HaemophiliaThe description of a mode of biological inheritance consists of three main categories: 1. One of the projects aims was to tabulate data to better understand why certain traits are consistently expressed while others are highly irregular.The idea of particulate inheritance of genes can be attributed to the Moravian monk Gregor Mendel who published his work on pea plants in 1865. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization.Recent findings have confirmed important examples of heritable changes that cannot be explained by direct agency of the DNA molecule. The strength of natural selection in the wild was greater than expected; the effect of ecological factors such as niche occupation and the significance of barriers to gene flow are all important.The idea that speciation occurs after populations are reproductively isolated has been much debated. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes within and among organisms. 
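Fisher's additive view of quantitative genes, mentioned above, can be sketched numerically: if many loci each add a small amount to a trait, the trait value is essentially a sum of many small contributions, which is why such traits vary continuously rather than falling into a few discrete classes. The locus count, allele frequency, and effect size below are arbitrary illustration choices:

```python
import random

NUM_LOCI = 100          # arbitrary illustration values
EFFECT_PER_ALLELE = 0.5
ALLELE_FREQUENCY = 0.5

def trait_value():
    """Sum of small additive contributions from two allele copies at each locus."""
    copies = sum(random.random() < ALLELE_FREQUENCY for _ in range(2 * NUM_LOCI))
    return copies * EFFECT_PER_ALLELE

population = [trait_value() for _ in range(10_000)]
mean = sum(population) / len(population)
print(f"mean trait value ~ {mean:.1f} "
      f"(expected {2 * NUM_LOCI * ALLELE_FREQUENCY * EFFECT_PER_ALLELE:.1f})")
```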
DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference, and the three dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. In case the involved loci are known, methods of molecular genetics can also be employed.An allele is said to be dominant if it is always expressed in the appearance of an organism (phenotype) provided that at least one copy of it is present. The inheritance of acquired traits also formed a part of early Lamarckian ideas on evolution.During the 18th century, Dutch microscopist Antonie van Leeuwenhoek (1632\u20131723) discovered \"animalcules\" in the sperm of humans and other animals. Research into modes and mechanisms of epigenetic inheritance is still in its scientific infancy, but this area of research has attracted much recent activity as it broadens the scope of heritability and evolutionary biology in general. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) Locus\u2013locus interactions Epistasis with other loci (e.g., overdominance) Gene coupling with other loci (also see crossing over) Homozygotous lethal factors Semi-lethal factorsDetermination and description of a mode of inheritance is also achieved primarily through statistical analysis of pedigree data. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a \"nurse for the young life sown within her\".Ancient understandings of heredity transitioned to two debated doctrines in the 18th century. The modern synthesis bridged the gap between experimental geneticists and naturalists; and between both and palaeontologists, stating that: All evolutionary phenomena can be explained in a way consistent with known genetic mechanisms and the observational evidence of naturalists. Coincidental and environmental interactionsPenetrance Complete Incomplete (percentual number) Expressivity Invariable Variable Heritability (in polygenetic and sometimes also in oligogenetic modes of inheritance) Maternal or paternal imprinting phenomena (also see epigenetics)5. rather than suggesting mechanisms.Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. These traits can be traced on a single locus.In the 1930s, work by Fisher and others resulted in a combination of Mendelian and biometric schools into the modern evolutionary synthesis. Correlation genotype\u2013phenotype Dominant Intermediate (also called \"codominant\") Recessive Overdominant UnderdominantThese three categories are part of every exact description of a mode of inheritance in the above order. This derives from Zygosity, the degree to which both copies of a chromosome or gene have the same genetic sequence, in other words, the degree of similarity of the alleles in an organism. Trofim Lysenko however caused a backlash of what is now called Lysenkoism in the Soviet Union when he emphasised Lamarckian ideas on the inheritance of acquired traits. However, this was disputed by the creation of the cell theory in the 19th century, where the fundamental unit of life is the cell, and not some preformed parts of an organism. 
The role of genetic drift is equivocal; though strongly supported initially by Dobzhansky, it was downgraded later as results from ecological genetics were obtained. Blending inheritance would lead to uniformity across populations in only a few generations and then would remove variation from a population on which natural selection could act. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype.The complete set of observable traits of the structure and behavior of an organism is called its phenotype. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. Organisms inherit genetic material from their parents in the form of homologous chromosomes, containing a unique combination of DNA sequences that code for genes. For example, in peas the allele for green pods, G, is dominant to that for yellow pods, g. Thus pea plants with the pair of alleles either GG (homozygote) or Gg (heterozygote) will have green pods. It cleared up many confusions, and was directly responsible for stimulating a great deal of research in the post-World War II era. Formulations such as 'evolution consists primarily of changes in the frequencies of alleles between one generation and another' were proposed rather later.", "combined_approach": "galton found evidence support aspects darwins pangenesis model relied acquired traitsthe inheritance acquired traits shown little basis 1880s august weismann cut tails many generations mice found offspring continued develop tailsscientists antiquity variety ideas heredity theophrastus proposed male flowers caused female flowers ripen hippocrates speculated seeds produced various body parts transmitted offspring time conception aristotle thought male female fluids mixed conception. initially assumed mendelian inheritance accounted large qualitative differences seen mendel pea plants \u2013 idea additive effect quantitative genes realised ra fishers 1918 paper correlation relatives supposition mendelian inheritance mendels overall contribution gave scientists useful overview traits inheritable. ovists thought women carried eggs containing boy girl children gender offspring determined well conceptionan early research initiative emerged 1878 alpheus hyatt led investigation study laws heredity compiling data family phenotypes nose size ear shape etc. examples heritability operate gene covered broadly title multilevel hierarchical selection subject intense debate history evolutionary sciencewhen charles darwin proposed theory evolution 1859 one major problems lack underlying mechanism heredity. however people tan easily others due differences genotype striking example people inherited trait albinism tan sensitive sunburnheritable traits known passed one generation next via dna molecule encodes genetic information. traditional view developmental biology evodevo played little part synthesis account gavin de beers work stephen jay gould suggests may exceptionalmost aspects synthesis challenged times varying degrees success. movement affected agricultural research led food shortages 1960s seriously affected ussrthere growing evidence transgenerational inheritance epigenetic changes humans animalsfragile x syndrome sickle cell disease phenylketonuria pku haemophiliathe description mode biological inheritance consists three main categories 1. 
one projects aims tabulate data better understand certain traits consistently expressed others highly irregularthe idea particulate inheritance genes attributed moravian monk gregor mendel published work pea plants 1865. developmental biologists suggest complex interactions genetic networks communication among cells lead heritable variations may underlie mechanics developmental plasticity canalizationrecent findings confirmed important examples heritable changes explained direct agency dna molecule. strength natural selection wild greater expected effect ecological factors niche occupation significance barriers gene flow importantthe idea speciation occurs populations reproductively isolated much debated. mutation occurs within gene new allele may affect trait gene controls altering phenotype organismhowever simple correspondence allele trait works cases traits complex controlled multiple interacting genes within among organisms. dna methylation marking chromatin selfsustaining metabolic loops gene silencing rna interference three dimensional conformation proteins prions areas epigenetic inheritance systems discovered organismic level. case involved loci known methods molecular genetics also employedan allele said dominant always expressed appearance organism phenotype provided least one copy present. inheritance acquired traits also formed part early lamarckian ideas evolutionduring 18th century dutch microscopist antonie van leeuwenhoek 1632\u20131723 discovered animalcules sperm humans animals. research modes mechanisms epigenetic inheritance still scientific infancy area research attracted much recent activity broadens scope heritability evolutionary biology general. darwins primary approach heredity outline appeared work noticing traits expressed explicitly parent time reproduction could inherited certain traits could sexlinked etc. locus \u2013 locus interactions epistasis loci eg overdominance gene coupling loci also see crossing homozygotous lethal factors semilethal factorsdetermination description mode inheritance also achieved primarily statistical analysis pedigree data. aeschylus 458 bc proposed male parent female nurse young life sown within herancient understandings heredity transitioned two debated doctrines 18th century. modern synthesis bridged gap experimental geneticists naturalists palaeontologists stating evolutionary phenomena explained way consistent known genetic mechanisms observational evidence naturalists. coincidental environmental interactionspenetrance complete incomplete percentual number expressivity invariable variable heritability polygenetic sometimes also oligogenetic modes inheritance maternal paternal imprinting phenomena also see epigenetics5 rather suggesting mechanismsdarwins initial model heredity adopted heavily modified cousin francis galton laid framework biometric school heredity. traits traced single locusin 1930s work fisher others resulted combination mendelian biometric schools modern evolutionary synthesis. correlation genotype \u2013 phenotype dominant intermediate also called codominant recessive overdominant underdominantthese three categories part every exact description mode inheritance order. derives zygosity degree copies chromosome gene genetic sequence words degree similarity alleles organism. trofim lysenko however caused backlash called lysenkoism soviet union emphasised lamarckian ideas inheritance acquired traits. however disputed creation cell theory 19th century fundamental unit life cell preformed parts organism. 
role genetic drift equivocal though strongly supported initially dobzhansky downgraded later results ecological genetics obtained. blending inheritance would lead uniformity across populations generations would remove variation population natural selection could act. inherited traits controlled genes complete set genes within organisms genome called genotypethe complete set observable traits structure behavior organism called phenotype. example suntanned skin comes interaction persons genotype sunlight thus suntans passed peoples children. organisms inherit genetic material parents form homologous chromosomes containing unique combination dna sequences code genes. example peas allele green pods g dominant yellow pods g thus pea plants pair alleles either gg homozygote gg heterozygote green pods. cleared many confusions directly responsible stimulating great deal research postworld war ii era. formulations evolution consists primarily changes frequencies alleles one generation another proposed rather later."}, {"topic": "R/K selection theory", "summary": "In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of individual parental investment of r-strategists, or on a reduced quantity of offspring with a corresponding increased parental investment of K-strategists, varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as \"cheap\" or \"expensive\", a comment on the expendable nature of the offspring and parental commitment made. The stability of the environment can predict if many expendable offspring are made or if fewer offspring of higher quality would lead to higher reproductive success. An unstable environment would encourage the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because they are more likely to survive to adulthood.\nThe terminology of r/K-selection was coined by the ecologists Robert MacArthur and E. O. Wilson in 1967 based on their work on island biogeography; although the concept of the evolution of life history strategies has a longer history (see e.g. plant strategies).\nThe theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies. A life-history paradigm has replaced the r/K selection paradigm, but continues to incorporate its important themes as a subset of life history theory. Some scientists now prefer to use the terms fast versus slow life history as a replacement for, respectively, r versus K reproductive strategy.\n\n", "content": "\n== Overview ==\n\nIn r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection. These terms, r and K, are drawn from standard ecological algebra as illustrated in the simplified Verhulst model of population dynamics:\nwhere N is the population, r is the maximum growth rate, K is the carrying capacity of the local environment, and dN/dt, the derivative of N with respect to time t, is the rate of change in population with time. 
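The formula itself appears to have dropped out of the passage above. From the surrounding definitions, the simplified Verhulst (logistic) model being described is

\[ \frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right), \]

so growth is roughly exponential at rate r while N is far below K, slows as N approaches the carrying capacity K, and becomes negative when N exceeds K.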
Thus, the equation relates the growth rate of the population N to the current population size, incorporating the effect of the two constant parameters r and K.\n(Note that decrease is negative growth.) The choice of the letter K came from the German Kapazit\u00e4tsgrenze (capacity limit), while r came from rate.\n\n\n=== r-selection ===\nr-selected species are those that emphasize high growth rates, typically exploit less-crowded ecological niches, and produce many offspring, each of which has a relatively low probability of surviving to adulthood (i.e., high r, low K). A typical r species is the dandelion (genus Taraxacum).\nIn unstable or unpredictable environments, r-selection predominates due to the ability to reproduce rapidly. There is little advantage in adaptations that permit successful competition with other organisms, because the environment is likely to change again. Among the traits that are thought to characterize r-selection are high fecundity, small body size, early maturity onset, short generation time, and the ability to disperse offspring widely.\nOrganisms whose life history is subject to r-selection are often referred to as r-strategists or r-selected. Organisms that exhibit r-selected traits can range from bacteria and diatoms, to insects and grasses, to various semelparous cephalopods and small mammals, particularly rodents.\n\n\n=== K-selection ===\n\nBy contrast, K-selected species display traits associated with living at densities close to carrying capacity and typically are strong competitors in such crowded niches, that invest more heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., low r, high K). In scientific literature, r-selected species are occasionally referred to as \"opportunistic\" whereas K-selected species are described as \"equilibrium\".In stable or predictable environments, K-selection predominates as the ability to compete successfully for limited resources is crucial and populations of K-selected organisms typically are very constant in number and close to the maximum that the environment can bear (unlike r-selected populations, where population sizes can change much more rapidly).\nTraits that are thought to be characteristic of K-selection include large body size, long life expectancy, and the production of fewer offspring, which often require extensive parental care until they mature. Organisms whose life history is subject to K-selection are often referred to as K-strategists or K-selected. Organisms with K-selected traits include large organisms such as elephants, humans, and whales, but also smaller long-lived organisms such as Arctic terns, parrots and eagles.\n\n\n=== Continuous spectrum ===\nAlthough some organisms are identified as primarily r- or K-strategists, the majority of organisms do not follow this pattern. For instance, trees have traits such as longevity and strong competitiveness that characterise them as K-strategists. 
In reproduction, however, trees typically produce thousands of offspring and disperse them widely, traits characteristic of r-strategists.Similarly, reptiles such as sea turtles display both r- and K-traits: although sea turtles are large organisms with long lifespans (provided they reach adulthood), they produce large numbers of unnurtured offspring.\nThe r/K dichotomy can be re-expressed as a continuous spectrum using the economic concept of discounted future returns, with r-selection corresponding to large discount rates and K-selection corresponding to small discount rates.\n\n\n== Ecological succession ==\nIn areas of major ecological disruption or sterilisation (such as after a major volcanic eruption, as at Krakatoa or Mount St. Helens), r- and K-strategists play distinct roles in the ecological succession that regenerates the ecosystem. Because of their higher reproductive rates and ecological opportunism, primary colonisers typically are r-strategists and they are followed by a succession of increasingly competitive flora and fauna. The ability of an environment to increase energetic content, through photosynthetic capture of solar energy, increases with the increase in complex biodiversity as r species proliferate to reach a peak possible with K strategies.Eventually a new equilibrium is approached (sometimes referred to as a climax community), with r-strategists gradually being replaced by K-strategists which are more competitive and better adapted to the emerging micro-environmental characteristics of the landscape. Traditionally, biodiversity was considered maximized at this stage, with introductions of new species resulting in the replacement and local extinction of endemic species. However, the intermediate disturbance hypothesis posits that intermediate levels of disturbance in a landscape create patches at different levels of succession, promoting coexistence of colonizers and competitors at the regional scale.\n\n\n== Application ==\nWhile usually applied at the level of species, r/K selection theory is also useful in studying the evolution of ecological and life history differences between subspecies, for instance the African honey bee, A. m. scutellata, and the Italian bee, A. m. ligustica. At the other end of the scale, it has also been used to study the evolutionary ecology of whole groups of organisms, such as bacteriophages. Other researchers have proposed that the evolution of human inflammatory responses is related to r/K selection.Some researchers, such as Lee Ellis, J. Philippe Rushton, and Aurelio Jos\u00e9 Figueredo, have applied r/K selection theory to various human behaviors, including crime, sexual promiscuity, fertility, IQ, and other traits related to life history theory. Rushton's work resulted in him developing \"differential K theory\" to attempt to explain many variations in human behavior across geographic areas, a theory which has been criticized by many other researchers.\n\n\n== Status ==\nAlthough r/K selection theory became widely used during the 1970s, it also began to attract more critical attention. In particular, a review by the ecologist Stephen C. 
Stearns drew attention to gaps in the theory, and to ambiguities in the interpretation of empirical data for testing it.In 1981, a review of the r/K selection literature by Parry demonstrated that there was no agreement among researchers using the theory about the definition of r- and K-selection, which led him to question whether the assumption of a relation between reproductive expenditure and packaging of offspring was justified. A 1982 study by Templeton and Johnson showed that in a population of Drosophila mercatorum under K-selection the population actually produced a higher frequency of traits typically associated with r-selection. Several other studies contradicting the predictions of r/K selection theory were also published between 1977 and 1994.When Stearns reviewed the status of the theory in 1992, he noted that from 1977 to 1982 there was an average of 42 references to the theory per year in the BIOSIS literature search service, but from 1984 to 1989 the average dropped to 16 per year and continued to decline. He concluded that r/K theory was a once useful heuristic that no longer serves a purpose in life history theory.More recently, the panarchy theories of adaptive capacity and resilience promoted by C. S. Holling and Lance Gunderson have revived interest in the theory, and use it as a way of integrating social systems, economics and ecology.Writing in 2002, Reznick and colleagues reviewed the controversy regarding r/K selection theory and concluded that: \n\nThe distinguishing feature of the r- and K-selection paradigm was the focus on density-dependent selection as the important agent of selection on organisms' life histories. This paradigm was challenged as it became clear that other factors, such as age-specific mortality, could provide a more mechanistic causative link between an environment and an optimal life history (Wilbur et al. 1974; Stearns 1976, 1977). The r- and K-selection paradigm was replaced by new paradigm that focused on age-specific mortality (Stearns, 1976; Charlesworth, 1980). This new life-history paradigm has matured into one that uses age-structured models as a framework to incorporate many of the themes important to the r\u2013K paradigm.\nAlternative approaches are now available both for studying life history evolution (e.g. Leslie matrix for an age-structured population) and for density-dependent selection (e.g. variable density lottery model).\n\n\n== See also ==\nEvolutionary game theory\nLife history theory\nMinimax/maximin strategy\nRuderal species\nSemelparity and iteroparity\nTrivers\u2013Willard hypothesis\n\n\n== References ==", "content_traditional": "concluded rk theory useful heuristic longer serves purpose life history theorymore recently panarchy theories adaptive capacity resilience promoted c holling lance gunderson revived interest theory use way integrating social systems economics ecologywriting 2002 reznick colleagues reviewed controversy regarding rk selection theory concluded distinguishing feature r kselection paradigm focus densitydependent selection important agent selection organisms life histories. ability environment increase energetic content photosynthetic capture solar energy increases increase complex biodiversity r species proliferate reach peak possible k strategieseventually new equilibrium approached sometimes referred climax community rstrategists gradually replaced kstrategists competitive better adapted emerging microenvironmental characteristics landscape. 
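The Leslie matrix mentioned above as part of the newer age-structured framework can be sketched briefly: the population is a vector of age classes, the first row of the matrix holds age-specific fecundities, the sub-diagonal holds survival probabilities, and one matrix-vector multiplication projects the population a single time step forward. The three-age-class numbers below are invented for illustration:

```python
# Toy Leslie-matrix projection (all values invented for illustration).
LESLIE = [
    [0.0, 1.5, 1.0],   # offspring produced per individual in each age class
    [0.6, 0.0, 0.0],   # survival from age class 0 to age class 1
    [0.0, 0.4, 0.0],   # survival from age class 1 to age class 2
]

def project(matrix, population):
    """One time step: new population vector = matrix * current population vector."""
    return [sum(row[j] * population[j] for j in range(len(population))) for row in matrix]

population = [100.0, 50.0, 20.0]
for year in range(1, 4):
    population = project(LESLIE, population)
    print(year, [round(n, 1) for n in population])
```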
particular review ecologist stephen c stearns drew attention gaps theory ambiguities interpretation empirical data testing itin 1981 review rk selection literature parry demonstrated agreement among researchers using theory definition r kselection led question whether assumption relation reproductive expenditure packaging offspring justified. scientific literature rselected species occasionally referred opportunistic whereas kselected species described equilibriumin stable predictable environments kselection predominates ability compete successfully limited resources crucial populations kselected organisms typically constant number close maximum environment bear unlike rselected populations population sizes change much rapidly. several studies contradicting predictions rk selection theory also published 1977 1994when stearns reviewed status theory 1992 noted 1977 1982 average 42 references theory per year biosis literature search service 1984 1989 average dropped 16 per year continued decline. kselection contrast kselected species display traits associated living densities close carrying capacity typically strong competitors crowded niches invest heavily fewer offspring relatively high probability surviving adulthood ie low r high k. researchers proposed evolution human inflammatory responses related rk selectionsome researchers lee ellis j philippe rushton aurelio jos\u00e9 figueredo applied rk selection theory various human behaviors including crime sexual promiscuity fertility iq traits related life history theory. reproduction however trees typically produce thousands offspring disperse widely traits characteristic rstrategistssimilarly reptiles sea turtles display r ktraits although sea turtles large organisms long lifespans provided reach adulthood produce large numbers unnurtured offspring. terms r k drawn standard ecological algebra illustrated simplified verhulst model population dynamics n population r maximum growth rate k carrying capacity local environment dndt derivative n respect time rate change population time. paradigm challenged became clear factors agespecific mortality could provide mechanistic causative link environment optimal life history wilbur et al. application usually applied level species rk selection theory also useful studying evolution ecological life history differences subspecies instance african honey bee scutellata italian bee ligustica. traits thought characteristic kselection include large body size long life expectancy production fewer offspring often require extensive parental care mature. rselection rselected species emphasize high growth rates typically exploit lesscrowded ecological niches produce many offspring relatively low probability surviving adulthood ie high r low k. ecological succession areas major ecological disruption sterilisation major volcanic eruption krakatoa mount st helens r kstrategists play distinct roles ecological succession regenerates ecosystem. rushtons work resulted developing differential k theory attempt explain many variations human behavior across geographic areas theory criticized many researchers. among traits thought characterize rselection high fecundity small body size early maturity onset short generation time ability disperse offspring widely. 1982 study templeton johnson showed population drosophila mercatorum kselection population actually produced higher frequency traits typically associated rselection. 
higher reproductive rates ecological opportunism primary colonisers typically rstrategists followed succession increasingly competitive flora fauna. rk dichotomy reexpressed continuous spectrum using economic concept discounted future returns rselection corresponding large discount rates kselection corresponding small discount rates. new lifehistory paradigm matured one uses agestructured models framework incorporate many themes important r \u2013 k paradigm. organisms exhibit rselected traits range bacteria diatoms insects grasses various semelparous cephalopods small mammals particularly rodents. end scale also used study evolutionary ecology whole groups organisms bacteriophages. organisms kselected traits include large organisms elephants humans whales also smaller longlived organisms arctic terns parrots eagles. however intermediate disturbance hypothesis posits intermediate levels disturbance landscape create patches different levels succession promoting coexistence colonizers competitors regional scale. traditionally biodiversity considered maximized stage introductions new species resulting replacement local extinction endemic species. thus equation relates growth rate population n current population size incorporating effect two constant parameters r k note decrease negative growth. little advantage adaptations permit successful competition organisms environment likely change. see also evolutionary game theory life history theory minimaxmaximin strategy ruderal species semelparity iteroparity trivers \u2013 willard hypothesis references continuous spectrum although organisms identified primarily r kstrategists majority organisms follow pattern. status although rk selection theory became widely used 1970s also began attract critical attention. r kselection paradigm replaced new paradigm focused agespecific mortality stearns 1976 charlesworth 1980. overview rk selection theory selective pressures hypothesised drive evolution one two generalized directions r kselection. organisms whose life history subject kselection often referred kstrategists kselected. organisms whose life history subject rselection often referred rstrategists rselected. instance trees traits longevity strong competitiveness characterise kstrategists. unstable unpredictable environments rselection predominates due ability reproduce rapidly. choice letter k came german kapazit\u00e4tsgrenze capacity limit r came rate. alternative approaches available studying life history evolution eg.", "custom_approach": "In r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection. These terms, r and K, are drawn from standard ecological algebra as illustrated in the simplified Verhulst model of population dynamics: where N is the population, r is the maximum growth rate, K is the carrying capacity of the local environment, and dN/dt, the derivative of N with respect to time t, is the rate of change in population with time. Thus, the equation relates the growth rate of the population N to the current population size, incorporating the effect of the two constant parameters r and K. (Note that decrease is negative growth.) 
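As a numeric companion to the model just restated, a short discrete-time sketch shows the behaviour the text describes: growth is fastest while N is well below K and stalls as N approaches the carrying capacity. The parameter values and step size are arbitrary illustration choices:

```python
# Discrete-time (Euler) sketch of dN/dt = r*N*(1 - N/K); all parameters illustrative.
r, K, dt = 0.5, 1000.0, 1.0
N = 10.0
for step in range(1, 21):
    N += r * N * (1 - N / K) * dt
    if step % 5 == 0:
        print(f"t={step:2d}  N={N:7.1f}")
# N rises quickly while far below K, then levels off near the carrying capacity.
```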
The choice of the letter K came from the German Kapazit\u00e4tsgrenze (capacity limit), while r came from rate.r-selected species are those that emphasize high growth rates, typically exploit less-crowded ecological niches, and produce many offspring, each of which has a relatively low probability of surviving to adulthood (i.e., high r, low K). A typical r species is the dandelion (genus Taraxacum). In unstable or unpredictable environments, r-selection predominates due to the ability to reproduce rapidly. There is little advantage in adaptations that permit successful competition with other organisms, because the environment is likely to change again. Among the traits that are thought to characterize r-selection are high fecundity, small body size, early maturity onset, short generation time, and the ability to disperse offspring widely. Organisms whose life history is subject to r-selection are often referred to as r-strategists or r-selected. Organisms that exhibit r-selected traits can range from bacteria and diatoms, to insects and grasses, to various semelparous cephalopods and small mammals, particularly rodents.By contrast, K-selected species display traits associated with living at densities close to carrying capacity and typically are strong competitors in such crowded niches, that invest more heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., low r, high K). In scientific literature, r-selected species are occasionally referred to as \"opportunistic\" whereas K-selected species are described as \"equilibrium\".In stable or predictable environments, K-selection predominates as the ability to compete successfully for limited resources is crucial and populations of K-selected organisms typically are very constant in number and close to the maximum that the environment can bear (unlike r-selected populations, where population sizes can change much more rapidly). Traits that are thought to be characteristic of K-selection include large body size, long life expectancy, and the production of fewer offspring, which often require extensive parental care until they mature. Organisms whose life history is subject to K-selection are often referred to as K-strategists or K-selected. Organisms with K-selected traits include large organisms such as elephants, humans, and whales, but also smaller long-lived organisms such as Arctic terns, parrots and eagles.Although some organisms are identified as primarily r- or K-strategists, the majority of organisms do not follow this pattern. For instance, trees have traits such as longevity and strong competitiveness that characterise them as K-strategists. In reproduction, however, trees typically produce thousands of offspring and disperse them widely, traits characteristic of r-strategists.Similarly, reptiles such as sea turtles display both r- and K-traits: although sea turtles are large organisms with long lifespans (provided they reach adulthood), they produce large numbers of unnurtured offspring. The r/K dichotomy can be re-expressed as a continuous spectrum using the economic concept of discounted future returns, with r-selection corresponding to large discount rates and K-selection corresponding to small discount rates.In areas of major ecological disruption or sterilisation (such as after a major volcanic eruption, as at Krakatoa or Mount St. Helens), r- and K-strategists play distinct roles in the ecological succession that regenerates the ecosystem. 
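As a toy illustration of the discounted-future-returns reading of the r/K spectrum mentioned above, the sketch below compares two invented reproductive schedules: an "r-like" schedule that produces offspring early and a "K-like" schedule that produces more offspring later. The offspring counts, timings and discount rates are illustrative assumptions, not values from the text.

```python
import math

# Toy comparison of two invented reproductive schedules under the
# discounted-future-returns view of the r/K spectrum. Offspring counts,
# timings and discount rates are illustrative assumptions only.

def discounted_value(schedule, discount_rate):
    """Present value of offspring produced at each (time, count) pair."""
    return sum(count * math.exp(-discount_rate * t) for t, count in schedule)

r_like = [(1, 10), (2, 10)]   # many offspring, produced early
k_like = [(5, 15), (6, 15)]   # more offspring in total, but produced later

for rate in (1.0, 0.05):      # large vs. small discount rate
    pv_r = discounted_value(r_like, rate)
    pv_k = discounted_value(k_like, rate)
    better = "r-like" if pv_r > pv_k else "K-like"
    print(f"discount rate {rate:>4}: r-like={pv_r:6.2f}  K-like={pv_k:6.2f}  -> {better}")
```

Under the large discount rate the early, r-like schedule has the higher present value; under the small discount rate the later, K-like schedule wins, mirroring the correspondence described above.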
Because of their higher reproductive rates and ecological opportunism, primary colonisers typically are r-strategists and they are followed by a succession of increasingly competitive flora and fauna. The ability of an environment to increase energetic content, through photosynthetic capture of solar energy, increases with the increase in complex biodiversity as r species proliferate to reach a peak possible with K strategies.Eventually a new equilibrium is approached (sometimes referred to as a climax community), with r-strategists gradually being replaced by K-strategists which are more competitive and better adapted to the emerging micro-environmental characteristics of the landscape. Traditionally, biodiversity was considered maximized at this stage, with introductions of new species resulting in the replacement and local extinction of endemic species. However, the intermediate disturbance hypothesis posits that intermediate levels of disturbance in a landscape create patches at different levels of succession, promoting coexistence of colonizers and competitors at the regional scale.While usually applied at the level of species, r/K selection theory is also useful in studying the evolution of ecological and life history differences between subspecies, for instance the African honey bee, A. m. scutellata, and the Italian bee, A. m. ligustica. At the other end of the scale, it has also been used to study the evolutionary ecology of whole groups of organisms, such as bacteriophages. Other researchers have proposed that the evolution of human inflammatory responses is related to r/K selection.Some researchers, such as Lee Ellis, J. Philippe Rushton, and Aurelio Jos\u00e9 Figueredo, have applied r/K selection theory to various human behaviors, including crime, sexual promiscuity, fertility, IQ, and other traits related to life history theory. Rushton's work resulted in him developing \"differential K theory\" to attempt to explain many variations in human behavior across geographic areas, a theory which has been criticized by many other researchers.Although r/K selection theory became widely used during the 1970s, it also began to attract more critical attention. In particular, a review by the ecologist Stephen C. Stearns drew attention to gaps in the theory, and to ambiguities in the interpretation of empirical data for testing it.In 1981, a review of the r/K selection literature by Parry demonstrated that there was no agreement among researchers using the theory about the definition of r- and K-selection, which led him to question whether the assumption of a relation between reproductive expenditure and packaging of offspring was justified. A 1982 study by Templeton and Johnson showed that in a population of Drosophila mercatorum under K-selection the population actually produced a higher frequency of traits typically associated with r-selection. Several other studies contradicting the predictions of r/K selection theory were also published between 1977 and 1994.When Stearns reviewed the status of the theory in 1992, he noted that from 1977 to 1982 there was an average of 42 references to the theory per year in the BIOSIS literature search service, but from 1984 to 1989 the average dropped to 16 per year and continued to decline. He concluded that r/K theory was a once useful heuristic that no longer serves a purpose in life history theory.More recently, the panarchy theories of adaptive capacity and resilience promoted by C. S. 
Holling and Lance Gunderson have revived interest in the theory, and use it as a way of integrating social systems, economics and ecology.Writing in 2002, Reznick and colleagues reviewed the controversy regarding r/K selection theory and concluded that: The distinguishing feature of the r- and K-selection paradigm was the focus on density-dependent selection as the important agent of selection on organisms' life histories. This paradigm was challenged as it became clear that other factors, such as age-specific mortality, could provide a more mechanistic causative link between an environment and an optimal life history (Wilbur et al. 1974; Stearns 1976, 1977). The r- and K-selection paradigm was replaced by new paradigm that focused on age-specific mortality (Stearns, 1976; Charlesworth, 1980). This new life-history paradigm has matured into one that uses age-structured models as a framework to incorporate many of the themes important to the r\u2013K paradigm. Alternative approaches are now available both for studying life history evolution (e.g. Leslie matrix for an age-structured population) and for density-dependent selection (e.g. variable density lottery model).", "combined_approach": "rk selection theory selective pressures hypothesised drive evolution one two generalized directions r kselection. terms r k drawn standard ecological algebra illustrated simplified verhulst model population dynamics n population r maximum growth rate k carrying capacity local environment dndt derivative n respect time rate change population time. thus equation relates growth rate population n current population size incorporating effect two constant parameters r k note decrease negative growth. choice letter k came german kapazit\u00e4tsgrenze capacity limit r came raterselected species emphasize high growth rates typically exploit lesscrowded ecological niches produce many offspring relatively low probability surviving adulthood ie high r low k. typical r species dandelion genus taraxacum. unstable unpredictable environments rselection predominates due ability reproduce rapidly. little advantage adaptations permit successful competition organisms environment likely change. among traits thought characterize rselection high fecundity small body size early maturity onset short generation time ability disperse offspring widely. organisms whose life history subject rselection often referred rstrategists rselected. organisms exhibit rselected traits range bacteria diatoms insects grasses various semelparous cephalopods small mammals particularly rodentsby contrast kselected species display traits associated living densities close carrying capacity typically strong competitors crowded niches invest heavily fewer offspring relatively high probability surviving adulthood ie low r high k. scientific literature rselected species occasionally referred opportunistic whereas kselected species described equilibriumin stable predictable environments kselection predominates ability compete successfully limited resources crucial populations kselected organisms typically constant number close maximum environment bear unlike rselected populations population sizes change much rapidly. traits thought characteristic kselection include large body size long life expectancy production fewer offspring often require extensive parental care mature. organisms whose life history subject kselection often referred kstrategists kselected. 
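The review quoted above names the Leslie matrix for an age-structured population as one of the approaches that replaced the r/K framework. The sketch below is a minimal, generic illustration of that tool; the fecundity and survival values are invented for illustration and carry no empirical meaning.

```python
import numpy as np

# Minimal Leslie-matrix sketch for an age-structured population.
# Fecundities (top row) and survival probabilities (sub-diagonal) are
# invented for illustration; they are not taken from the text.

fecundity = [0.0, 1.5, 2.0]   # offspring per individual in each age class
survival = [0.6, 0.4]         # probability of surviving to the next age class

L = np.zeros((3, 3))
L[0, :] = fecundity           # reproduction feeds age class 0
L[1, 0] = survival[0]         # ageing from class 0 to class 1
L[2, 1] = survival[1]         # ageing from class 1 to class 2

n = np.array([100.0, 50.0, 20.0])   # initial numbers in each age class
for year in range(5):
    n = L @ n                        # project the population one time step

growth_rate = max(abs(np.linalg.eigvals(L)))  # dominant eigenvalue
print("population after 5 steps:", np.round(n, 1))
print("asymptotic growth rate (lambda):", round(float(growth_rate), 3))
```

The dominant eigenvalue of the matrix gives the asymptotic growth rate, which age-structured life-history models typically use as the measure of fitness in place of a simple r or K.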
organisms kselected traits include large organisms elephants humans whales also smaller longlived organisms arctic terns parrots eaglesalthough organisms identified primarily r kstrategists majority organisms follow pattern. instance trees traits longevity strong competitiveness characterise kstrategists. reproduction however trees typically produce thousands offspring disperse widely traits characteristic rstrategistssimilarly reptiles sea turtles display r ktraits although sea turtles large organisms long lifespans provided reach adulthood produce large numbers unnurtured offspring. rk dichotomy reexpressed continuous spectrum using economic concept discounted future returns rselection corresponding large discount rates kselection corresponding small discount ratesin areas major ecological disruption sterilisation major volcanic eruption krakatoa mount st helens r kstrategists play distinct roles ecological succession regenerates ecosystem. higher reproductive rates ecological opportunism primary colonisers typically rstrategists followed succession increasingly competitive flora fauna. ability environment increase energetic content photosynthetic capture solar energy increases increase complex biodiversity r species proliferate reach peak possible k strategieseventually new equilibrium approached sometimes referred climax community rstrategists gradually replaced kstrategists competitive better adapted emerging microenvironmental characteristics landscape. traditionally biodiversity considered maximized stage introductions new species resulting replacement local extinction endemic species. however intermediate disturbance hypothesis posits intermediate levels disturbance landscape create patches different levels succession promoting coexistence colonizers competitors regional scalewhile usually applied level species rk selection theory also useful studying evolution ecological life history differences subspecies instance african honey bee scutellata italian bee ligustica. end scale also used study evolutionary ecology whole groups organisms bacteriophages. researchers proposed evolution human inflammatory responses related rk selectionsome researchers lee ellis j philippe rushton aurelio jos\u00e9 figueredo applied rk selection theory various human behaviors including crime sexual promiscuity fertility iq traits related life history theory. rushtons work resulted developing differential k theory attempt explain many variations human behavior across geographic areas theory criticized many researchersalthough rk selection theory became widely used 1970s also began attract critical attention. particular review ecologist stephen c stearns drew attention gaps theory ambiguities interpretation empirical data testing itin 1981 review rk selection literature parry demonstrated agreement among researchers using theory definition r kselection led question whether assumption relation reproductive expenditure packaging offspring justified. 1982 study templeton johnson showed population drosophila mercatorum kselection population actually produced higher frequency traits typically associated rselection. several studies contradicting predictions rk selection theory also published 1977 1994when stearns reviewed status theory 1992 noted 1977 1982 average 42 references theory per year biosis literature search service 1984 1989 average dropped 16 per year continued decline. 
concluded rk theory useful heuristic longer serves purpose life history theorymore recently panarchy theories adaptive capacity resilience promoted c holling lance gunderson revived interest theory use way integrating social systems economics ecologywriting 2002 reznick colleagues reviewed controversy regarding rk selection theory concluded distinguishing feature r kselection paradigm focus densitydependent selection important agent selection organisms life histories. paradigm challenged became clear factors agespecific mortality could provide mechanistic causative link environment optimal life history wilbur et al. 1974 stearns 1976 1977. r kselection paradigm replaced new paradigm focused agespecific mortality stearns 1976 charlesworth 1980. new lifehistory paradigm matured one uses agestructured models framework incorporate many themes important r \u2013 k paradigm. alternative approaches available studying life history evolution eg. leslie matrix agestructured population densitydependent selection eg. variable density lottery model."}, {"topic": "Fire adaptations", "summary": "Fire adaptations are life history traits of plants and animals that help them survive wildfire or to use resources created by wildfire. These traits can help plants and animals increase their survival rates during a fire and/or reproduce offspring after a fire. Both plants and animals have multiple strategies for surviving and reproducing after fire.", "content": "\n\n\n== Plant adaptations to fire ==\nUnlike animals, plants are not able to move physically during a fire. However, plants have their own ways to survive a fire event or recover after a fire. The strategies can be classified into three types: resist (above-ground parts survive fire), recover (evade mortality by sprouting), and recruit (seed germination after the fire). Fire plays a role as a filter that can select different fire response traits.\n\n\n=== Resist ===\n\n\n==== Thick bark ====\n\nFire impacts plants most directly via heat damage. However, new studies indicate that hydraulic failure kills trees during a fire in addition to fire scorching. High temperature cuts the water supply to the canopy and causes the death of the tree. Fortunately, thick bark can protect plants because they keep stems away from high temperature. Under the protection of bark, living tissue won't have direct contact with fire and the survival rate of plants will be increased. Heat resistance is a function of bark thermal diffusivity (a property of the species) and bark thickness (increasing exponentially with bark thickness). Thick bark is common in species adapted to surface or low-severity fire regimes. On the other hand, plants in crown or high-severity fire regimes usually have thinner barks because it is meaningless to invest in thick bark without it conferring an advantage in survivorship.\n\n\n==== Self-pruning branches ====\nSelf-pruning is another trait of plants to resist fires. Self-pruning branches can reduce the chance for surface fire to reach the canopy because ladder fuels are removed. Self-pruning branches are common in surface or low-severity fire regimes.\n\n\n=== Recover ===\n\n\n==== Epicormic buds ====\n\nEpicormic buds are dormant buds under the bark or even deeper. Buds can turn active and grow due to environmental stress such as fire or drought. This trait can help plants to recover their canopies rapidly after a fire. For example, eucalypts are known for this trait. 
The bark may be removed or burnt by severe fires, but buds are still able to germinate and recover. This trait is common in surface or low-severity fire regimes.\n\n\n==== Lignotubers ====\n\nNot all plants have thick bark and epicormic buds. But for some shrubs and trees, their buds are located below ground, which are able to re-sprout even when the stems are killed by fire. Lignotubers, woody structures around the roots of plants that contain many dormant buds and nutrients such as starch, are very helpful for plants to recover after a fire. If the stem is damaged by a fire, these buds will sprout, forming basal shoots. Species with lignotubers are often seen in crown or high-severity fire regimes (e.g., chamise in chaparral).\n\n\n==== Clonal spread ====\nClonal spread is usually triggered by fires and other forms of removal of above-ground stems. The buds from the mother plant can develop into basal shoots or suckers from roots some distance from the plant. Aspen and Californian redwoods are two examples of clonal spread. In clonal communities, all the individuals developed vegetatively from a single ancestor rather than by sexual reproduction. For example, Pando is a large clonal aspen colony in Utah that developed from a single quaking aspen tree. There are currently more than 40,000 trunks in this colony, and the root system is about 80,000 years old.\n\n\n=== Recruit ===\n\n\n==== Serotiny ====\n\nSerotiny is a seed dispersal strategy in which the dissemination of seeds is stimulated by external triggers (such as fires) rather than by natural maturation. For serotinous plants, seeds are protected by woody structures during fires and will germinate after the fire. This trait can be found in conifer genera in both the northern and southern hemispheres as well as in flowering plant families (e.g., Banksia). Serotiny is a typical trait in crown or high-severity fire regimes.\n\n\n==== Fire stimulated germination ====\nMany species persist in a long-lived soil seed bank, and are stimulated to germinate via thermal scarification or smoke exposure.\n\n\n==== Fire-stimulated flowering ====\nA less common strategy is fire-stimulated flowering.\n\n\n==== Dispersal ====\nSpecies with very high wind dispersal capacity and seed production are often the first arrivals after a fire or other soil disturbance. For example, fireweed is common in burned areas in the western United States.\n\n\n== Plants and fire regimes ==\nThe fire regime exerts a strong filter on which plant species may occur in a given locality. For example, trees in high-severity regimes usually have thin bark while trees in low-severity regimes typically have thick bark. Another example is that trees in surface fire regimes tend to have epicormic buds rather than basal buds. On the other hand, plants can also alter fire regimes. Oaks, for example, produce a litter layer which slows down fire spread, while pines create a flammable duff layer which increases fire spread. More profoundly, the composition of species can influence fire regimes even when the climate remains unchanged. For example, mixed forests consisting of conifers and chaparral can be found in the Cascade Mountains. Conifers burn with low-severity surface fires while chaparral burns with high-severity crown fires. Ironically, some trees can \"use\" fires to help them survive competition with other trees. 
Pine trees, for example, can produce flammable litter layers, which help them gain an advantage in competition with other, less fire-adapted, species.\n\n\n== Evolution of fire survival traits ==\nPhylogenetic studies indicate that fire-adaptive traits have evolved over a long time (tens of millions of years) and that these traits are associated with the environment. In habitats with regular surface fires, similar species developed traits such as thick bark and self-pruning branches. In crown fire regimes, pines have evolved traits such as retaining dead branches in order to attract fires. These traits are inherited from the fire-sensitive ancestors of modern pines. Other traits such as serotiny and fire-stimulated flowering have also evolved over millions of years. Some species are capable of using flammability to establish their habitats. For example, trees evolved with fire-embracing traits can \"sacrifice\" themselves during fires. But they also cause fires to spread and kill their less flammable neighbors. With the help of other fire-adaptive traits such as serotiny, flammable trees will occupy the gaps created by fires and colonize the habitat.\n\n\n== Animals' adaptations to fires ==\n\n\n=== Direct effects of fires on animals ===\nMost animals have sufficient mobility to successfully evade fires. Vertebrates such as large mammals and adult birds are usually capable of escaping from fires. However, young animals which lack mobility may suffer from fires and have high mortality. Ground-dwelling invertebrates are less impacted by fires (due to the low thermal diffusivity of soil) while tree-living invertebrates may be killed by crown fires but survive surface fires. Animals are seldom killed by fires directly. Of the animals killed during the Yellowstone fires of 1988, asphyxiation is believed to be the primary cause of death.\n\n\n=== Long term effects of fires on animals ===\nMore importantly, fires have long-term effects on the post-burn environment. Fires in seldom-burned rainforests can cause disasters. For example, El Ni\u00f1o-induced surface fires in central Brazilian Amazonia have seriously affected the habitats of birds and primates. Fires also expose animals to dangers such as humans or predators. Generally, in a habitat that previously supported more understory species and fewer open-site species, a fire may shift the fauna toward more open-site species and far fewer understory species. However, the habitat will normally recover to its original structure.\n\n\n== Animals and fire regimes ==\n\nJust as plants may alter fire regimes, animals also have impacts on fire regimes. For example, grazing animals consume fuel for fires and reduce the likelihood of future fires. Many animals play roles as designers of fire regimes. Prairie dogs, for example, are rodents which are common in North America. They are able to control fires by grazing grasses too short to burn.\n\n\n== Animal use of fire ==\n\nFires are not always detrimental. Burnt areas usually have better quality and accessibility of food for animals, which attracts animals to forage from nearby habitats. For example, fires can kill trees, and dead trees can attract insects. Birds are attracted by the abundance of food, and they can spread the seeds of herbaceous plants. Eventually large herbivores will also flourish. Also, large mammals prefer newly burnt areas because they need less vigilance against predators. An example of an animal's use of fire is the black kite, a carnivorous bird which can be found globally. 
Although it is still not confirmed, black kites were witnessed to carry smoldering sticks to deliberately start fires. These birds can then capture the escaping insects and rodents.\n\n\n== Summary ==\nBoth plants and animals have multiple strategies to adapt with fires. Moreover, both plants and animals are capable of altering fire regimes. Humans know how to use fires, and plants and animals \"know\" it as well.\n\n\n== See also ==\nFire Effects Information System\nFire ecology\nDisturbance (ecology)\n\n\n== References ==", "content_traditional": "hand plants crown highseverity fire regimes usually thinner barks meaningless invest thick bark without conferring advantage survivorship. lignotubers woody structures around roots plants contains many dormant buds nutrients starch helpful plants recover fire. strategies classified three types resist aboveground parts survive fire recover evade mortality sprouting recruit seed germination fire. pine trees example produce flammable litter layers help take advantage completion less fire adapted species. grounddwelling invertebrates less impacted fires due low thermal diffusivity soil treeliving invertebrates may killed crown fires survive surface fires. shrubs trees buds located ground able resprout even stems killed fire. evolution fire survival traits phylogenetic studies indicated fire adaptive traits evolved long time tens millions years traits associated environment. recruit serotiny serotiny seed dispersal strategy dissemination seeds stimulated external triggers fires rather natural maturation. help fire adaptive traits serotiny flammable trees occupy gap created fires colonize habitat. fire stimulated germination many species persist longlived soil seed bank stimulated germinate via thermal scarification smoke exposure. protection bark living tissue wo nt direct contact fire survival rate plants increased. trait found conifer genera northern southern hemispheres well flowering plant families eg banksia. dispersal species high wind dispersal capacity seed production often first arrivals fire soil disturbance. example el ni\u00f1oinduced surface fires central brazilian amazonia seriously affected habitats birds primates. bark may removed burnt severe fires buds still able germinate recover. burnt areas usually better quality accessibility foods animals attract animals forage nearby habitats. currently 40000 trunks colony root system 80000 years old. habitats regular surface fires similar species developed traits thick bark selfpruning branches. selfpruning branches reduce chance surface fire reach canopy ladder fuels removed. plants fire regimes fire regime exerts strong filter plant species may occur given locality. example animals uses fires black kite carnivorous bird found globally. although still confirmed black kites witnessed carry smoldering sticks deliberately start fires. another example trees surface fire regimes tend epicormic buds rather basal buds. crown fire regimes pines evolved traits retaining dead branches order attract fires. oaks example produce litter layer slows fire spread pines create flammable duff layer increases fire spread. serotinous plants seeds protected woody structures fires germinate fire. clonal communities individuals developed vegetatively one single ancestor rather reproduced sexually. generally habitat previously understory species less open site species fire may replace fauna structure open species much less understory species. traits serotiny firestimulating flowering also evolved millions years. 
example pando large clonal aspen colony utah developed single quaking aspen tree. animals killed yellowstone fires 1988 asphyxiation believed primary cause death. example mixed forests consists conifers chaparral found cascade mountains. however new studies indicate hydraulic failure kills trees fire addition fire scorching. species lignotubers often seen crown highseverity fire regimes eg chamise chaparral. birds attracted abundance food spread seeds herbaceous plants. profoundly composition species influence fire regimes even climate remains unchanged. example trees highseverity regimes usually thin bark trees lowseverity regimes typically thick bark. vertebrates large mammals adult birds usually capable escaping fires. also large mammals prefer newly burnt areas need less vigilance predators. buds turn active grow due environmental stress fire drought. fortunately thick bark protect plants keep stems away high temperature. thick bark common species adapted surface lowseverity fire regimes. heat resistance function bark thermal diffusivity property species bark thickness increasing exponentially bark thickness. buds mother plant develop basal shoots suckers roots distance plant. ironically trees use fires help survive competitions trees. clonal spread clonal spread usually triggered fires forms removal aboveground stems. however young animals lack mobility may suffer fires high mortality. case stem damaged fire buds sprout forming basal shoots. animals adaptations fires direct effects fires animals animals sufficient mobility successfully evade fires. plant adaptations fire unlike animals plants able move physically fire. however plants ways survive fire event recover fire. also cause fires spread kill less flammable neighbors. example trees evolved fireembracing traits sacrifice fires. example grazing animals consume fuel fires reduce possibilities future fires. high temperature cuts water supply canopy causes death tree. selfpruning branches common surface lowseverity fire regimes. animals fire regimes like plants may alter fire regimes animals also impacts fire regimes. conifers burn lowseverity surface fires chaparral burns highseverity crown fires. long term effects fires animals importantly fires longterm effects postburn environment. able control fires grazing grasses short burn. example fireweed common burned areas western united states. trait help plants recover canopies rapidly fire. serotiny typical trait crown highseverity fire regimes. summary plants animals multiple strategies adapt fires. traits inherited firesensitive ancestors modern pines. trait common surface lowseverity fire regimes. fire plays role filter select different fire response traits. resist thick bark fire impacts plants directly via heat damage. prairie dogs example rodents common north america. species capable using flammability establish habitats. fires also expose animals dangers humans predators. moreover plants animals capable altering fire regimes. humans know use fires plants animals know well. lignotubers plants thick bark epicormic buds.", "custom_approach": "Serotiny is a typical trait in the crown or high-severity fire regimes.Many species persist in a long-lived soil seed bank, and are stimulated to germinate via thermal scarification or smoke exposure.A less common strategy is fire-stimulated flowering.Species with very high wind dispersal capacity and seed production often are the first arrivals after a fire or other soil disturbance. 
Pine trees, for example, can produce flammable litter layers, which help them to take advantage during the completion with other, less fire adapted, species.Phylogenetic studies indicated that fire adaptive traits have evolved for a long time (tens of millions of years) and these traits are associated with the environment. There are currently more than 40,000 trunks in this colony, and the root system is about 80,000 years old.Serotiny is a seed dispersal strategy in which the dissemination of seeds is stimulated by external triggers (such as fires) rather than by natural maturation. On the other hand, plants in crown or high-severity fire regimes usually have thinner barks because it is meaningless to invest in thick bark without it conferring an advantage in survivorship.Self-pruning is another trait of plants to resist fires. Species with lignotubers are often seen in crown or high-severity fire regimes (e.g., chamise in chaparral).Clonal spread is usually triggered by fires and other forms of removal of above-ground stems. With the help of other fire adaptive traits such as serotiny, flammable trees will occupy the gap created by fires and colonize the habitat.Most animals have sufficient mobility to successfully evade fires. Of the animals killed during the Yellowstone fires of 1988, asphyxiation is believed to be the primary cause of death.More importantly, fires have long-term effects on the post-burn environment. Lignotubers, woody structures around the roots of plants that contains many dormant buds and nutrients such as starch, are very helpful for plants to recover after a fire. For example, fireweed is common in burned areas in the western United States.The fire regime exerts a strong filter on which plant species may occur in a given locality. The strategies can be classified into three types: resist (above-ground parts survive fire), recover (evade mortality by sprouting), and recruit (seed germination after the fire). Ground-dwelling invertebrates are less impacted by fires (due to low thermal diffusivity of soil) while tree-living invertebrates may be killed by crown fires but survive surface fires. But for some shrubs and trees, their buds are located below ground, which are able to re-sprout even when the stems are killed by fire. However, the habitat normally will recover to the original structure.Just like plants may alter fire regimes, animals also have impacts on fire regimes. This trait is common in surface or low-severity fire regimes.Not all plants have thick bark and epicormic buds. Under the protection of bark, living tissue won't have direct contact with fire and the survival rate of plants will be increased. This trait can be found in conifer genera in both the northern and southern hemispheres as well as in flowering plant families (e.g., Banksia). These birds can then capture the escaping insects and rodents.Both plants and animals have multiple strategies to adapt with fires. Self-pruning branches are common in surface or low-severity fire regimes.Epicormic buds are dormant buds under the bark or even deeper. For example, El Ni\u00f1o-induced surface fires in central Brazilian Amazonia have seriously affected the habitats of birds and primates. The bark may be removed or burnt by severe fires, but buds are still able to germinate and recover. Burnt areas usually have better quality and accessibility of foods for animals, which attract animals to forage from nearby habitats. 
Fire plays a role as a filter that can select different fire response traits.Fire impacts plants most directly via heat damage. In habitats with regular surface fires, similar species developed traits such as thick bark and self-pruning branches. Self-pruning branches can reduce the chance for surface fire to reach the canopy because ladder fuels are removed. An example of animals' uses on fires is the black kite, a carnivorous bird which can be found globally. Although it is still not confirmed, black kites were witnessed to carry smoldering sticks to deliberately start fires. Another example will be that trees in surface fire regimes tend to have epicormic buds rather than basal buds. Oaks, for example, produce a litter layer which slows down the fire spread while pines create a flammable duff layer which increases fire spread. In crown fire regimes, pines have evolved traits such as retaining dead branches in order to attract fires. For serotinous plants, seeds are protected by woody structures during fires and will germinate after the fire. In clonal communities, all the individuals developed vegetatively from one single ancestor rather than reproduced sexually. Generally in a habitat previously with more understory species and less open site species, a fire may replace the fauna structure with more open species and much less understory species. Other traits such as serotiny and fire-stimulating flowering also have evolved for millions of years. For example, the Pando is a large clonal aspen colony in Utah that developed from a single quaking aspen tree. They are able to control fires by grazing grasses too short to burn.Fires are not always detrimental. For example, the mixed forests consists of conifers and chaparral can be found in Cascade Mountains. However, new studies indicate that hydraulic failure kills trees during a fire in addition to fire scorching. Birds are attracted by the abundance of food, and they can spread the seeds of herbaceous plants. For example, trees in high-severity regimes usually have thin bark while trees in low-severity regimes typically have thick bark. More profoundly, the composition of species can influence fire regimes even when the climate remains unchanged. Vertebrates such as large mammals and adult birds are usually capable of escaping from fires. Also, large mammals prefer newly burnt areas because they need less vigilance for predators. Buds can turn active and grow due to environmental stress such as fire or drought. Fortunately, thick bark can protect plants because they keep stems away from high temperature. Thick bark is common in species adapted to surface or low-severity fire regimes. Heat resistance is a function of bark thermal diffusivity (a property of the species) and bark thickness (increasing exponentially with bark thickness). The buds from the mother plant can develop into basal shoots or suckers from roots some distance from the plant. Ironically, some trees can \"use\" fires to help them to survive during competitions with other trees. However, young animals which lack mobility may suffer from fires and have high mortality. In case the stem was damaged by a fire, buds will sprout forming basal shoots. However, plants have their own ways to survive a fire event or recover after a fire. But they also cause fires to spread and kill their less flammable neighbors. For example, trees evolved with fire-embracing traits can \"sacrifice\" themselves during fires. 
For example, grazing animals consume fuel for fires and reduce the possibilities of future fires. High temperature cuts the water supply to the canopy and causes the death of the tree. Conifers burn with low-severity surface fires while chaparral burns with high-severity crown fires. This trait can help plants to recover their canopies rapidly after a fire. These traits are inherited from the fire-sensitive ancestors of modern pines. Prairie dogs, for example, are rodents which are common in North America. Some species are capable of using flammability to establish their habitats. Fires also expose animals to dangers such as humans or predators. Unlike animals, plants are not able to move physically during a fire. Moreover, both plants and animals are capable of altering fire regimes. Humans know how to use fires, and plants and animals \"know\" it as well. On the other hand, plants can also alter fire regimes. Aspen and Californian redwoods are two examples of clonal spread. For example, fires can kill trees, and dead trees can attract insects. Many animals play roles as designers of fire regimes. Fires in seldom-burned rainforests can cause disasters.", "combined_approach": "serotiny typical trait crown highseverity fire regimesmany species persist longlived soil seed bank stimulated germinate via thermal scarification smoke exposurea less common strategy firestimulated floweringspecies high wind dispersal capacity seed production often first arrivals fire soil disturbance. pine trees example produce flammable litter layers help take advantage completion less fire adapted speciesphylogenetic studies indicated fire adaptive traits evolved long time tens millions years traits associated environment. currently 40000 trunks colony root system 80000 years oldserotiny seed dispersal strategy dissemination seeds stimulated external triggers fires rather natural maturation. hand plants crown highseverity fire regimes usually thinner barks meaningless invest thick bark without conferring advantage survivorshipselfpruning another trait plants resist fires. species lignotubers often seen crown highseverity fire regimes eg chamise chaparralclonal spread usually triggered fires forms removal aboveground stems. help fire adaptive traits serotiny flammable trees occupy gap created fires colonize habitatmost animals sufficient mobility successfully evade fires. animals killed yellowstone fires 1988 asphyxiation believed primary cause deathmore importantly fires longterm effects postburn environment. lignotubers woody structures around roots plants contains many dormant buds nutrients starch helpful plants recover fire. example fireweed common burned areas western united statesthe fire regime exerts strong filter plant species may occur given locality. strategies classified three types resist aboveground parts survive fire recover evade mortality sprouting recruit seed germination fire. grounddwelling invertebrates less impacted fires due low thermal diffusivity soil treeliving invertebrates may killed crown fires survive surface fires. shrubs trees buds located ground able resprout even stems killed fire. however habitat normally recover original structurejust like plants may alter fire regimes animals also impacts fire regimes. trait common surface lowseverity fire regimesnot plants thick bark epicormic buds. protection bark living tissue wo nt direct contact fire survival rate plants increased. trait found conifer genera northern southern hemispheres well flowering plant families eg banksia. 
birds capture escaping insects rodentsboth plants animals multiple strategies adapt fires. selfpruning branches common surface lowseverity fire regimesepicormic buds dormant buds bark even deeper. example el ni\u00f1oinduced surface fires central brazilian amazonia seriously affected habitats birds primates. bark may removed burnt severe fires buds still able germinate recover. burnt areas usually better quality accessibility foods animals attract animals forage nearby habitats. fire plays role filter select different fire response traitsfire impacts plants directly via heat damage. habitats regular surface fires similar species developed traits thick bark selfpruning branches. selfpruning branches reduce chance surface fire reach canopy ladder fuels removed. example animals uses fires black kite carnivorous bird found globally. although still confirmed black kites witnessed carry smoldering sticks deliberately start fires. another example trees surface fire regimes tend epicormic buds rather basal buds. oaks example produce litter layer slows fire spread pines create flammable duff layer increases fire spread. crown fire regimes pines evolved traits retaining dead branches order attract fires. serotinous plants seeds protected woody structures fires germinate fire. clonal communities individuals developed vegetatively one single ancestor rather reproduced sexually. generally habitat previously understory species less open site species fire may replace fauna structure open species much less understory species. traits serotiny firestimulating flowering also evolved millions years. example pando large clonal aspen colony utah developed single quaking aspen tree. able control fires grazing grasses short burnfires always detrimental. example mixed forests consists conifers chaparral found cascade mountains. however new studies indicate hydraulic failure kills trees fire addition fire scorching. birds attracted abundance food spread seeds herbaceous plants. example trees highseverity regimes usually thin bark trees lowseverity regimes typically thick bark. profoundly composition species influence fire regimes even climate remains unchanged. vertebrates large mammals adult birds usually capable escaping fires. also large mammals prefer newly burnt areas need less vigilance predators. buds turn active grow due environmental stress fire drought. fortunately thick bark protect plants keep stems away high temperature. thick bark common species adapted surface lowseverity fire regimes. heat resistance function bark thermal diffusivity property species bark thickness increasing exponentially bark thickness. buds mother plant develop basal shoots suckers roots distance plant. ironically trees use fires help survive competitions trees. however young animals lack mobility may suffer fires high mortality. case stem damaged fire buds sprout forming basal shoots. however plants ways survive fire event recover fire. also cause fires spread kill less flammable neighbors. example trees evolved fireembracing traits sacrifice fires. example grazing animals consume fuel fires reduce possibilities future fires. high temperature cuts water supply canopy causes death tree. conifers burn lowseverity surface fires chaparral burns highseverity crown fires. trait help plants recover canopies rapidly fire. traits inherited firesensitive ancestors modern pines. prairie dogs example rodents common north america. species capable using flammability establish habitats. fires also expose animals dangers humans predators. 
unlike animals plants able move physically fire. moreover plants animals capable altering fire regimes. humans know use fires plants animals know well. hand plants also alter fire regimes. aspen californian redwoods two examples clonal spread. example fires kill trees dead trees attract insects. many animals play roles designers fire regimes. fires seldomburned rainforests cause disasters."}, {"topic": "Upland and lowland", "summary": "Upland and lowland are conditional descriptions of a plain based on elevation above sea level. In studies of the ecology of freshwater rivers, habitats are classified as upland or lowland.\n\n", "content": "\n== Definitions ==\nUpland and lowland are portions of a plain that are conditionally categorized by their elevation above sea level. Lowlands are usually no higher than 200 m (660 ft), while uplands are somewhere around 200 m (660 ft) to 500 m (1,600 ft). On unusual occasions, certain lowlands such as the Caspian Depression lie below sea level.\nUpland habitats are the cold, clear, rocky, fast-flowing rivers of mountainous areas; lowland habitats are the warm, slow-flowing rivers of relatively flat lowland areas, with water that is frequently colored by sediment and organic matter.\nThese classifications overlap with the geological definitions of \"upland\" and \"lowland\". In geology, an \"upland\" is generally considered to be land that is at a higher elevation than the alluvial plain or stream terrace, which are considered to be \"lowlands\". The term \"bottomland\" refers to low-lying alluvial land near a river.\nMany freshwater fish and invertebrate communities around the world show a pattern of specialization into upland or lowland river habitats. Classifying rivers and streams as upland or lowland is important in freshwater ecology, as the two types of river habitat are very different, and usually support very different populations of fish and invertebrate species.\n\n\n== Uplands ==\nIn freshwater ecology, upland rivers and streams are the fast-flowing rivers and streams that drain elevated or mountainous country, often onto broad alluvial plains (where they become lowland rivers). However, elevation is not the sole determinant of whether a river is upland or lowland. Arguably the most important determinants are those of stream power and stream gradient. Rivers with a course that drops rapidly in elevation will have faster water flow and higher stream power or \"force of water\". This in turn produces the other characteristics of an upland river\u2014an incised course, a river bed dominated by bedrock and coarse sediments, a riffle and pool structure and cooler water temperatures. Rivers with a course that drops in elevation very slowly will have slower water flow and lower force. This in turn produces the other characteristics of a lowland river\u2014a meandering course lacking rapids, a river bed dominated by fine sediments and higher water temperatures. Lowland rivers tend to carry more suspended sediment and organic matter as well, but some lowland rivers have periods of high water clarity in seasonal low-flow periods.\nThe generally clear, cool, fast-flowing waters and bedrock and coarse sediment beds of upland rivers encourage fish species with limited temperature tolerances, high oxygen needs, strong swimming ability and specialised reproductive strategies to prevent eggs or larvae being swept away. 
These characteristics also encourage invertebrate species with limited temperature tolerances, high oxygen needs and ecologies revolving around coarse sediments and interstices or \"gaps\" between those coarse sediments.\nThe term \"upland\" is also used in wetland ecology, where \"upland\" plants indicate an area that is not a wetland.\n\n\n== Lowlands ==\n\nThe generally more turbid, warm, slow-flowing waters and fine sediment beds of lowland rivers encourage fish species with broad temperature tolerances and greater tolerances to low oxygen levels, and life history and breeding strategies adapted to these and other traits of lowland rivers. These characteristics also encourage invertebrate species with broad temperature tolerances and greater tolerances to low oxygen levels and ecologies revolving around fine sediments or alternative habitats such as submerged woody debris (\"snags\") or submergent macrophytes (\"water weed\").\n\n\n== Lowland alluvial plains ==\nAmerican Bottom\u2014flood plain of the Mississippi River in Southern Illinois\nBois Brule Bottom\nBottomland hardwood forest\u2014deciduous hardwood forest found in broad lowland floodplains of the United States\n\n\n== See also ==\nFreshwater biology\nHighland\nMountain river\nRiver reclamation\nRiparian zone\n\n\n== References ==", "content_traditional": "definitions upland lowland portions plain conditionally categorized elevation sea level. lowlands usually higher 200 660 ft uplands somewhere around 200 660 ft 500 1600 ft. unusual occasions certain lowlands caspian depression lie sea level. upland habitats cold clear rocky whose rivers fastflowing mountainous areas lowland habitats warm slowflowing rivers found relatively flat lowland areas water frequently colored sediment organic matter. classifications overlap geological definitions upland lowland. geology upland generally considered land higher elevation alluvial plain stream terrace considered lowlands. term bottomland refers lowlying alluvial land near river. much freshwater fish invertebrate communities around world show pattern specialization upland lowland river habitats. classifying rivers streams upland lowland important freshwater ecology two types river habitat different usually support different populations fish invertebrate species. uplands freshwater ecology upland rivers streams fastflowing rivers streams drain elevated mountainous country often onto broad alluvial plains become lowland rivers. however elevation sole determinant whether river upland lowland. arguably important determinants stream power stream gradient. rivers course drops rapidly elevation faster water flow higher stream power force water. turn produces characteristics upland river \u2014 incised course river bed dominated bedrock coarse sediments riffle pool structure cooler water temperatures. rivers course drops elevation slowly slower water flow lower force. turn produces characteristics lowland river \u2014 meandering course lacking rapids river bed dominated fine sediments higher water temperatures. lowland rivers tend carry suspended sediment organic matter well lowland rivers periods high water clarity seasonal lowflow periods. generally clear cool fastflowing waters bedrock coarse sediment beds upland rivers encourage fish species limited temperature tolerances high oxygen needs strong swimming ability specialised reproductive strategies prevent eggs larvae swept away. 
characteristics also encourage invertebrate species limited temperature tolerances high oxygen needs ecologies revolving around coarse sediments interstices gaps coarse sediments. term upland also used wetland ecology upland plants indicate area wetland. lowlands generally turbid warm slowflowing waters fine sediment beds lowland rivers encourage fish species broad temperature tolerances greater tolerances low oxygen levels life history breeding strategies adapted traits lowland rivers. characteristics also encourage invertebrate species broad temperature tolerances greater tolerances low oxygen levels ecologies revolving around fine sediments alternative habitats submerged woody debris snags submergent macrophytes water weed. lowland alluvial plains american bottom \u2014 flood plain mississippi river southern illinois bois brule bottom bottomland hardwood forest \u2014 deciduous hardwood forest found broad lowland floodplains united states see also freshwater biology highland mountain river river reclamation riparian zone references.", "custom_approach": "Upland and lowland are portions of plain that are conditionally categorized by their elevation above the sea level. Lowlands are usually no higher than 200 m (660 ft), while uplands are somewhere around 200 m (660 ft) to 500 m (1,600 ft). On unusual occasions, certain lowlands such as the Caspian Depression lie below sea level. Upland habitats are cold, clear and rocky whose rivers are fast-flowing in mountainous areas; lowland habitats are warm with slow-flowing rivers found in relatively flat lowland areas, with water that is frequently colored by sediment and organic matter. These classifications overlap with the geological definitions of \"upland\" and \"lowland\". In geology an \"upland\" is generally considered to be land that is at a higher elevation than the alluvial plain or stream terrace, which are considered to be \"lowlands\". The term \"bottomland\" refers to low-lying alluvial land near a river. Much freshwater fish and invertebrate communities around the world show a pattern of specialization into upland or lowland river habitats. Classifying rivers and streams as upland or lowland is important in freshwater ecology, as the two types of river habitat are very different, and usually support very different populations of fish and invertebrate species.In freshwater ecology, upland rivers and streams are the fast-flowing rivers and streams that drain elevated or mountainous country, often onto broad alluvial plains (where they become lowland rivers). However, elevation is not the sole determinant of whether a river is upland or lowland. Arguably the most important determinants are those of stream power and stream gradient. Rivers with a course that drops rapidly in elevation will have faster water flow and higher stream power or \"force of water\". This in turn produces the other characteristics of an upland river\u2014an incised course, a river bed dominated by bedrock and coarse sediments, a riffle and pool structure and cooler water temperatures. Rivers with a course that drops in elevation very slowly will have slower water flow and lower force. This in turn produces the other characteristics of a lowland river\u2014a meandering course lacking rapids, a river bed dominated by fine sediments and higher water temperatures. Lowland rivers tend to carry more suspended sediment and organic matter as well, but some lowland rivers have periods of high water clarity in seasonal low-flow periods. 
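A small sketch of the two ideas discussed above: classifying a site by the elevation bands given under Definitions (roughly, lowland up to about 200 m and upland from about 200 m to about 500 m), and comparing reaches by stream power. The text names stream power and stream gradient as the key determinants but gives no formula, so the standard geomorphological definition of gross stream power per unit channel length (rho * g * Q * S) is assumed here; the discharge and slope values are invented.

```python
# Rough sketch: elevation-band classification (thresholds from the Definitions
# section, treated as approximate) plus a stream-power comparison. The
# stream-power formula (omega = rho * g * Q * S) is the standard
# geomorphological definition, assumed here; the text only names stream power
# and gradient as determinants. All example values are invented.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def classify_by_elevation(elevation_m):
    """Approximate upland/lowland label from elevation above sea level."""
    if elevation_m <= 200:
        return "lowland"
    if elevation_m <= 500:
        return "upland"
    return "above the upland band described in the text"

def stream_power(discharge_m3s, slope):
    """Gross stream power per unit channel length, in watts per metre."""
    return RHO_WATER * G * discharge_m3s * slope

# An invented steep mountain reach vs. an invented flat alluvial reach.
print(classify_by_elevation(350), stream_power(discharge_m3s=5, slope=0.02))
print(classify_by_elevation(80), stream_power(discharge_m3s=50, slope=0.0005))
```

In this invented example the small, steep reach has the higher stream power despite carrying far less water, consistent with the point above that elevation alone does not determine whether a river behaves as upland or lowland.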
The generally clear, cool, fast-flowing waters and bedrock and coarse sediment beds of upland rivers encourage fish species with limited temperature tolerances, high oxygen needs, strong swimming ability and specialised reproductive strategies to prevent eggs or larvae being swept away. These characteristics also encourage invertebrate species with limited temperature tolerances, high oxygen needs and ecologies revolving around coarse sediments and interstices or \"gaps\" between those coarse sediments. The term \"upland\" is also used in wetland ecology, where \"upland\" plants indicate an area that is not a wetland.The generally more turbid, warm, slow-flowing waters and fine sediment beds of lowland rivers encourage fish species with broad temperature tolerances and greater tolerances to low oxygen levels, and life history and breeding strategies adapted to these and other traits of lowland rivers. These characteristics also encourage invertebrate species with broad temperature tolerances and greater tolerances to low oxygen levels and ecologies revolving around fine sediments or alternative habitats such as submerged woody debris (\"snags\") or submergent macrophytes (\"water weed\").American Bottom\u2014flood plain of the Mississippi River in Southern Illinois Bois Brule Bottom Bottomland hardwood forest\u2014deciduous hardwood forest found in broad lowland floodplains of the United States", "combined_approach": "upland lowland portions plain conditionally categorized elevation sea level. lowlands usually higher 200 660 ft uplands somewhere around 200 660 ft 500 1600 ft. unusual occasions certain lowlands caspian depression lie sea level. upland habitats cold clear rocky whose rivers fastflowing mountainous areas lowland habitats warm slowflowing rivers found relatively flat lowland areas water frequently colored sediment organic matter. classifications overlap geological definitions upland lowland. geology upland generally considered land higher elevation alluvial plain stream terrace considered lowlands. term bottomland refers lowlying alluvial land near river. much freshwater fish invertebrate communities around world show pattern specialization upland lowland river habitats. classifying rivers streams upland lowland important freshwater ecology two types river habitat different usually support different populations fish invertebrate speciesin freshwater ecology upland rivers streams fastflowing rivers streams drain elevated mountainous country often onto broad alluvial plains become lowland rivers. however elevation sole determinant whether river upland lowland. arguably important determinants stream power stream gradient. rivers course drops rapidly elevation faster water flow higher stream power force water. turn produces characteristics upland river \u2014 incised course river bed dominated bedrock coarse sediments riffle pool structure cooler water temperatures. rivers course drops elevation slowly slower water flow lower force. turn produces characteristics lowland river \u2014 meandering course lacking rapids river bed dominated fine sediments higher water temperatures. lowland rivers tend carry suspended sediment organic matter well lowland rivers periods high water clarity seasonal lowflow periods. generally clear cool fastflowing waters bedrock coarse sediment beds upland rivers encourage fish species limited temperature tolerances high oxygen needs strong swimming ability specialised reproductive strategies prevent eggs larvae swept away. 
characteristics also encourage invertebrate species limited temperature tolerances high oxygen needs ecologies revolving around coarse sediments interstices gaps coarse sediments. term upland also used wetland ecology upland plants indicate area wetlandthe generally turbid warm slowflowing waters fine sediment beds lowland rivers encourage fish species broad temperature tolerances greater tolerances low oxygen levels life history breeding strategies adapted traits lowland rivers. characteristics also encourage invertebrate species broad temperature tolerances greater tolerances low oxygen levels ecologies revolving around fine sediments alternative habitats submerged woody debris snags submergent macrophytes water weedamerican bottom \u2014 flood plain mississippi river southern illinois bois brule bottom bottomland hardwood forest \u2014 deciduous hardwood forest found broad lowland floodplains united states."}, {"topic": "Disruptive selection", "summary": "Disruptive selection, also called diversifying selection, describes changes in population genetics in which extreme values for a trait are favored over intermediate values. In this case, the variance of the trait increases and the population is divided into two distinct groups. In this more individuals acquire peripheral character value at both ends of the distribution curve.", "content": "\n\n\n== Overview ==\nNatural selection is known to be one of the most important biological processes behind evolution. There are many variations of traits, and some cause greater or lesser reproductive success of the individual. The effect of selection is to promote certain alleles, traits, and individuals that have a higher chance to survive and reproduce in their specific environment. Since the environment has a carrying capacity, nature acts on this mode of selection on individuals to let only the most fit offspring survive and reproduce to their full potential. The more advantageous the trait is the more common it will become in the population. Disruptive selection is a specific type of natural selection that actively selects against the intermediate in a population, favoring both extremes of the spectrum.\nDisruptive selection is inferred to oftentimes lead to sympatric speciation through a phyletic gradualism mode of evolution. Disruptive selection can be caused or influenced by multiple factors and also have multiple outcomes, in addition to speciation. Individuals within the same environment can develop a preference for extremes of a trait, against the intermediate. Selection can act on having divergent body morphologies in accessing food, such as beak and dental structure. It is seen that often this is more prevalent in environments where there is not a wide clinal range of resources, causing heterozygote disadvantage or selection favoring homozygotes.\nNiche partitioning allows for selection of differential patterns of resource usage, which can drive speciation. To the contrast, niche conservation pulls individuals toward ancestral ecological traits in an evolutionary tug-of-war. Also, nature tends to have a 'jump on the band wagon' perspective when something beneficial is found. This can lead to the opposite occurring with disruptive selection eventually selecting against the average; when everyone starts taking advantage of that resource it will become depleted and the extremes will be favored. 
Furthermore, gradualism is a more realistic view when looking at speciation as compared to punctuated equilibrium.\nDisruptive selection can initially intensify divergence rapidly because it acts on alleles that already exist; it does not have to wait for new alleles to arise by mutation, which takes a long time. Complete reproductive isolation usually does not occur for many generations, but behavioral or morphological differences generally keep the diverging groups from interbreeding. Furthermore, hybrids generally have reduced fitness, which promotes reproductive isolation.\n\n\n== Example ==\nSuppose there is a population of rabbits. The colour of the rabbits is governed by two incompletely dominant traits: black fur, represented by \"B\", and white fur, represented by \"b\". A rabbit in this population with a genotype of \"BB\" would have a phenotype of black fur, a genotype of \"Bb\" would have grey fur (a display of both black and white), and a genotype of \"bb\" would have white fur.\nIf this population of rabbits occurred in an environment that had areas of black rocks as well as areas of white rocks, the rabbits with black fur would be able to hide from predators amongst the black rocks, and the rabbits with white fur likewise amongst the white rocks. The rabbits with grey fur, however, would stand out in all areas of the habitat, and would thereby suffer greater predation.\nAs a consequence of this type of selective pressure, our hypothetical rabbit population would be disruptively selected for extreme values of the fur colour trait: white or black, but not grey. This is an example of underdominance (heterozygote disadvantage) leading to disruptive selection.\n\n\n== Sympatric speciation ==\nIt is believed that disruptive selection is one of the main forces that drive sympatric speciation in natural populations. The pathways that lead from disruptive selection to sympatric speciation are seldom prone to deviation; such speciation is a domino effect that depends on the consistency of each distinct variable. These pathways are the result of disruptive selection in intraspecific competition; it may cause reproductive isolation, and finally culminate in sympatric speciation.\nIt is important to keep in mind that disruptive selection does not always have to be based on intraspecific competition. It is also important to note that this type of natural selection operates much like the other modes of selection. Where it is not the major factor, intraspecific competition can be discounted in assessing the operative aspects of the course of adaptation. For example, what may drive disruptive selection instead of intraspecific competition might be polymorphisms that lead to reproductive isolation, and thence to speciation. When disruptive selection is based on intraspecific competition, the resulting selection in turn promotes ecological niche diversification and polymorphisms. If multiple morphs (phenotypic forms) occupy different niches, such separation could be expected to promote reduced competition for resources. Disruptive selection is seen more often in high-density populations than in low-density populations because intraspecific competition tends to be more intense within higher-density populations. This is because higher-density populations often imply more competition for resources. The resulting competition drives polymorphisms to exploit different niches or changes in niches in order to avoid competition.
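Returning to the rabbit example above, a minimal numerical sketch of selection against the grey heterozygote (underdominance) can make the dynamics concrete. The fitness values below are assumed for illustration and do not come from the text; the point is simply that whichever allele starts rarer tends to be lost, so both extremes can only persist together if something like assortative mating intervenes.

```python
# Minimal sketch (assumed, illustrative fitness values): allele-frequency change
# under underdominance, i.e. the grey (Bb) heterozygote is less fit than either
# the black (BB) or white (bb) homozygote, as in the rabbit example above.

def next_p(p: float, w_BB: float = 1.0, w_Bb: float = 0.7, w_bb: float = 1.0) -> float:
    """One generation of selection on the frequency p of allele 'B' (random mating)."""
    q = 1.0 - p
    mean_w = p * p * w_BB + 2 * p * q * w_Bb + q * q * w_bb
    return (p * p * w_BB + p * q * w_Bb) / mean_w

for start in (0.4, 0.5, 0.6):   # below, at, and above the unstable point
    p = start
    for _ in range(50):
        p = next_p(p)
    print(f"start p(B) = {start:.1f} -> after 50 generations p(B) = {p:.3f}")

# Starting below 0.5 the 'B' allele is lost; above 0.5 it is fixed; 0.5 is an
# unstable equilibrium. This is why some additional force (e.g. assortative
# mating) is needed for both extreme morphs to persist in one population.
```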
If one morph has no need for resources used by another morph, then it is likely that neither would experience pressure to compete or interact, thereby supporting the persistence and possibly the intensification of the distinctness of the two morphs within the population. This theory does not necessarily have a lot of supporting evidence in natural populations, but it has been seen many times in experimental situations using existing populations. These experiments further support that, under the right situations (as described above), this theory could prove to be true in nature.When intraspecific competition is not at work disruptive selection can still lead to sympatric speciation and it does this through maintaining polymorphisms. Once the polymorphisms are maintained in the population, if assortative mating is taking place, then this is one way that disruptive selection can lead in the direction of sympatric speciation. If different morphs have different mating preferences then assortative mating can occur, especially if the polymorphic trait is a \"magic trait\", meaning a trait that is under ecological selection and in turn has a side effect on reproductive behavior. In a situation where the polymorphic trait is not a magic trait then there has to be some kind of fitness penalty for those individuals who do not mate assortatively and a mechanism that causes assortative mating has to evolve in the population. For example, if a species of butterflies develops two kinds of wing patterns, crucial to mimicry purposes in their preferred habitat, then mating between two butterflies of different wing patterns leads to an unfavorable heterozygote. Therefore, butterflies will tend to mate with others of the same wing pattern promoting increased fitness, eventually eliminating the heterozygote altogether. This unfavorable heterozygote generates pressure for a mechanism that cause assortative mating which will then lead to reproductive isolation due to the production of post-mating barriers. It is actually fairly common to see sympatric speciation when disruptive selection is supporting two morphs, specifically when the phenotypic trait affects fitness rather than mate choice.In both situations, one where intraspecific competition is at work and the other where it is not, if all these factors are in place, they will lead to reproductive isolation, which can lead to sympatric speciation.\n\n\n== Other outcomes ==\npolymorphism\nsexual dimorphism\nphenotypic plasticity\n\n\n== Significance ==\nDisruptive selection is of particular significance in the history of evolutionary study, as it is involved in one of evolution's \"cardinal cases\", namely the finch populations observed by Darwin in the Gal\u00e1pagos.\nHe observed that the species of finches were similar enough to ostensibly have been descended from a single species. However, they exhibited disruptive variation in beak size. This variation appeared to be adaptively related to the seed size available on the respective islands (big beaks for big seeds, small beaks for small seeds). Medium beaks had difficulty retrieving small seeds and were also not tough enough for the bigger seeds, and were hence maladaptive.\nWhile it is true that disruptive selection can lead to speciation, this is not as quick or straightforward of a process as other types of speciation or evolutionary change. This introduces the topic of gradualism, which is a slow but continuous accumulation of changes over long periods of time. 
This is largely because the results of disruptive selection are less stable than the results of directional selection (directional selection favors individuals at only one end of the spectrum).\nFor example, let us take the mathematically straightforward yet biologically improbable case of the rabbits: Suppose directional selection were taking place. The field only has dark rocks in it, so the darker the rabbit, the more effectively it can hide from predators. Eventually there will be a lot of black rabbits in the population (hence many \"B\" alleles) and a lesser amount of grey rabbits (who contribute 50% chromosomes with \"B\" allele and 50% chromosomes with \"b\" allele to the population). There will be few white rabbits (not very many contributors of chromosomes with \"b\" allele to the population). This could eventually lead to a situation in which chromosomes with \"b\" allele die out, making black the only possible color for all subsequent rabbits. The reason for this is that there is nothing \"boosting\" the level of \"b\" chromosomes in the population. They can only go down, and eventually die out.\nConsider now the case of disruptive selection. The result is equal numbers of black and white rabbits, and hence equal numbers of chromosomes with \"B\" or \"b\" allele, still floating around in that population. Every time a white rabbit mates with a black one, only gray rabbits results. So, in order for the results to \"click\", there needs to be a force causing white rabbits to choose other white rabbits, and black rabbits to choose other black ones. In the case of the finches, this \"force\" was geographic/niche isolation. This leads one to think that disruptive selection can't happen and is normally because of species being geographically isolated, directional selection or by stabilising selection.\n\n\n== See also ==\n\nCharacter displacement\nBalancing selection\nDirectional selection\nNegative selection (natural selection)\nStabilizing selection\nSympatric speciation\nFluctuating selection\nSelection\n\n\n== References ==", "content_traditional": "actually fairly common see sympatric speciation disruptive selection supporting two morphs specifically phenotypic trait affects fitness rather mate choicein situations one intraspecific competition work factors place lead reproductive isolation lead sympatric speciation. experiments support right situations described theory could prove true naturewhen intraspecific competition work disruptive selection still lead sympatric speciation maintaining polymorphisms. one morph need resources used another morph likely neither would experience pressure compete interact thereby supporting persistence possibly intensification distinctness two morphs within population. situation polymorphic trait magic trait kind fitness penalty individuals mate assortatively mechanism causes assortative mating evolve population. example may drive disruptive selection instead intraspecific competition might polymorphisms lead reproductive isolation thence speciationwhen disruptive selection based intraspecific competition resulting selection turn promotes ecological niche diversification polymorphisms. outcomes polymorphism sexual dimorphism phenotypic plasticity significance disruptive selection particular significance history evolutionary study involved one evolutions cardinal cases namely finch populations observed darwin gal\u00e1pagos. 
lead opposite occurring disruptive selection eventually selecting average everyone starts taking advantage resource become depleted extremes favored. consequence type selective pressure hypothetical rabbit population would disruptively selected extreme values fur colour trait white black grey. different morphs different mating preferences assortative mating occur especially polymorphic trait magic trait meaning trait ecological selection turn side effect reproductive behavior. polymorphisms maintained population assortative mating taking place one way disruptive selection lead direction sympatric speciation. since environment carrying capacity nature acts mode selection individuals let fit offspring survive reproduce full potential. example species butterflies develops two kinds wing patterns crucial mimicry purposes preferred habitat mating two butterflies different wing patterns leads unfavorable heterozygote. pathways lead disruptive selection sympatric speciation seldom prone deviation speciation domino effect depends consistency distinct variable. seen often prevalent environments wide clinal range resources causing heterozygote disadvantage selection favoring homozygotes. theory necessarily lot supporting evidence natural populations seen many times experimental situations using existing populations. population rabbits occurred environment areas black rocks well areas white rocks rabbits black fur would able hide predators amongst black rocks rabbits white fur likewise amongst white rocks. unfavorable heterozygote generates pressure mechanism cause assortative mating lead reproductive isolation due production postmating barriers. eventually lot black rabbits population hence many b alleles lesser amount grey rabbits contribute 50 chromosomes b allele 50 chromosomes b allele population. could eventually lead situation chromosomes b allele die making black possible color subsequent rabbits. leads one think disruptive selection ca nt happen normally species geographically isolated directional selection stabilising selection. true disruptive selection lead speciation quick straightforward process types speciation evolutionary change. usually complete reproductive isolation occur many generations behavioral morphological differences separate species reproducing generally. effect selection promote certain alleles traits individuals higher chance survive reproduce specific environment. pathways result disruptive selection intraspecific competition may cause reproductive isolation finally culminate sympatric speciation. therefore butterflies tend mate others wing pattern promoting increased fitness eventually eliminating heterozygote altogether. example let us take mathematically straightforward yet biologically improbable case rabbits suppose directional selection taking place. largely results disruptive selection less stable results directional selection directional selection favors individuals one end spectrum. multiple morphs phenotypic forms occupy different niches separation could expected promote reduced competition resources. disruptive selection specific type natural selection actively selects intermediate population favoring extremes spectrum. major factor intraspecific competition discounted assessing operative aspects course adaptation. important keep mind disruptive selection always based intraspecific competition. rabbits grey fur however would stand areas habitat would thereby suffer greater predation. 
order results click needs force causing white rabbits choose white rabbits black rabbits choose black ones. result equal numbers black white rabbits hence equal numbers chromosomes b b allele still floating around population. variation appeared adaptively related seed size available respective islands big beaks big seeds small beaks small seeds. disruptive selection seen often high density populations rather low density populations intraspecific competition tends intense within higher density populations. field dark rocks darker rabbit effectively hide predators. medium beaks difficulty retrieving small seeds also tough enough bigger seeds hence maladaptive. introduces topic gradualism slow continuous accumulation changes long periods time. selection act divergent body morphologies accessing food beak dental structure. disruptive selection initially rapidly intensify divergence manipulating alleles already exist. white rabbits many contributors chromosomes b allele population. contrast niche conservation pulls individuals toward ancestral ecological traits evolutionary tugofwar. many variations traits cause greater lesser reproductive success individual. observed species finches similar enough ostensibly descended single species. disruptive selection caused influenced multiple factors also multiple outcomes addition speciation. also nature tends jump band wagon perspective something beneficial found. sympatric speciation believed disruptive selection one main forces drive sympatric speciation natural populations.", "custom_approach": "It is actually fairly common to see sympatric speciation when disruptive selection is supporting two morphs, specifically when the phenotypic trait affects fitness rather than mate choice.In both situations, one where intraspecific competition is at work and the other where it is not, if all these factors are in place, they will lead to reproductive isolation, which can lead to sympatric speciation.polymorphism sexual dimorphism phenotypic plasticityDisruptive selection is of particular significance in the history of evolutionary study, as it is involved in one of evolution's \"cardinal cases\", namely the finch populations observed by Darwin in the Gal\u00e1pagos. These experiments further support that, under the right situations (as described above), this theory could prove to be true in nature.When intraspecific competition is not at work disruptive selection can still lead to sympatric speciation and it does this through maintaining polymorphisms. If one morph has no need for resources used by another morph, then it is likely that neither would experience pressure to compete or interact, thereby supporting the persistence and possibly the intensification of the distinctness of the two morphs within the population. In a situation where the polymorphic trait is not a magic trait then there has to be some kind of fitness penalty for those individuals who do not mate assortatively and a mechanism that causes assortative mating has to evolve in the population. For example, what may drive disruptive selection instead of intraspecific competition might be polymorphisms that lead to reproductive isolation, and thence to speciation.When disruptive selection is based on intraspecific competition, the resulting selection in turn promotes ecological niche diversification and polymorphisms. 
This can lead to the opposite occurring with disruptive selection eventually selecting against the average; when everyone starts taking advantage of that resource it will become depleted and the extremes will be favored. As a consequence of this type of selective pressure, our hypothetical rabbit population would be disruptively selected for extreme values of the fur colour trait: white or black, but not grey. If different morphs have different mating preferences then assortative mating can occur, especially if the polymorphic trait is a \"magic trait\", meaning a trait that is under ecological selection and in turn has a side effect on reproductive behavior. Once the polymorphisms are maintained in the population, if assortative mating is taking place, then this is one way that disruptive selection can lead in the direction of sympatric speciation. Since the environment has a carrying capacity, nature acts on this mode of selection on individuals to let only the most fit offspring survive and reproduce to their full potential. For example, if a species of butterflies develops two kinds of wing patterns, crucial to mimicry purposes in their preferred habitat, then mating between two butterflies of different wing patterns leads to an unfavorable heterozygote. The pathways that lead from disruptive selection to sympatric speciation seldom are prone to deviation; such speciation is a domino effect that depends on the consistency of each distinct variable. This is an example of underdominance (heterozygote disadvantage) leading to disruptive selection.It is believed that disruptive selection is one of the main forces that drive sympatric speciation in natural populations. It is seen that often this is more prevalent in environments where there is not a wide clinal range of resources, causing heterozygote disadvantage or selection favoring homozygotes. This theory does not necessarily have a lot of supporting evidence in natural populations, but it has been seen many times in experimental situations using existing populations. If this population of rabbits occurred in an environment that had areas of black rocks as well as areas of white rocks, the rabbits with black fur would be able to hide from predators amongst the black rocks, and the rabbits with white fur likewise amongst the white rocks. This unfavorable heterozygote generates pressure for a mechanism that cause assortative mating which will then lead to reproductive isolation due to the production of post-mating barriers. Eventually there will be a lot of black rabbits in the population (hence many \"B\" alleles) and a lesser amount of grey rabbits (who contribute 50% chromosomes with \"B\" allele and 50% chromosomes with \"b\" allele to the population). This could eventually lead to a situation in which chromosomes with \"b\" allele die out, making black the only possible color for all subsequent rabbits. This leads one to think that disruptive selection can't happen and is normally because of species being geographically isolated, directional selection or by stabilising selection. While it is true that disruptive selection can lead to speciation, this is not as quick or straightforward of a process as other types of speciation or evolutionary change. Usually complete reproductive isolation does not occur until many generations, but behavioral or morphological differences separate the species from reproducing generally. 
The effect of selection is to promote certain alleles, traits, and individuals that have a higher chance to survive and reproduce in their specific environment. These pathways are the result of disruptive selection in intraspecific competition; it may cause reproductive isolation, and finally culminate in sympatric speciation. For example, let us take the mathematically straightforward yet biologically improbable case of the rabbits: Suppose directional selection were taking place. Therefore, butterflies will tend to mate with others of the same wing pattern promoting increased fitness, eventually eliminating the heterozygote altogether. This is largely because the results of disruptive selection are less stable than the results of directional selection (directional selection favors individuals at only one end of the spectrum). If multiple morphs (phenotypic forms) occupy different niches, such separation could be expected to promote reduced competition for resources. Disruptive selection is a specific type of natural selection that actively selects against the intermediate in a population, favoring both extremes of the spectrum. Where it is not the major factor, intraspecific competition can be discounted in assessing the operative aspects of the course of adaptation. It is important to keep in mind that disruptive selection does not always have to be based on intraspecific competition. The rabbits with grey fur, however, would stand out in all areas of the habitat, and would thereby suffer greater predation. So, in order for the results to \"click\", there needs to be a force causing white rabbits to choose other white rabbits, and black rabbits to choose other black ones. The result is equal numbers of black and white rabbits, and hence equal numbers of chromosomes with \"B\" or \"b\" allele, still floating around in that population. Disruptive selection is seen more often in high density populations rather than in low density populations because intraspecific competition tends to be more intense within higher density populations. This variation appeared to be adaptively related to the seed size available on the respective islands (big beaks for big seeds, small beaks for small seeds). The field only has dark rocks in it, so the darker the rabbit, the more effectively it can hide from predators. Medium beaks had difficulty retrieving small seeds and were also not tough enough for the bigger seeds, and were hence maladaptive. This introduces the topic of gradualism, which is a slow but continuous accumulation of changes over long periods of time. Selection can act on having divergent body morphologies in accessing food, such as beak and dental structure. Disruptive selection can initially rapidly intensify divergence; this is because it is only manipulating alleles that already exist. There will be few white rabbits (not very many contributors of chromosomes with \"b\" allele to the population). To the contrast, niche conservation pulls individuals toward ancestral ecological traits in an evolutionary tug-of-war. There are many variations of traits, and some cause greater or lesser reproductive success of the individual. Furthermore, generally hybrids have reduced fitness which promotes reproductive isolation.Suppose there is a population of rabbits. He observed that the species of finches were similar enough to ostensibly have been descended from a single species. 
Disruptive selection can be caused or influenced by multiple factors and also have multiple outcomes, in addition to speciation.", "combined_approach": "actually fairly common see sympatric speciation disruptive selection supporting two morphs specifically phenotypic trait affects fitness rather mate choicein situations one intraspecific competition work factors place lead reproductive isolation lead sympatric speciationpolymorphism sexual dimorphism phenotypic plasticitydisruptive selection particular significance history evolutionary study involved one evolutions cardinal cases namely finch populations observed darwin gal\u00e1pagos. experiments support right situations described theory could prove true naturewhen intraspecific competition work disruptive selection still lead sympatric speciation maintaining polymorphisms. one morph need resources used another morph likely neither would experience pressure compete interact thereby supporting persistence possibly intensification distinctness two morphs within population. situation polymorphic trait magic trait kind fitness penalty individuals mate assortatively mechanism causes assortative mating evolve population. example may drive disruptive selection instead intraspecific competition might polymorphisms lead reproductive isolation thence speciationwhen disruptive selection based intraspecific competition resulting selection turn promotes ecological niche diversification polymorphisms. lead opposite occurring disruptive selection eventually selecting average everyone starts taking advantage resource become depleted extremes favored. consequence type selective pressure hypothetical rabbit population would disruptively selected extreme values fur colour trait white black grey. different morphs different mating preferences assortative mating occur especially polymorphic trait magic trait meaning trait ecological selection turn side effect reproductive behavior. polymorphisms maintained population assortative mating taking place one way disruptive selection lead direction sympatric speciation. since environment carrying capacity nature acts mode selection individuals let fit offspring survive reproduce full potential. example species butterflies develops two kinds wing patterns crucial mimicry purposes preferred habitat mating two butterflies different wing patterns leads unfavorable heterozygote. pathways lead disruptive selection sympatric speciation seldom prone deviation speciation domino effect depends consistency distinct variable. example underdominance heterozygote disadvantage leading disruptive selectionit believed disruptive selection one main forces drive sympatric speciation natural populations. seen often prevalent environments wide clinal range resources causing heterozygote disadvantage selection favoring homozygotes. theory necessarily lot supporting evidence natural populations seen many times experimental situations using existing populations. population rabbits occurred environment areas black rocks well areas white rocks rabbits black fur would able hide predators amongst black rocks rabbits white fur likewise amongst white rocks. unfavorable heterozygote generates pressure mechanism cause assortative mating lead reproductive isolation due production postmating barriers. eventually lot black rabbits population hence many b alleles lesser amount grey rabbits contribute 50 chromosomes b allele 50 chromosomes b allele population. 
could eventually lead situation chromosomes b allele die making black possible color subsequent rabbits. leads one think disruptive selection ca nt happen normally species geographically isolated directional selection stabilising selection. true disruptive selection lead speciation quick straightforward process types speciation evolutionary change. usually complete reproductive isolation occur many generations behavioral morphological differences separate species reproducing generally. effect selection promote certain alleles traits individuals higher chance survive reproduce specific environment. pathways result disruptive selection intraspecific competition may cause reproductive isolation finally culminate sympatric speciation. example let us take mathematically straightforward yet biologically improbable case rabbits suppose directional selection taking place. therefore butterflies tend mate others wing pattern promoting increased fitness eventually eliminating heterozygote altogether. largely results disruptive selection less stable results directional selection directional selection favors individuals one end spectrum. multiple morphs phenotypic forms occupy different niches separation could expected promote reduced competition resources. disruptive selection specific type natural selection actively selects intermediate population favoring extremes spectrum. major factor intraspecific competition discounted assessing operative aspects course adaptation. important keep mind disruptive selection always based intraspecific competition. rabbits grey fur however would stand areas habitat would thereby suffer greater predation. order results click needs force causing white rabbits choose white rabbits black rabbits choose black ones. result equal numbers black white rabbits hence equal numbers chromosomes b b allele still floating around population. disruptive selection seen often high density populations rather low density populations intraspecific competition tends intense within higher density populations. variation appeared adaptively related seed size available respective islands big beaks big seeds small beaks small seeds. field dark rocks darker rabbit effectively hide predators. medium beaks difficulty retrieving small seeds also tough enough bigger seeds hence maladaptive. introduces topic gradualism slow continuous accumulation changes long periods time. selection act divergent body morphologies accessing food beak dental structure. disruptive selection initially rapidly intensify divergence manipulating alleles already exist. white rabbits many contributors chromosomes b allele population. contrast niche conservation pulls individuals toward ancestral ecological traits evolutionary tugofwar. many variations traits cause greater lesser reproductive success individual. furthermore generally hybrids reduced fitness promotes reproductive isolationsuppose population rabbits. observed species finches similar enough ostensibly descended single species. disruptive selection caused influenced multiple factors also multiple outcomes addition speciation."}, {"topic": "Functional group (ecology)", "summary": "A functional group is merely a set of species, or collection of organisms, that share alike characteristics within a community. Ideally, the lifeforms would perform equivalent tasks based on domain forces, rather than a common ancestor or evolutionary relationship. This could potentially lead to analogous structures that overrule the possibility of homology. 
More specifically, these organisms respond in similar ways to external factors in the system they inhabit. Because most of these organisms share an ecological niche, it is reasonable to assume they require similar structures in order to achieve the greatest possible fitness. This includes the ability to reproduce successfully and to sustain life by avoiding shared predators and exploiting shared food sources.", "content": "\n\n\n== Scientific investigation ==\nRather than being defined from a set of theories, functional groups are directly observed and determined by research specialists. It is important that this information is observed first-hand so that it can be treated as usable evidence. Behavior and overall contribution to others are common key points to look for. Individuals use the corresponding perceived traits to further link genetic profiles to one another. Although the life-forms themselves are different, variables based upon overall function and performance are interchangeable. These groups share an indistinguishable part within their energy flow, providing a key position within food chains and relationships within their environment(s). What is an ecosystem, and why is it important? An ecosystem is the biological organization that defines and expands on various environmental factors, abiotic and biotic, that interact simultaneously. Whether it is a producer or a consumer, every organism maintains a critical position in the ongoing survival of its own surroundings. In this context, a functional group plays a very specific role within any given ecosystem and in the cycling of energy through it.\n\n\n== Categories ==\nThere are generally two types of functional groups, spanning flora and specific animal populations. Groups that relate to vegetation science, or flora, are known as plant functional types. Also referred to as PFTs for short, these often share identical photosynthetic processes and require comparable nutrients. As an example, plants that undergo photosynthesis share an identical purpose in producing chemical energy for others. In contrast, those within the animal science range are called guilds, typically defined by shared feeding types. This can be illustrated simply by viewing trophic levels. Examples include primary consumers, secondary consumers, tertiary consumers, and quaternary consumers.\n\n\n== Diversity ==\nFunctional diversity is often referred to as the \"value and the range of those species and organismal traits that influence ecosystem functioning\". Traits that make an organism unique, for example the way it moves, gathers resources, reproduces, or the time of year it is active, add to the overall diversity of an entire ecosystem, and therefore enhance the overall function, or productivity, of that ecosystem. Functional diversity increases the overall productivity of an ecosystem by allowing for an increase in niche occupation. Species have evolved to be more diverse through each epoch of time, with plants and insects having some of the most diverse families discovered thus far. The unique traits of an organism can allow a new niche to be occupied, allow for better defense against predators, and potentially lead to specialization. Organismal-level functional diversity, which adds to the overall functional diversity of an ecosystem, is important for conservation efforts, especially in systems used for human consumption.
Functional diversity can be difficult to measure accurately, but when done correctly, it provides useful insight into the overall function and stability of an ecosystem.\n\n\n== Redundancy ==\nFunctional redundancy refers to the phenomenon that species in the same ecosystem fill similar roles, which results in a sort of \"insurance\" in the ecosystem. Redundant species can easily do the job of a similar species from the same functional niche. This is possible because similar species have adapted to fill the same niche over time. Functional redundancy varies across ecosystems and can vary from year to year depending on multiple factors, including habitat availability, overall species diversity, competition among species for resources, and anthropogenic influence. This variation can lead to a fluctuation in overall ecosystem production. It is not always known how many species occupy a functional niche, and how much redundancy, if any, is occurring in each niche in an ecosystem. It is hypothesized that each important functional niche is filled by multiple species. As with functional diversity, there is no one clear method for calculating functional redundancy accurately, which can be problematic. One method is to account for the number of species occupying a functional niche, as well as the abundance of each species. This can indicate how many total individuals in an ecosystem are performing one function.\n\n\n== Effects on conservation ==\nStudies relating to functional diversity and redundancy occur in a large proportion of conservation and ecological research. As the human population increases, the need for ecosystem function increases with it. As habitat destruction and modification continue to increase and suitable habitat for many species continues to decrease, this research becomes even more important. As the human population continues to expand and urbanization rises, native and natural landscapes are disappearing, replaced with modified and managed land for human consumption. Alterations to landscapes are often accompanied by negative side effects, including fragmentation, species losses, and nutrient runoff, which can affect the stability and productivity of an ecosystem and reduce its functional diversity and functional redundancy by decreasing species diversity.\nIt has been shown that intense land use affects both species diversity and functional overlap, leaving the ecosystem and the organisms in it vulnerable. Specifically, bee species, which we rely on for pollination services, have both lower functional diversity and lower species diversity in managed landscapes when compared to natural habitats, indicating that anthropogenic change can be detrimental to organismal functional diversity, and therefore to overall ecosystem functional diversity. Additional research demonstrated that the functional redundancy of herbivorous insects in streams varies with stream velocity, demonstrating that environmental factors can alter functional overlap. When conservation efforts begin, it is still up for debate whether preserving specific species or functional traits is the more beneficial approach for the preservation of ecosystem function. Higher species diversity can lead to an increase in overall ecosystem productivity, but does not necessarily ensure the security of functional overlap.
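The species-count-and-abundance bookkeeping mentioned in the Redundancy section above can be sketched as follows; the functional-group labels and abundance figures are invented purely for illustration.

```python
# Minimal sketch of the redundancy bookkeeping described above: for each
# functional niche, count how many species occupy it and how many total
# individuals perform that function. Group labels and abundances are
# illustrative assumptions, not data from the text.
from collections import defaultdict

# (species, functional group, abundance)
observations = [
    ("bee sp. A", "pollinator", 120),
    ("bee sp. B", "pollinator", 45),
    ("hoverfly sp. C", "pollinator", 200),
    ("beetle sp. D", "detritivore", 80),
]

species_per_group = defaultdict(set)
individuals_per_group = defaultdict(int)
for species, group, abundance in observations:
    species_per_group[group].add(species)
    individuals_per_group[group] += abundance

for group in species_per_group:
    print(group,
          "species:", len(species_per_group[group]),        # crude redundancy
          "individuals:", individuals_per_group[group])     # total performing the function
```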
In ecosystems with high redundancy, losing a species (which lowers overall functional diversity) will not always lower overall ecosystem function due to high functional overlap, and thus in this instance it is most important to conserve a group, rather than an individual. In ecosystems with dominant species, which contribute to a majority of the biomass output, it may be more beneficial to conserve this single species, rather than a functional group. The ecological concept of keystone species was redefined based on the presence of species with non redundant trophic dynamics with measured biomass dominance within functional groups, which highlights the conservation benefits of protecting both species and their respective functional group.\n\n\n=== Challenge ===\nUnderstanding functional diversity and redundancy, and the roles each play in conservation efforts is often hard to accomplish because the tools with which we measure diversity and redundancy cannot be used interchangeably. Due to this, recent empirical work most often analyzes the effects of either functional diversity or functional redundancy, but not both. This does not create a complete picture of the factors influencing ecosystem production. In ecosystems with similar and diverse vegetation, functional diversity is more important for overall ecosystem stability and productivity. Yet, in contrast, functional diversity of native bee species in highly managed landscapes provided evidence for higher functional redundancy leading to higher fruit production, something humans rely heavily on for food consumption. A recent paper has stated that until a more accurate measuring technique is universally used, it is too early to determine which species, or functional groups, are most vulnerable and susceptible to extinction. Overall, understanding how extinction affects ecosystems, and which traits are most vulnerable can protect ecosystems as a whole.\n\n\n== References ==", "content_traditional": "scientific investigation rather idea concept based upon set theories functional groups directly observed determined research specialists. important information witnessed firsthand order state usable evidence. behavior overall contribution others common key points look. individuals use corresponding perceived traits link genetic profiles one another. although lifeforms different variables based upon overall function performance interchangeable. groups share indistinguishable part within energy flow providing key position within food chains relationships within environmentswhat ecosystem important. ecosystem biological organization defines expands various environment factors abiotic biotic relate simultaneous interaction. whether producer relative consumer every piece life maintains critical position ongoing survival rates surroundings. pertains functional groups shares specific role within given ecosystem process cycling vitality. categories generally two types functional groups range flora specific animal populations. groups relate vegetation science flora known plant functional types. also referred pft short often share identical photosynthetic processes require comparable nutrients. example plants undergo photosynthesis share identical purpose producing chemical energy others. contrast within animal science range called guilds typically sharing feeding types. could easily simplified viewing trophic levels. examples include primary consumers secondary consumers tertiary consumers quaternary consumers. 
diversity functional diversity often referred value range species organismal traits influence ecosystem functioning \u201d. traits organism make unique example way moves gathers resources reproduces time year active add overall diversity entire ecosystem therefore enhance overall function productivity ecosystem. functional diversity increases overall productivity ecosystem allowing increase niche occupation. species evolved diverse epoch time plants insects diverse families discovered thus far. unique traits organism allow new niche occupied allow better defense predators potentially lead specialization. organismal level functional diversity adds overall functional diversity ecosystem important conservation efforts especially systems used human consumption. functional diversity difficult measure accurately done correctly provides useful insight overall function stability ecosystem. redundancy functional redundancy refers phenomenon species ecosystem fill similar roles results sort insurance ecosystem. redundant species easily job similar species functional niche. possible similar species adapted fill niche overtime. functional redundancy varies across ecosystems vary year year depending multiple factors including habitat availability overall species diversity competition among species resources anthropogenic influence. variation lead fluctuation overall ecosystem production. always known many species occupy functional niche much redundancy occurring niche ecosystem. hypothesized important functional niche filled multiple species. similar functional diversity one clear method calculating functional redundancy accurately problematic. one method account number species occupying functional niche well abundance species. indicate many total individuals ecosystem performing one function. effects conservation studies relating functional diversity redundancy occur large proportion conservation ecological research. human population increases need ecosystem function subsequently increases. addition habitat destruction modification continue increase suitable habitat many species continues decrease research becomes important. human population continues expand urbanization rise native natural landscapes disappearing replaced modified managed land human consumption. alterations landscapes often accompanied negative side effects including fragmentation species losses nutrient runoff effect stability ecosystem productivity ecosystem functional diversity functional redundancy decreasing species diversity. shown intense land use affects species diversity functional overlap leaving ecosystem organisms vulnerable. specifically bee species rely pollination services lower functional diversity species diversity managed landscapes compared natural habitats indicating anthropogenic change detrimental organismal functional diversity therefore overall ecosystem functional diversity. additional research demonstrated functional redundancy herbaceous insects streams varies due stream velocity demonstrating environmental factors alter functional overlap. conservation efforts begin still debate whether preserving specific species functional traits beneficial approach preservation ecosystem function. higher species diversity lead increase overall ecosystem productivity necessarily insure security functional overlap. ecosystems high redundancy losing species lowers overall functional diversity always lower overall ecosystem function due high functional overlap thus instance important conserve group rather individual. 
ecosystems dominant species contribute majority biomass output may beneficial conserve single species rather functional group. ecological concept keystone species redefined based presence species non redundant trophic dynamics measured biomass dominance within functional groups highlights conservation benefits protecting species respective functional group. challenge understanding functional diversity redundancy roles play conservation efforts often hard accomplish tools measure diversity redundancy used interchangeably. due recent empirical work often analyzes effects either functional diversity functional redundancy. create complete picture factors influencing ecosystem production. ecosystems similar diverse vegetation functional diversity important overall ecosystem stability productivity. yet contrast functional diversity native bee species highly managed landscapes provided evidence higher functional redundancy leading higher fruit production something humans rely heavily food consumption. recent paper stated accurate measuring technique universally used early determine species functional groups vulnerable susceptible extinction. overall understanding extinction affects ecosystems traits vulnerable protect ecosystems whole. references.", "custom_approach": "Rather than the idea of this concept based upon a set of theories, functional groups are directly observed and determined by research specialists. It is important that this information is witnessed first-hand in order to state as usable evidence. Behavior and overall contribution to others are common key points to look for. Individuals use the corresponding perceived traits to further link genetic profiles to one another. Although, the life-forms themselves are different, variables based upon overall function and performance are interchangeable. These groups share an indistinguishable part within their energy flow, providing a key position within food chains and relationships within environment(s).What is an ecosystem and why is that important? An ecosystem is the biological organization that defines and expands on various environment factors- abiotic and biotic, that relate to simultaneous interaction. Whether it be a producer or relative consumer, each and every piece of life maintains a critical position in the ongoing survival rates of its own surroundings. As it pertains, a functional groups shares a very specific role within any given ecosystem and the process of cycling vitality.There are generally two types of functional groups that range between flora and specific animal populations. Groups that relate to vegetation science, or flora, are known as plant functional types. Also referred to as PFT for short, those of such often share identical photosynthetic processes and require comparable nutrients. As an example, plants that undergo photosynthesis share an identical purpose in producing chemical energy for others. In contrast, those within the animal science range are called guilds, typically sharing feeding types. This could be easily simplified when viewing trophic levels. Examples include primary consumers, secondary consumers, tertiary consumers, and quaternary consumers.Functional diversity is often referred to as the \"value and the range of those species and organismal traits that influence ecosystem functioning\u201d. 
Traits of an organism that make it unique, for example, way it moves, gathers resources, reproduces, or the time of year it is active add to the overall diversity of an entire ecosystem, and therefore enhance the overall function, or productivity, of that ecosystem. Functional diversity increases the overall productivity of an ecosystem by allowing for an increase in niche occupation. Species have evolved to be more diverse through each epoch of time, with plants and insects having some of the most diverse families discovered thus far. The unique traits of an organism can allow a new niche to be occupied, allow for better defense against predators, and potentially lead to specialization. Organismal level functional diversity, which adds to the overall functional diversity of an ecosystem, is important for conservation efforts, especially in systems used for human consumption. Functional diversity can be difficult to measure accurately, but when done correctly, it provides useful insight to the overall function and stability of an ecosystem.Functional redundancy refers to the phenomenon that species in the same ecosystem fill similar roles, which results in a sort of \"insurance\" in the ecosystem. Redundant species can easily do the job of a similar species from the same functional niche. This is possible because similar species have adapted to fill the same niche overtime. Functional redundancy varies across ecosystems and can vary from year to year depending on multiple factors including habitat availability, overall species diversity, competition among species for resources, and anthropogenic influence. This variation can lead to a fluctuation in overall ecosystem production. It is not always known how many species occupy a functional niche, and how much, if any, redundancy is occurring in each niche in an ecosystem. It is hypothesized that each important functional niche is filled by multiple species. Similar to functional diversity, there is no one clear method for calculating functional redundancy accurately, which can be problematic. One method is to account for the number of species occupying a functional niche, as well as the abundance of each species. This can indicate how many total individuals in an ecosystem are performing one function.Studies relating to functional diversity and redundancy occur in a large proportion of conservation and ecological research. As the human population increases, the need for ecosystem function subsequently increases. In addition, habitat destruction and modification continue to increase, and suitable habitat for many species continues to decrease, this research becomes more important. As the human population continues to expand, and urbanization is on the rise, native and natural landscapes are disappearing, being replaced with modified and managed land for human consumption. Alterations to landscapes are often accompanied with negative side effects including fragmentation, species losses, and nutrient runoff, which can effect the stability of an ecosystem, productivity of an ecosystem, and the functional diversity and functional redundancy by decreasing species diversity. It has been shown that intense land use affects both the species diversity, and functional overlap, leaving the ecosystem and organisms in it vulnerable. 
Specifically, bee species, which we rely on for pollination services, have both lower functional diversity and species diversity in managed landscapes when compared to natural habitats, indicating that anthropogenic change can be detrimental for organismal functional diversity, and therefore overall ecosystem functional diversity. Additional research demonstrated that the functional redundancy of herbaceous insects in streams varies due to stream velocity, demonstrating that environmental factors can alter functional overlap. When conservation efforts begin, it is still up for debate whether preserving specific species, or functional traits is a more beneficial approach for the preservation of ecosystem function. Higher species, diversity can lead to an increase in overall ecosystem productivity, but does not necessarily insure the security of functional overlap. In ecosystems with high redundancy, losing a species (which lowers overall functional diversity) will not always lower overall ecosystem function due to high functional overlap, and thus in this instance it is most important to conserve a group, rather than an individual. In ecosystems with dominant species, which contribute to a majority of the biomass output, it may be more beneficial to conserve this single species, rather than a functional group. The ecological concept of keystone species was redefined based on the presence of species with non redundant trophic dynamics with measured biomass dominance within functional groups, which highlights the conservation benefits of protecting both species and their respective functional group.Understanding functional diversity and redundancy, and the roles each play in conservation efforts is often hard to accomplish because the tools with which we measure diversity and redundancy cannot be used interchangeably. Due to this, recent empirical work most often analyzes the effects of either functional diversity or functional redundancy, but not both. This does not create a complete picture of the factors influencing ecosystem production. In ecosystems with similar and diverse vegetation, functional diversity is more important for overall ecosystem stability and productivity. Yet, in contrast, functional diversity of native bee species in highly managed landscapes provided evidence for higher functional redundancy leading to higher fruit production, something humans rely heavily on for food consumption. A recent paper has stated that until a more accurate measuring technique is universally used, it is too early to determine which species, or functional groups, are most vulnerable and susceptible to extinction. Overall, understanding how extinction affects ecosystems, and which traits are most vulnerable can protect ecosystems as a whole.", "combined_approach": "rather idea concept based upon set theories functional groups directly observed determined research specialists. important information witnessed firsthand order state usable evidence. behavior overall contribution others common key points look. individuals use corresponding perceived traits link genetic profiles one another. although lifeforms different variables based upon overall function performance interchangeable. groups share indistinguishable part within energy flow providing key position within food chains relationships within environmentswhat ecosystem important. ecosystem biological organization defines expands various environment factors abiotic biotic relate simultaneous interaction. 
whether producer relative consumer every piece life maintains critical position ongoing survival rates surroundings. pertains functional groups shares specific role within given ecosystem process cycling vitalitythere generally two types functional groups range flora specific animal populations. groups relate vegetation science flora known plant functional types. also referred pft short often share identical photosynthetic processes require comparable nutrients. example plants undergo photosynthesis share identical purpose producing chemical energy others. contrast within animal science range called guilds typically sharing feeding types. could easily simplified viewing trophic levels. examples include primary consumers secondary consumers tertiary consumers quaternary consumersfunctional diversity often referred value range species organismal traits influence ecosystem functioning \u201d. traits organism make unique example way moves gathers resources reproduces time year active add overall diversity entire ecosystem therefore enhance overall function productivity ecosystem. functional diversity increases overall productivity ecosystem allowing increase niche occupation. species evolved diverse epoch time plants insects diverse families discovered thus far. unique traits organism allow new niche occupied allow better defense predators potentially lead specialization. organismal level functional diversity adds overall functional diversity ecosystem important conservation efforts especially systems used human consumption. functional diversity difficult measure accurately done correctly provides useful insight overall function stability ecosystemfunctional redundancy refers phenomenon species ecosystem fill similar roles results sort insurance ecosystem. redundant species easily job similar species functional niche. possible similar species adapted fill niche overtime. functional redundancy varies across ecosystems vary year year depending multiple factors including habitat availability overall species diversity competition among species resources anthropogenic influence. variation lead fluctuation overall ecosystem production. always known many species occupy functional niche much redundancy occurring niche ecosystem. hypothesized important functional niche filled multiple species. similar functional diversity one clear method calculating functional redundancy accurately problematic. one method account number species occupying functional niche well abundance species. indicate many total individuals ecosystem performing one functionstudies relating functional diversity redundancy occur large proportion conservation ecological research. human population increases need ecosystem function subsequently increases. addition habitat destruction modification continue increase suitable habitat many species continues decrease research becomes important. human population continues expand urbanization rise native natural landscapes disappearing replaced modified managed land human consumption. alterations landscapes often accompanied negative side effects including fragmentation species losses nutrient runoff effect stability ecosystem productivity ecosystem functional diversity functional redundancy decreasing species diversity. shown intense land use affects species diversity functional overlap leaving ecosystem organisms vulnerable. 
specifically bee species rely pollination services lower functional diversity species diversity managed landscapes compared natural habitats indicating anthropogenic change detrimental organismal functional diversity therefore overall ecosystem functional diversity. additional research demonstrated functional redundancy herbaceous insects streams varies due stream velocity demonstrating environmental factors alter functional overlap. conservation efforts begin still debate whether preserving specific species functional traits beneficial approach preservation ecosystem function. higher species diversity lead increase overall ecosystem productivity necessarily insure security functional overlap. ecosystems high redundancy losing species lowers overall functional diversity always lower overall ecosystem function due high functional overlap thus instance important conserve group rather individual. ecosystems dominant species contribute majority biomass output may beneficial conserve single species rather functional group. ecological concept keystone species redefined based presence species non redundant trophic dynamics measured biomass dominance within functional groups highlights conservation benefits protecting species respective functional groupunderstanding functional diversity redundancy roles play conservation efforts often hard accomplish tools measure diversity redundancy used interchangeably. due recent empirical work often analyzes effects either functional diversity functional redundancy. create complete picture factors influencing ecosystem production. ecosystems similar diverse vegetation functional diversity important overall ecosystem stability productivity. yet contrast functional diversity native bee species highly managed landscapes provided evidence higher functional redundancy leading higher fruit production something humans rely heavily food consumption. recent paper stated accurate measuring technique universally used early determine species functional groups vulnerable susceptible extinction. overall understanding extinction affects ecosystems traits vulnerable protect ecosystems whole."}, {"topic": "Selection gradient", "summary": "A selection gradient describes the relationship between a character trait and a species' relative fitness. A trait may be a physical characteristic, such as height or eye color, or behavioral, such as flying or vocalizing. Changes in a trait, such as the amount of seeds a plant produces or the length of a bird's beak, may improve or reduce their relative fitness. Changes in traits may accumulate in a population under an ongoing process of natural selection. Understanding how changes in a trait affect fitness helps evolutionary biologists understand the nature of evolutionary pressures on a population.", "content": "\n\n\n== Relationship between traits and fitness ==\nIn a population, heritable traits that increase an organism's ability to survive and reproduce tend to increase in frequency over generations through a process known as natural selection. The selection gradient shows how much an organism's relative fitness (\u03c9) changes in response to a given increase or decrease in the value of a trait. It is defined as the slope of that relationship, which may be linear or more complex. The shape of the selection gradient function also can help identify the type of selection that is acting on a population. When the function is linear, selection is directional. Directional selection favors one extreme of a trait over another. 
An individual with the favored extreme value of the trait will survive more than others, causing the mean value of that trait in the population to shift in the next generation. When the relationship is quadratic, selection may be stabilizing or disruptive. Stabilizing selection reduces variation in a trait within a population by reducing the frequencies of more extreme values. Individuals with intermediate phenotypes will survive more than others. As a result, the values of the trait in the population in the following generation will cluster more closely around the population mean. Disruptive selection increases variation by increasing the frequencies of the more extreme values of a trait. Individuals with extreme trait values will survive more than those with intermediate phenotypes, leading to two peaks in frequency at the extreme values of the trait.\n\n\n== Calculation ==\nThe first and most common function used to estimate the fitness of a trait is the linear function \u03c9 = \u03b1 + \u03b2z, which represents directional selection. The slope of the linear regression line (\u03b2) is the selection gradient, \u03c9 is the fitness of a trait value z, and \u03b1 is the y-intercept of the fitness function. Here, the function indicates either an increase or a decrease in fitness with increases in the value of a trait. The second fitness function is the nonlinear (quadratic) function \u03c9 = \u03b1 + \u03b2z + (\u03b3/2)z\u00b2, which represents stabilizing or disruptive selection. The quadratic regression coefficient (\u03b3) is the selection gradient, \u03c9 is the fitness of a trait value z, and \u03b1 is the y-intercept of the fitness function. Here, individuals with intermediate trait values may have the highest fitness (stabilizing selection), or those with extreme trait values may have the highest fitness (disruptive selection). When \u03b2 = 0 and \u03b3 is significantly positive, the selection gradient indicates disruptive selection. When \u03b2 = 0 and \u03b3 is significantly negative, the selection gradient indicates stabilizing selection. In both cases, \u03b3 measures the strength of selection.\n\n\n== Application ==\nEvolutionary biologists use estimates of the selection gradient of traits to identify patterns in the evolutionary pressures on a population and to predict changes in species traits. When traits are correlated with one another to some degree, for example beak length (z1) and body size (z2) in a bird, selection on one will affect the distribution of the other. For correlated traits, the effects of natural selection can be separated by estimating the selection gradient for one trait (beak length (z1)) while holding the other trait (body size (z2)) constant. This process enables researchers to determine how greatly variations in one trait (beak length) affect fitness among individuals with the same body size. In 1977, when the Galapagos Islands suffered a severe drought, Peter and Rosemary Grant estimated selection gradients for Darwin's finches to measure the strength of the relationship between fitness and each trait while holding other traits constant. They estimated selection gradients for the finches\u2019 weight (0.23), bill length (-0.17), and bill depth (0.43). The result showed that selection strongly favored larger birds with deeper bills. Evolutionary biologists also use selection gradients to estimate the strength and mode of natural selection. 
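The regression estimates described in the Calculation section can be sketched directly from the two fitness functions above. The example below is a minimal illustration on simulated data, not a reproduction of the Grant study or any other analysis cited here; because the quadratic term is written as (\u03b3/2)z\u00b2, \u03b3 is recovered as twice the fitted quadratic coefficient.

import numpy as np

# Simulated single-trait data (hypothetical): trait values z and relative fitness w.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 500)
w_abs = np.exp(-0.5 * (z - 0.8) ** 2) + rng.normal(0.0, 0.05, z.size)
w = w_abs / w_abs.mean()  # relative fitness

# Linear fit w = alpha + beta*z: the slope beta is the directional gradient.
beta, alpha = np.polyfit(z, w, 1)

# Quadratic fit w = a + b*z + c*z^2; with the (gamma/2)*z^2 parameterization,
# gamma = 2*c. gamma > 0 suggests disruptive, gamma < 0 stabilizing selection.
c, b, a = np.polyfit(z, w, 2)
gamma = 2.0 * c

print(f"beta = {beta:.3f}, gamma = {gamma:.3f}")

In practice the same coefficients are usually obtained with multiple regression so that correlated traits, such as the beak length and body size discussed above, can be held constant; this sketch covers only the single-trait case.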
Selection gradients, for example, have provided an explanation for fitness differences among individuals in a population, among different species and strengths of selection. In a study of the fresh-water Eurasian perch, a change in fitness was reported with a change in their density. An estimate of the selection gradient by linear and quadratic regression indicated a shift of the selection regime between stabilizing and directional selection at low density to disruptive selection at higher density.\n\n\n== Criticism ==\nDespite the conceptual simplicity of the selection gradient, there are ongoing debates about its usefulness as an estimator of causes and consequences of natural selection. In 2017, Franklin & Morrissey showed that when performance measures such as body size, biomass, or growth rate are used in place of fitness components in regression-based analysis, accurate estimation of selection gradient is limited, which may lead to under-estimates of selection. Another complication of using selection gradient as an estimator of natural selection is when the phenotype of an individual is itself affected by individuals it interacts with. It complicates the process of separating direct and indirect selection as there are multiple ways selection can work. One alternative to selection gradients is the use of high throughput sequencing to identify targets and agents of selection.\n\n\n== See also ==\nCharles Darwin\nOn the Origin of Species\nEvolution\nNatural selection\nDirectional selection\nStabilizing selection\nDisruptive selection\n\n\n== References ==", "content_traditional": "relationship traits fitness population heritable traits increase organisms ability survive reproduce tend increase frequency generations process known natural selection. selection gradient shows much organisms relative fitness \u03c9 changes response given increase decrease value trait. defined slope relationship may linear complex. shape selection gradient function also help identify type selection acting population. function linear selection directional. directional selection favors one extreme trait another. individual favored extreme value trait survive others causing mean value trait population shift next generation. relationship quadratic selection may stabilizing disruptive. stabilizing selection reduces variation trait within population reducing frequencies extreme values. individuals intermediate phenotypes survive others. result values trait population following generation cluster closely around peak population mean. disruptive selection increases variation increasing frequencies extreme values trait. individuals extreme trait values survive intermediate phenotypes leading two peaks frequency extreme values trait. calculation first common function estimate fitness trait linear \u03c9 \u03b1 \u03b2z represents directional selection. slope linear regression line \u03b2 selection gradient \u03c9 fitness trait value z \u03b1 yintercept fitness function. function indicates either increase decrease fitness increases value trait. second fitness function nonlinear \u03c9 \u03b1 \u03b2z \u03b32z2 represents stabilizing disruptive selection. quadratic regression \u03b3 selection gradient \u03c9 fitness trait value z \u03b1 yintercept fitness function. individuals intermediate trait values may highest fitness stabilizing selection extreme trait values may highest fitness disruptive selection. \u03b2 0 \u03b3 significantly positive selection gradient indicates disruptive selection. 
\u03b2 0 \u03b3 significantly negative selection gradient indicates stabilizing selection. cases \u03b3 measures strength selection. application evolutionary biologists use estimates selection gradient traits identify patterns evolutionary pressures population predict changes species traits. traits correlated one another degree example beak length z1 body size z2 bird selection one affect distribution. correlated traits effects natural selection separated estimating selection gradient one trait beak length z1 holding trait body size z2 constant. process enables researchers determine greatly variations one trait beak length affect fitness among individuals body size. 1977 galapagos islands suffered severe drought peter rosemary grant estimated selection gradient darwins finches estimate strength relationship fitness trait holding traits constant. estimated selection gradient finches \u2019 weight 023 bill length 017 bill depth 043. result showed selection strongly favored larger birds deeper bills. evolutionary biologists also use selection gradients estimate strength mode natural selection. selection gradients example provided explanation fitness differences among individuals population among different species strengths selection. study freshwater eurasian perch change fitness reported change density. estimate selection gradient linear quadratic regression indicated shift selection regime stabilizing directional selection low density disruptive selection higher density. criticism despite conceptual simplicity selection gradient ongoing debates usefulness estimator causes consequences natural selection. 2017 franklin morrissey showed performance measures body size biomass growth rate used place fitness components regressionbased analysis accurate estimation selection gradient limited may lead underestimates selection. another complication using selection gradient estimator natural selection phenotype individual affected individuals interacts. complicates process separating direct indirect selection multiple ways selection work. one alternative selection gradients use high throughput sequencing identify targets agents selection. see also charles darwin origin species evolution natural selection directional selection stabilizing selection disruptive selection references.", "custom_approach": "In a population, heritable traits that increase an organism's ability to survive and reproduce tend to increase in frequency over generations through a process known as natural selection. The selection gradient shows how much an organism's relative fitness (\u03c9) changes in response to a given increase or decrease in the value of a trait. It is defined as the slope of that relationship, which may be linear or more complex. The shape of the selection gradient function also can help identify the type of selection that is acting on a population. When the function is linear, selection is directional. Directional selection favors one extreme of a trait over another. An individual with the favored extreme value of the trait will survive more than others, causing the mean value of that trait in the population to shift in the next generation. When the relationship is quadratic, selection may be stabilizing or disruptive. Stabilizing selection reduces variation in a trait within a population by reducing the frequencies of more extreme values. Individuals with intermediate phenotypes will survive more than others. 
As a result, the values of the trait in the population in the following generation will cluster more closely around the peak of the population mean. Disruptive selection increases variation by increasing the frequencies of the more extreme values of a trait. Individuals with extreme trait values will survive more than those with intermediate phenotypes, leading to two peaks in frequency at the extreme values of the trait.The first and most common function to estimate fitness of a trait is linear \u03c9 =\u03b1 +\u03b2z , which represents directional selection. The slope of the linear regression line (\u03b2) is the selection gradient, \u03c9 is the fitness of a trait value z, and \u03b1 is the y-intercept of the fitness function. Here, the function indicates either an increase or decrease in fitness with increases in the value of a trait. The second fitness function is nonlinear \u03c9 = \u03b1 +\u03b2z +(\u03b3/2)z2 , which represents stabilizing or disruptive selection. The quadratic regression (\u03b3) is the selection gradient, \u03c9 is the fitness of a trait value z, and \u03b1 is the y-intercept of the fitness function. Here, individuals with intermediate trait values may have the highest fitness (stabilizing selection) or those with extreme trait values may have the highest fitness (disruptive selection). When, \u03b2 = 0 and \u03b3 is significantly positive, the selection gradient indicates disruptive selection. When, \u03b2= 0 and \u03b3 is significantly negative, the selection gradient indicates stabilizing selection. In both the cases \u03b3 measures the strength of selection.Evolutionary biologists use estimates of the selection gradient of traits to identify patterns in the evolutionary pressures on a population and predict changes in species traits. When traits are correlated with one another to some degree, for example beak length (z1) and body size (z2) in a bird, selection on one will affect the distribution of the other. For correlated traits, the effects of natural selection can be separated by estimating the selection gradient for one trait (beak length (z1)) while holding the other trait (body size (z2)) constant. The process enables researchers to determine how greatly variations in one trait (beak length) affect fitness among individuals with the same body size. In 1977 when the Galapagos Islands suffered from severe drought, Peter and Rosemary Grant estimated the selection gradient for Darwin's finches to estimate the strength of the relationship between fitness and each trait while holding other traits constant. They estimated selection gradient for finches\u2019 weight (0.23), bill length (-0.17) and bill depth (0.43). The result showed that selection strongly favored larger birds with deeper bills. Evolutionary biologists also use selection gradients to estimate strength and mode of natural selection. Selection gradients, for example, have provided an explanation for fitness differences among individuals in a population, among different species and strengths of selection. In a study of the fresh-water Eurasian perch, a change in fitness was reported with a change in their density. 
An estimate of the selection gradient by linear and quadratic regression indicated a shift of the selection regime between stabilizing and directional selection at low density to disruptive selection at higher density.Despite the conceptual simplicity of the selection gradient, there are ongoing debates about its usefulness as an estimator of causes and consequences of natural selection. In 2017, Franklin & Morrissey showed that when performance measures such as body size, biomass, or growth rate are used in place of fitness components in regression-based analysis, accurate estimation of selection gradient is limited, which may lead to under-estimates of selection. Another complication of using selection gradient as an estimator of natural selection is when the phenotype of an individual is itself affected by individuals it interacts with. It complicates the process of separating direct and indirect selection as there are multiple ways selection can work. One alternative to selection gradients is the use of high throughput sequencing to identify targets and agents of selection.", "combined_approach": "population heritable traits increase organisms ability survive reproduce tend increase frequency generations process known natural selection. selection gradient shows much organisms relative fitness \u03c9 changes response given increase decrease value trait. defined slope relationship may linear complex. shape selection gradient function also help identify type selection acting population. function linear selection directional. directional selection favors one extreme trait another. individual favored extreme value trait survive others causing mean value trait population shift next generation. relationship quadratic selection may stabilizing disruptive. stabilizing selection reduces variation trait within population reducing frequencies extreme values. individuals intermediate phenotypes survive others. result values trait population following generation cluster closely around peak population mean. disruptive selection increases variation increasing frequencies extreme values trait. individuals extreme trait values survive intermediate phenotypes leading two peaks frequency extreme values traitthe first common function estimate fitness trait linear \u03c9 \u03b1 \u03b2z represents directional selection. slope linear regression line \u03b2 selection gradient \u03c9 fitness trait value z \u03b1 yintercept fitness function. function indicates either increase decrease fitness increases value trait. second fitness function nonlinear \u03c9 \u03b1 \u03b2z \u03b32z2 represents stabilizing disruptive selection. quadratic regression \u03b3 selection gradient \u03c9 fitness trait value z \u03b1 yintercept fitness function. individuals intermediate trait values may highest fitness stabilizing selection extreme trait values may highest fitness disruptive selection. \u03b2 0 \u03b3 significantly positive selection gradient indicates disruptive selection. \u03b2 0 \u03b3 significantly negative selection gradient indicates stabilizing selection. cases \u03b3 measures strength selectionevolutionary biologists use estimates selection gradient traits identify patterns evolutionary pressures population predict changes species traits. traits correlated one another degree example beak length z1 body size z2 bird selection one affect distribution. correlated traits effects natural selection separated estimating selection gradient one trait beak length z1 holding trait body size z2 constant. 
process enables researchers determine greatly variations one trait beak length affect fitness among individuals body size. 1977 galapagos islands suffered severe drought peter rosemary grant estimated selection gradient darwins finches estimate strength relationship fitness trait holding traits constant. estimated selection gradient finches \u2019 weight 023 bill length 017 bill depth 043. result showed selection strongly favored larger birds deeper bills. evolutionary biologists also use selection gradients estimate strength mode natural selection. selection gradients example provided explanation fitness differences among individuals population among different species strengths selection. study freshwater eurasian perch change fitness reported change density. estimate selection gradient linear quadratic regression indicated shift selection regime stabilizing directional selection low density disruptive selection higher densitydespite conceptual simplicity selection gradient ongoing debates usefulness estimator causes consequences natural selection. 2017 franklin morrissey showed performance measures body size biomass growth rate used place fitness components regressionbased analysis accurate estimation selection gradient limited may lead underestimates selection. another complication using selection gradient estimator natural selection phenotype individual affected individuals interacts. complicates process separating direct indirect selection multiple ways selection work. one alternative selection gradients use high throughput sequencing identify targets agents selection."}, {"topic": "Life history theory", "summary": "Life history theory is an analytical framework designed to study the diversity of life history strategies used by different organisms throughout the world, as well as the causes and results of the variation in their life cycles. It is a theory of biological evolution that seeks to explain aspects of organisms' anatomy and behavior by reference to the way that their life histories\u2014including their reproductive development and behaviors, post-reproductive behaviors, and lifespan (length of time alive)\u2014have been shaped by natural selection. A life history strategy is the \"age- and stage-specific patterns\" and timing of events that make up an organism's life, such as birth, weaning, maturation, death, etc. These events, notably juvenile development, age of sexual maturity, first reproduction, number of offspring and level of parental investment, senescence and death, depend on the physical and ecological environment of the organism.\nThe theory was developed in the 1950s and is used to answer questions about topics such as organism size, age of maturation, number of offspring, life span, and many others. In order to study these topics, life history strategies must be identified, and then models are constructed to study their effects. Finally, predictions about the importance and role of the strategies are made, and these predictions are used to understand how evolution affects the ordering and length of life history events in an organism's life, particularly the lifespan and period of reproduction. Life history theory draws on an evolutionary foundation, and studies the effects of natural selection on organisms, both throughout their lifetime and across generations. 
It also uses measures of evolutionary fitness to determine if organisms are able to maximize or optimize this fitness, by allocating resources to a range of different demands throughout the organism's life. It serves as a method to investigate further the \"many layers of complexity of organisms and their worlds\".Organisms have evolved a great variety of life histories, from Pacific salmon, which produce thousands of eggs at one time and then die, to human beings, who produce a few offspring over the course of decades. The theory depends on principles of evolutionary biology and ecology and is widely used in other areas of science.\n\n", "content": "\n== Brief history of field ==\nLife history theory is seen as a branch of evolutionary ecology and is used in a variety of different fields. Beginning in the 1950s, mathematical analysis became an important aspect of research regarding LHT. There are two main focuses that have developed over time: genetic and phenotypic, but there has been a recent movement towards combining these two approaches.\n\n\n== Life cycle ==\nAll organisms follow a specific sequence in their development, beginning with gestation and ending with death, which is known as the life cycle. Events in between usually include birth, childhood, maturation, reproduction, and senescence, and together these comprise the life history strategy of that organism.The major events in this life cycle are usually shaped by the demographic qualities of the organism. Some are more obvious shifts than others, and may be marked by physical changes\u2014for example, teeth erupting in young children. Some events may have little variation between individuals in a species, such as length of gestation, but other events may show a lot of variation between individuals, such as age at first reproduction.\nLife cycles can be divided into two major stages: growth and reproduction. These two cannot take place at the same time, so once reproduction has begun, growth usually ends. This shift is important because it can also affect other aspects of an organism's life, such as the organization of its group or its social interactions.Each species has its own pattern and timing for these events, often known as its ontogeny, and the variety produced by this is what LHT studies. Evolution then works upon these stages to ensure that an organism adapts to its environment. For example, a human, between being born and reaching adulthood, will pass through an assortment of life stages, which include: birth, infancy, weaning, childhood and growth, adolescence, sexual maturation, and reproduction. All of these are defined in a specific biological way, which is not necessarily the same as the way that they are commonly used.\n\n\n== Darwinian fitness ==\nIn the context of evolution, fitness is determined by how the organism is represented in the future. Genetically, a fit allele outcompetes its rivals over generations. Often, as a shorthand for natural selection, researchers only assess the number of descendants an organism produces over the course of its life. Then, the main elements are survivorship and reproductive rate. This means that the organism's traits and genes are carried on into the next generation, and are presumed to contribute to evolutionary \"success\". The process of adaptation contributes to this \"success\" by impacting rates of survival and reproduction, which in turn establishes an organism's level of Darwinian fitness. 
In life history theory, evolution works on the life stages of particular species (e.g., length of juvenile period) but is also discussed for a single organism's functional, lifetime adaptation. In both cases, researchers assume adaptation\u2014processes that establish fitness.\n\n\n== Traits ==\nThere are seven traits that are traditionally recognized as important in life history theory:\nsize at birth\ngrowth pattern\nage and size at maturity\nnumber, size, and sex ratio of offspring\nage- and size-specific reproductive investments\nage- and size-specific mortality schedules\nlength of lifeThe trait that is seen as the most important for any given organism is the one where a change in that trait creates the most significant difference in that organism's level of fitness. In this sense, an organism's fitness is determined by its changing life history traits. The way in which evolutionary forces act on these life history traits serves to limit the genetic variability and heritability of the life history strategies, although there are still large varieties that exist in the world.\n\n\n== Strategies ==\nCombinations of these life history traits and life events create the life history strategies. As an example, Winemiller and Rose, as cited by Lartillot & Delsuc, propose three types of life history strategies in the fish they study: opportunistic, periodic, and equilibrium. These types of strategies are defined by the body size of the fish, age at maturation, high or low survivorship, and the type of environment they are found in. A fish with a large body size, a late age of maturation, and low survivorship, found in a seasonal environment, would be classified as having a periodic life strategy. The type of behaviors taking place during life events can also define life history strategies. For example, an exploitative life history strategy would be one where an organism benefits by using more resources than others, or by taking these resources from other organisms.\n\n\n== Characteristics ==\nLife history characteristics are traits that affect the life table of an organism, and can be imagined as various investments in growth, reproduction, and survivorship.\nThe goal of life history theory is to understand the variation in such life history strategies. This knowledge can be used to construct models to predict what kinds of traits will be favoured in different environments. Without constraints, the highest fitness would belong to a Darwinian demon, a hypothetical organism for whom such trade-offs do not exist. The key to life history theory is that there are limited resources available, and focusing on only a few life history characteristics is necessary.\nExamples of some major life history characteristics include:\n\nAge at first reproductive event\nReproductive lifespan and ageing\nNumber and size of offspringVariations in these characteristics reflect different allocations of an individual's resources (i.e., time, effort, and energy expenditure) to competing life functions. For any given individual, available resources in any particular environment are finite. Time, effort, and energy used for one purpose diminishes the time, effort, and energy available for another.\nFor example, birds with larger broods are unable to afford more prominent secondary sexual characteristics. 
Life history characteristics will, in some cases, change according to the population density, since genotypes with the highest fitness at high population densities will not have the highest fitness at low population densities. Other conditions, such as the stability of the environment, will lead to selection for certain life history traits. Experiments by Michael R. Rose and Brian Charlesworth showed that unstable environments select for flies with both shorter lifespans and higher fecundity\u2014in unreliable conditions, it is better for an organism to breed early and abundantly than waste resources promoting its own survival.Biological tradeoffs also appear to characterize the life histories of viruses, including bacteriophages.\n\n\n=== Reproductive value and costs of reproduction ===\nReproductive value models the tradeoffs between reproduction, growth, and survivorship. An organism's reproductive value (RV) is defined as its expected contribution to the population through both current and future reproduction:\nRV = Current Reproduction + Residual Reproductive Value (RRV)The residual reproductive value represents an organism's future reproduction through its investment in growth and survivorship. The cost of reproduction hypothesis predicts that higher investment in current reproduction hinders growth and survivorship and reduces future reproduction, while investments in growth will pay off with higher fecundity (number of offspring produced) and reproductive episodes in the future. This cost-of-reproduction tradeoff influences major life history characteristics. For example, a 2009 study by J. Creighton, N. Heflin, and M. Belk on burying beetles provided \"unconfounded support\" for the costs of reproduction. The study found that beetles that had allocated too many resources to current reproduction also had the shortest lifespans. In their lifetimes, they also had the fewest reproductive events and offspring, reflecting how over-investment in current reproduction lowers residual reproductive value.\nThe related terminal investment hypothesis describes a shift to current reproduction with higher age. At early ages, RRV is typically high, and organisms should invest in growth to increase reproduction at a later age. As organisms age, this investment in growth gradually increases current reproduction. However, when an organism grows old and begins losing physiological function, mortality increases while fecundity decreases. This senescence shifts the reproduction tradeoff towards current reproduction: the effects of aging and higher risk of death make current reproduction more favorable. The burying beetle study also supported the terminal investment hypothesis: the authors found beetles that bred later in life also had increased brood sizes, reflecting greater investment in those reproductive events.\n\n\n=== r/K selection theory ===\n\nThe selection pressures that determine the reproductive strategy, and therefore much of the life history, of an organism can be understood in terms of r/K selection theory. The central trade-off to life history theory is the number of offspring vs. the timing of reproduction. Organisms that are r-selected have a high growth rate (r) and tend to produce a high number of offspring with minimal parental care; their lifespans also tend to be shorter. r-selected organisms are suited to life in an unstable environment, because they reproduce early and abundantly and allow for a low survival rate of offspring. 
K-selected organisms subsist near the carrying capacity of their environment (K), produce a relatively low number of offspring over a longer span of time, and have high parental investment. They are more suited to life in a stable environment in which they can rely on a long lifespan and a low mortality rate that will allow them to reproduce multiple times with a high offspring survival rate. Some organisms that are very r-selected are semelparous, only reproducing once before they die. Semelparous organisms may be short-lived, like annual crops. However, some semelparous organisms are relatively long-lived, such as the African flowering plant Lobelia telekii, which spends up to several decades growing an inflorescence that blooms only once before the plant dies, or the periodical cicada, which spends 17 years as a larva before emerging as an adult. Organisms with longer lifespans are usually iteroparous, reproducing more than once in a lifetime. However, iteroparous organisms can be more r-selected than K-selected, such as a sparrow, which gives birth to several chicks per year but lives only a few years, as compared to a wandering albatross, which first reproduces at ten years old and breeds every other year during its 40-year lifespan.\nr-selected organisms usually:\n\nmature rapidly and have an early age of first reproduction\nhave a relatively short lifespan\nhave a large number of offspring at a time, and few reproductive events, or are semelparous\nhave a high mortality rate and a low offspring survival rate\nhave minimal parental care/investment\nK-selected organisms usually:\n\nmature more slowly and have a later age of first reproduction\nhave a longer lifespan\nhave few offspring at a time and more reproductive events spread out over a longer span of time\nhave a low mortality rate and a high offspring survival rate\nhave high parental investment\n\n\n=== Variation ===\nVariation is a major part of what LHT studies, because every organism has its own life history strategy. Differences between strategies can be minimal or great. For example, one organism may have a single offspring while another may have hundreds. Some species may live for only a few hours, and some may live for decades. Some may reproduce dozens of times throughout their lifespan, and others may only reproduce once or twice.\n\n\n=== Trade-offs ===\nAn essential component of studying life history strategies is identifying the trade-offs that take place for any given organism. Energy use in life history strategies is regulated by thermodynamics and the conservation of energy, and the \"inherent scarcity of resources\", so not all traits or tasks can be invested in at the same time. Thus, organisms must choose between tasks, such as growth, reproduction, and survival, prioritizing some and not others. For example, there is a trade-off between maximizing body size and maximizing lifespan, and between maximizing offspring size and maximizing offspring number. This is also sometimes seen as a choice between quantity and quality of offspring. These choices are the trade-offs that life history theory studies. One significant trade-off is between somatic effort (towards growth and maintenance of the body) and reproductive effort (towards producing offspring). Since an organism cannot put energy towards both simultaneously, many organisms have a period where energy is put just toward growth, followed by a period where energy is focused on reproduction, creating a separation of the two in the life cycle. 
Thus, the end of the period of growth marks the beginning of the period of reproduction. Another fundamental trade-off associated with reproduction is between mating effort and parenting effort. If an organism is focused on raising its offspring, it cannot devote that energy to pursuing a mate.An important trade-off in the dedication of resources to breeding has to do with predation risk: organisms that have to deal with an increased risk of predation often invest less in breeding. This is because it is not worth as much to invest a lot in breeding when the benefit of such investment is uncertain.These trade-offs, once identified, can then be put into models that estimate their effects on different life history strategies and answer questions about the selection pressures that exist on different life events. Over time, there has been a shift in how these models are constructed. Instead of focusing on one trait and looking at how it changed, scientists are looking at these trade-offs as part of a larger system, with complex inputs and outcomes.\n\n\n=== Constraints ===\nThe idea of constraints is closely linked to the idea of trade-offs discussed above. Because organisms have a finite amount of energy, the process of trade-offs acts as a natural limit on the organism's adaptations and potential for fitness. This occurs in populations as well. These limits can be physical, developmental, or historical, and they are imposed by the existing traits of the organism.\n\n\n=== Optimal life-history strategies ===\nPopulations can adapt and thereby achieve an \"optimal\" life history strategy that allows the highest level of fitness possible (fitness maximization). There are several methods from which to approach the study of optimality, including energetic and demographic. Achieving optimal fitness also encompasses multiple generations, because the optimal use of energy includes both the parents and the offspring. For example, \"optimal investment in offspring is where the decrease in total number of offspring is equaled by the increase of the number who survive\".Optimality is important for the study of life history theory because it serves as the basis for many of the models used, which work from the assumption that natural selection, as it works on a life history traits, is moving towards the most optimal group of traits and use of energy. This base assumption, that over the course of its life span an organism is aiming for optimal energy use, then allows scientists to test other predictions. However, actually gaining this optimal life history strategy cannot be guaranteed for any organism.\n\n\n=== Allocation of resources ===\nAn organism's allocation of resources ties into several other important concepts, such as trade-offs and optimality. The best possible allocation of resources is what allows an organism to achieve an optimal life history strategy and obtain the maximum level of fitness, and making the best possible choices about how to allocate energy to various trade-offs contributes to this. Models of resource allocation have been developed and used to study problems such as parental involvement, the length of the learning period for children, and other developmental issues. 
The allocation of resources also plays a role in variation, because the different resource allocations by different species create the variety of life history strategies.\n\n\n=== Capital and income breeding ===\n\nThe division of capital and income breeding focuses on how organisms use resources to finance breeding, and how they time it. In capital breeders, resources collected before breeding are used to pay for it, and they breed once they reach a body-condition threshold, which decreases as the season progresses. Income breeders, on the other hand, breed using resources that are generated concurrently with breeding, and time that using the rate of change in body-condition relative to multiple fixed thresholds. This distinction, though, is not necessarily a dichotomy; instead, it is a spectrum, with pure capital breeding lying on one end, and pure income breeding on the other.Capital breeding is more often seen in organisms that deal with strong seasonality. This is because when offspring value is low, yet food is abundant, building stores to breed from allows these organisms to achieve higher rates of reproduction than they otherwise would have. In less seasonal environments, income breeding is likely to be favoured because waiting to breed would not have fitness benefits.\n\n\n=== Phenotypic plasticity ===\nPhenotypic plasticity focuses on the concept that the same genotype can produce different phenotypes in response to different environments. It affects the levels of genetic variability by serving as a source of variation and integration of fitness traits.\n\n\n== Determinants ==\nMany factors can determine the evolution of an organism's life history, especially the unpredictability of the environment. A very unpredictable environment\u2014one in which resources, hazards, and competitors may fluctuate rapidly\u2014selects for organisms that produce more offspring earlier in their lives, because it is never certain whether they will survive to reproduce again. Mortality rate may be the best indicator of a species' life history: organisms with high mortality rates\u2014the usual result of an unpredictable environment\u2014typically mature earlier than those species with low mortality rates, and give birth to more offspring at a time. A highly unpredictable environment can also lead to plasticity, in which individual organisms can shift along the spectrum of r-selected vs. K-selected life histories to suit the environment.\n\n\n== Human life history ==\nIn studying humans, life history theory is used in many ways, including in biology, psychology, economics, anthropology, and other fields. For humans, life history strategies include all the usual factors\u2014trade-offs, constraints, reproductive effort, etc.\u2014but also includes a culture factor that allows them to solve problems through cultural means in addition to through adaptation. Humans also have unique traits that make them stand out from other organisms, such as a large brain, later maturity and age of first reproduction, a long lifespan, and a high level of reproduction, often supported by fathers and older (post-menopausal) relatives. There are a variety of possible explanations for these unique traits. For example, a long juvenile period may have been adapted to support a period of learning the skills needed for successful hunting and foraging. This period of learning may also explain the longer lifespan, as a longer amount of time over which to use those skills makes the period needed to acquire them worth it. 
Cooperative breeding and the grandmothering hypothesis have been proposed as the reasons that humans continue to live for many years after they are no longer capable of reproducing. The large brain allows for a greater learning capacity, and the ability to engage in new behaviors and create new things. The change in brain size may have been the result of a dietary shift\u2014towards higher quality and difficult to obtain food sources\u2014or may have been driven by the social requirements of group living, which promoted sharing and provisioning. Recent authors, such as Kaplan, argue that both aspects are probably important. Research has also indicated that humans may pursue different reproductive strategies.\n\n\n== Tools used ==\nmathematical modeling\nquantitative genetics\nartificial selection\ndemography\noptimality modeling\nmechanistic approach\nMalthusian parameter\n\n\n== Perspectives ==\nLife history theory has provided new perspectives in understanding many aspects of human reproductive behavior, such as the relationship between poverty and fertility. A number of statistical predictions have been confirmed by social data and there is a large body of scientific literature from studies in experimental animal models, and naturalistic studies among many organisms.\n\n\n== Criticism ==\nThe claim that long periods of helplessness in young would select for more parenting effort in protecting the young at the same time as high levels of predation would select for less parenting effort is criticized for assuming that absolute chronology would determine direction of selection. This criticism argues that the total amount of predation threat faced by the young has the same effective protection need effect no matter if it comes in the form of a long childhood and far between the natural enemies or a short childhood and closely spaced natural enemies, as different life speeds are subjectively the same thing for the animals and only outwardly looks different. One cited example is that small animals that have more natural enemies would face approximately the same number of threats and need approximately the same amount of protection (at the relative timescale of the animals) as large animals with fewer natural enemies that grow more slowly (e.g. that many small carnivores that could not eat even a very young human child could easily eat multiple very young blind meerkats). This criticism also argues that when a carnivore eats a batch stored together, there is no significant difference in the chance of one surviving depending on the number of young stored together, concluding that humans do not stand out from many small animals such as mice in selection for protecting helpless young.There is criticism of the claim that menopause and somewhat earlier age-related declines in female fertility could co-evolve with a long term dependency on monogamous male providers who preferred fertile females. This criticism argues that the longer the time the child needed parental investment relative to the lifespans of the species, the higher the percentage of children born would still need parental care when the female was no longer fertile or dramatically reduced in her fertility. 
These critics argue that unless male preference for fertile females and ability to switch to a new female was annulled, any need for a male provider would have selected against menopause to use her fertility to keep the provider male attracted to her, and that the theory of monogamous fathers providing for their families therefore cannot explain why menopause evolved in humans.One criticism of the notion of a trade-off between mating effort and parenting effort is that in a species in which it is common to spend much effort on something other than mating, including but not exclusive to parenting, there is less energy and time available for such for the competitors as well, meaning that species-wide reductions in the effort spent at mating does not reduce the ability of an individual to attract other mates. These critics also criticize the dichotomy between parenting effort and mating effort for missing the existence of other efforts that take time from mating, such as survival effort which would have the same species-wide effects.There are also criticisms of size and organ trade-offs, including criticism of the claim of a trade-off between body size and longevity that cites the observation of longer lifespans in larger species, as well as criticism of the claim that big brains promoted sociality citing primate studies in which monkeys with large portions of their brains surgically removed remained socially functioning though their technical problem solving deteriorated in flexibility, computer simulations of chimpanzee social interaction showing that it requires no complex cognition, and cases of socially functioning humans with microcephalic brain sizes.\n\n\n== See also ==\nAge determination in herbaceous plants\nAge determination in woody plants\nBehavioral ecology\nBiological life cycle\nDynamic energy budget theory for metabolic organisation\nEvolutionary developmental psychology\nEvolutionary history of life\nEvolutionary physiology\nHuman behavioral ecology\nPaternal care\nPlant strategies\n\n\n== References ==\n\n52) Marco Del Giudice \"Evolutionary psychopathology: a unified approach\", Oxford university Press, 2018\n\n\n== Further reading ==\nCharnov, E. L. (1993). Life history invariants. Oxford, England: Oxford University Press.\nEllis, B.J. (2004). Timing of pubertal maturation in girls: an integrated life history approach. Psychological Bulletin. 130:920-58.\nFabian, D. & Flatt, T. (2012) Life History Evolution. Nature Education Knowledge 3(10):24\nFreeman, Scott and Herron, Jon C. 2007. Evolutionary Analysis 4th Ed: Aging and Other Life History Characteristics. 485-86, 514, 516.\nKaplan, H., K. Hill, J. Lancaster, and A.M. Hurtado. (2000). The Evolution of intelligence and the Human life history. Evolutionary Anthropology, 9(4): 156-184.\nKaplan, H.S., and A.J. Robson. (2002) \"The emergence of humans: The coevolution of intelligence and longevity with intergenerational transfers\". PNAS 99: 10221-10226.\nKaplan, H.S., Lancaster, J.B., & Robson (2003). Embodied Capital and the Evolutionary Economics Of the Human Lifespan. In: Lifespan: Evolutionary, Ecology and Demographic Perspectives, J.R. Carey & S. Tuljapakur (2003). (eds.) Population and Development Review 29, Supplement: 152\u2013182.\nKozlowski, J and Wiegert, RG 1986. Optimal allocation to growth and reproduction. Theoretical Population Biology 29: 16-37.\nQuinlan, R.J. (2007). Human parental effort and environmental risk. Proceedings of the Royal Society B: Biological Sciences, 274(1606):121-125.\nDerek A. 
Roff (2007). Contributions of genomics to life-history theory. Nature Reviews Genetics 8, 116-125.\nRoff, D. (1992). The evolution of life histories: Theory and analysis. New York:Chapman & Hall.\nStearns, S. (1992). The evolution of life histories. Oxford, England: Oxford University Press.\nVigil, J. M., Geary, D. C., & Byrd-Craven, J. (2005). A life history assessment of early childhood sexual abuse in women. Developmental Psychology, 41, 553-561.\nWalker, R., Gurven, M., Hill, K., Migliano, A., Chagnon, N., Djurovic, G., Hames, R., Hurtado, AM, Kaplan, H., Oliver, W., de Souza, R., Valeggia, C., Yamauchi, T. (2006). Growth rates, developmental markers and life histories in 21 small-scale societies. American Journal of Human Biology 18:295-311.", "content_traditional": "critics also criticize dichotomy parenting effort mating effort missing existence efforts take time mating survival effort would specieswide effectsthere also criticisms size organ tradeoffs including criticism claim tradeoff body size longevity cites observation longer lifespans larger species well criticism claim big brains promoted sociality citing primate studies monkeys large portions brains surgically removed remained socially functioning though technical problem solving deteriorated flexibility computer simulations chimpanzee social interaction showing requires complex cognition cases socially functioning humans microcephalic brain sizes. critics argue unless male preference fertile females ability switch new female annulled need male provider would selected menopause use fertility keep provider male attracted theory monogamous fathers providing families therefore explain menopause evolved humansone criticism notion tradeoff mating effort parenting effort species common spend much effort something mating including exclusive parenting less energy time available competitors well meaning specieswide reductions effort spent mating reduce ability individual attract mates. however iteroparous organisms rselected kselected sparrow gives birth several chicks per year lives years compared wandering albatross first reproduces ten years old breeds every year 40year lifespanrselected organisms usually mature rapidly early age first reproduction relatively short lifespan large number offspring time reproductive events semelparous high mortality rate low offspring survival rate minimal parental careinvestmentkselected organisms usually mature slowly later age first reproduction longer lifespan offspring time reproductive events spread longer span time low mortality rate high offspring survival rate high parental investment variation variation major part lht studies every organism life history strategy. criticism also argues carnivore eats batch stored together significant difference chance one surviving depending number young stored together concluding humans stand many small animals mice selection protecting helpless youngthere criticism claim menopause somewhat earlier agerelated declines female fertility could coevolve long term dependency monogamous male providers preferred fertile females. experiments michael r rose brian charlesworth showed unstable environments select flies shorter lifespans higher fecundity \u2014 unreliable conditions better organism breed early abundantly waste resources promoting survivalbiological tradeoffs also appear characterize life histories viruses including bacteriophages. 
worth much invest lot breeding benefit investment uncertainthese tradeoffs identified put models estimate effects different life history strategies answer questions selection pressures exist different life events. criticism argues total amount predation threat faced young effective protection need effect matter comes form long childhood far natural enemies short childhood closely spaced natural enemies different life speeds subjectively thing animals outwardly looks different. example optimal investment offspring decrease total number offspring equaled increase number surviveoptimality important study life history theory serves basis many models used work assumption natural selection works life history traits moving towards optimal group traits use energy. traits seven traits traditionally recognized important life history theory size birth growth pattern age size maturity number size sex ratio offspring age sizespecific reproductive investments age sizespecific mortality schedules length lifethe trait seen important given organism one change trait creates significant difference organisms level fitness. see also age determination herbaceous plants age determination woody plants behavioral ecology biological life cycle dynamic energy budget theory metabolic organisation evolutionary developmental psychology evolutionary history life evolutionary physiology human behavioral ecology paternal care plant strategies references 52 marco del giudice evolutionary psychopathology unified approach oxford university press 2018 reading charnov e l 1993. shift important also affect aspects organisms life organization group social interactionseach species pattern timing events often known ontogeny variety produced lht studies. suited life stable environment rely long lifespan low mortality rate allow reproduce multiple times high offspring survival ratesome organisms rselected semelparous reproducing die. however semelparous organisms relatively longlived african flowering plant lobelia telekii spends several decades growing inflorescence blooms plant dies periodical cicada spends 17 years larva emerging adult. humans also unique traits make stand organisms large brain later maturity age first reproduction long lifespan high level reproduction often supported fathers older postmenopausal relatives. organism focused raising offspring devote energy pursuing matean important tradeoff dedication resources breeding predation risk organisms deal increased risk predation often invest less breeding. unpredictable environment \u2014 one resources hazards competitors may fluctuate rapidly \u2014 selects organisms produce offspring earlier lives never certain whether survive reproduce. tools used mathematical modeling quantitative genetics artificial selection demography optimality modeling mechanistic approach malthusian parameter perspectives life history theory provided new perspectives understanding many aspects human reproductive behavior relationship poverty fertility. examples major life history characteristics include age first reproductive event reproductive lifespan ageing number size offspringvariations characteristics reflect different allocations individuals resources ie time effort energy expenditure competing life functions. change brain size may result dietary shift \u2014 towards higher quality difficult obtain food sources \u2014 may driven social requirements group living promoted sharing provisioning. 
criticism argues longer time child needed parental investment relative lifespans species higher percentage children born would still need parental care female longer fertile dramatically reduced fertility. humans life history strategies include usual factors \u2014 tradeoffs constraints reproductive effort etc \u2014 also includes culture factor allows solve problems cultural means addition adaptation.", "custom_approach": "These critics also criticize the dichotomy between parenting effort and mating effort for missing the existence of other efforts that take time from mating, such as survival effort which would have the same species-wide effects.There are also criticisms of size and organ trade-offs, including criticism of the claim of a trade-off between body size and longevity that cites the observation of longer lifespans in larger species, as well as criticism of the claim that big brains promoted sociality citing primate studies in which monkeys with large portions of their brains surgically removed remained socially functioning though their technical problem solving deteriorated in flexibility, computer simulations of chimpanzee social interaction showing that it requires no complex cognition, and cases of socially functioning humans with microcephalic brain sizes. These critics argue that unless male preference for fertile females and ability to switch to a new female was annulled, any need for a male provider would have selected against menopause to use her fertility to keep the provider male attracted to her, and that the theory of monogamous fathers providing for their families therefore cannot explain why menopause evolved in humans.One criticism of the notion of a trade-off between mating effort and parenting effort is that in a species in which it is common to spend much effort on something other than mating, including but not exclusive to parenting, there is less energy and time available for such for the competitors as well, meaning that species-wide reductions in the effort spent at mating does not reduce the ability of an individual to attract other mates. However, iteroparous organisms can be more r-selected than K-selected, such as a sparrow, which gives birth to several chicks per year but lives only a few years, as compared to a wandering albatross, which first reproduces at ten years old and breeds every other year during its 40-year lifespan.r-selected organisms usually: mature rapidly and have an early age of first reproduction have a relatively short lifespan have a large number of offspring at a time, and few reproductive events, or are semelparous have a high mortality rate and a low offspring survival rate have minimal parental care/investmentK-selected organisms usually: mature more slowly and have a later age of first reproduction have a longer lifespan have few offspring at a time and more reproductive events spread out over a longer span of time have a low mortality rate and a high offspring survival rate have high parental investmentVariation is a major part of what LHT studies, because every organism has its own life history strategy. 
This criticism also argues that when a carnivore eats a batch stored together, there is no significant difference in the chance of one surviving depending on the number of young stored together, concluding that humans do not stand out from many small animals such as mice in selection for protecting helpless young.There is criticism of the claim that menopause and somewhat earlier age-related declines in female fertility could co-evolve with a long term dependency on monogamous male providers who preferred fertile females. Experiments by Michael R. Rose and Brian Charlesworth showed that unstable environments select for flies with both shorter lifespans and higher fecundity\u2014in unreliable conditions, it is better for an organism to breed early and abundantly than waste resources promoting its own survival.Biological tradeoffs also appear to characterize the life histories of viruses, including bacteriophages.Reproductive value models the tradeoffs between reproduction, growth, and survivorship. A number of statistical predictions have been confirmed by social data and there is a large body of scientific literature from studies in experimental animal models, and naturalistic studies among many organisms.The claim that long periods of helplessness in young would select for more parenting effort in protecting the young at the same time as high levels of predation would select for less parenting effort is criticized for assuming that absolute chronology would determine direction of selection. In both cases, researchers assume adaptation\u2014processes that establish fitness.There are seven traits that are traditionally recognized as important in life history theory: size at birth growth pattern age and size at maturity number, size, and sex ratio of offspring age- and size-specific reproductive investments age- and size-specific mortality schedules length of lifeThe trait that is seen as the most important for any given organism is the one where a change in that trait creates the most significant difference in that organism's level of fitness. This is because it is not worth as much to invest a lot in breeding when the benefit of such investment is uncertain.These trade-offs, once identified, can then be put into models that estimate their effects on different life history strategies and answer questions about the selection pressures that exist on different life events. For example, \"optimal investment in offspring is where the decrease in total number of offspring is equaled by the increase of the number who survive\".Optimality is important for the study of life history theory because it serves as the basis for many of the models used, which work from the assumption that natural selection, as it works on a life history traits, is moving towards the most optimal group of traits and use of energy. This criticism argues that the total amount of predation threat faced by the young has the same effective protection need effect no matter if it comes in the form of a long childhood and far between the natural enemies or a short childhood and closely spaced natural enemies, as different life speeds are subjectively the same thing for the animals and only outwardly looks different. 
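To make the optimality statement above concrete, here is a minimal sketch using invented notation (R, x, n, s, W are illustrative symbols in the spirit of standard offspring size/number models such as Smith and Fretwell's, not a formula quoted from this article). Let R be the total reproductive budget, x the investment per offspring, n = R/x the resulting number of offspring, and s(x) the probability that an offspring receiving investment x survives. Parental fitness is then

W(x) = n \, s(x) = \frac{R}{x}\, s(x),

and the optimal investment x^* satisfies dW/dx = 0, i.e. x^*\, s'(x^*) = s(x^*): the point at which making offspring any smaller (and hence more numerous) would lose exactly as much through reduced per-offspring survival as it gains in numbers.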
The burying beetle study also supported the terminal investment hypothesis: the authors found beetles that bred later in life also had increased brood sizes, reflecting greater investment in those reproductive events.The selection pressures that determine the reproductive strategy, and therefore much of the life history, of an organism can be understood in terms of r/K selection theory. For example, an exploitative life history strategy would be one where an organism benefits by using more resources than others, or by taking these resources from other organisms.Life history characteristics are traits that affect the life table of an organism, and can be imagined as various investments in growth, reproduction, and survivorship. There are two main focuses that have developed over time: genetic and phenotypic, but there has been a recent movement towards combining these two approaches.All organisms follow a specific sequence in their development, beginning with gestation and ending with death, which is known as the life cycle. Research has also indicated that humans may pursue different reproductive strategies.mathematical modeling quantitative genetics artificial selection demography optimality modeling mechanistic approach Malthusian parameterLife history theory has provided new perspectives in understanding many aspects of human reproductive behavior, such as the relationship between poverty and fertility. This shift is important because it can also affect other aspects of an organism's life, such as the organization of its group or its social interactions.Each species has its own pattern and timing for these events, often known as its ontogeny, and the variety produced by this is what LHT studies. They are more suited to life in a stable environment in which they can rely on a long lifespan and a low mortality rate that will allow them to reproduce multiple times with a high offspring survival rate.Some organisms that are very r-selected are semelparous, only reproducing once before they die. A highly unpredictable environment can also lead to plasticity, in which individual organisms can shift along the spectrum of r-selected vs. K-selected life histories to suit the environment.In studying humans, life history theory is used in many ways, including in biology, psychology, economics, anthropology, and other fields. However, some semelparous organisms are relatively long-lived, such as the African flowering plant Lobelia telekii which spends up to several decades growing an inflorescence that blooms only once before the plant dies, or the periodical cicada which spends 17 years as a larva before emerging as an adult. 
Humans also have unique traits that make them stand out from other organisms, such as a large brain, later maturity and age of first reproduction, a long lifespan, and a high level of reproduction, often supported by fathers and older (post-menopausal) relatives.", "combined_approach": "critics also criticize dichotomy parenting effort mating effort missing existence efforts take time mating survival effort would specieswide effectsthere also criticisms size organ tradeoffs including criticism claim tradeoff body size longevity cites observation longer lifespans larger species well criticism claim big brains promoted sociality citing primate studies monkeys large portions brains surgically removed remained socially functioning though technical problem solving deteriorated flexibility computer simulations chimpanzee social interaction showing requires complex cognition cases socially functioning humans microcephalic brain sizes. critics argue unless male preference fertile females ability switch new female annulled need male provider would selected menopause use fertility keep provider male attracted theory monogamous fathers providing families therefore explain menopause evolved humansone criticism notion tradeoff mating effort parenting effort species common spend much effort something mating including exclusive parenting less energy time available competitors well meaning specieswide reductions effort spent mating reduce ability individual attract mates. however iteroparous organisms rselected kselected sparrow gives birth several chicks per year lives years compared wandering albatross first reproduces ten years old breeds every year 40year lifespanrselected organisms usually mature rapidly early age first reproduction relatively short lifespan large number offspring time reproductive events semelparous high mortality rate low offspring survival rate minimal parental careinvestmentkselected organisms usually mature slowly later age first reproduction longer lifespan offspring time reproductive events spread longer span time low mortality rate high offspring survival rate high parental investmentvariation major part lht studies every organism life history strategy. criticism also argues carnivore eats batch stored together significant difference chance one surviving depending number young stored together concluding humans stand many small animals mice selection protecting helpless youngthere criticism claim menopause somewhat earlier agerelated declines female fertility could coevolve long term dependency monogamous male providers preferred fertile females. experiments michael r rose brian charlesworth showed unstable environments select flies shorter lifespans higher fecundity \u2014 unreliable conditions better organism breed early abundantly waste resources promoting survivalbiological tradeoffs also appear characterize life histories viruses including bacteriophagesreproductive value models tradeoffs reproduction growth survivorship. number statistical predictions confirmed social data large body scientific literature studies experimental animal models naturalistic studies among many organismsthe claim long periods helplessness young would select parenting effort protecting young time high levels predation would select less parenting effort criticized assuming absolute chronology would determine direction selection. 
cases researchers assume adaptation \u2014 processes establish fitnessthere seven traits traditionally recognized important life history theory size birth growth pattern age size maturity number size sex ratio offspring age sizespecific reproductive investments age sizespecific mortality schedules length lifethe trait seen important given organism one change trait creates significant difference organisms level fitness. worth much invest lot breeding benefit investment uncertainthese tradeoffs identified put models estimate effects different life history strategies answer questions selection pressures exist different life events. example optimal investment offspring decrease total number offspring equaled increase number surviveoptimality important study life history theory serves basis many models used work assumption natural selection works life history traits moving towards optimal group traits use energy. criticism argues total amount predation threat faced young effective protection need effect matter comes form long childhood far natural enemies short childhood closely spaced natural enemies different life speeds subjectively thing animals outwardly looks different. burying beetle study also supported terminal investment hypothesis authors found beetles bred later life also increased brood sizes reflecting greater investment reproductive eventsthe selection pressures determine reproductive strategy therefore much life history organism understood terms rk selection theory. example exploitative life history strategy would one organism benefits using resources others taking resources organismslife history characteristics traits affect life table organism imagined various investments growth reproduction survivorship. two main focuses developed time genetic phenotypic recent movement towards combining two approachesall organisms follow specific sequence development beginning gestation ending death known life cycle. research also indicated humans may pursue different reproductive strategiesmathematical modeling quantitative genetics artificial selection demography optimality modeling mechanistic approach malthusian parameterlife history theory provided new perspectives understanding many aspects human reproductive behavior relationship poverty fertility. shift important also affect aspects organisms life organization group social interactionseach species pattern timing events often known ontogeny variety produced lht studies. suited life stable environment rely long lifespan low mortality rate allow reproduce multiple times high offspring survival ratesome organisms rselected semelparous reproducing die. highly unpredictable environment also lead plasticity individual organisms shift along spectrum rselected vs kselected life histories suit environmentin studying humans life history theory used many ways including biology psychology economics anthropology fields. however semelparous organisms relatively longlived african flowering plant lobelia telekii spends several decades growing inflorescence blooms plant dies periodical cicada spends 17 years larva emerging adult. humans also unique traits make stand organisms large brain later maturity age first reproduction long lifespan high level reproduction often supported fathers older postmenopausal relatives."}, {"topic": "Particle physics", "summary": "Particle physics or high energy physics is the study of fundamental particles and forces that constitute matter and radiation. 
The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction.\nQuarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators.\nParticles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle.\nThese elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry theory.\nPractical particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated by theoretical particle physicists and its presence confirmed by practical experiments.\n\n", "content": "\n== History ==\n\nThe idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning \"indivisible\", has since then denoted the smallest particle of a chemical element, but physicists soon discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons.\nThroughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from beams of increasingly high energy. It was referred to informally as the \"particle zoo\". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance. 
After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics.\n\n\n== Standard Model ==\n\nThe current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are eight gluons, the W\u2212, W+ and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (see Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.\n\n\n== Subatomic particles ==\nModern particle physics research is focused on subatomic particles, including atomic constituents such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks); particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons; and a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model. Dynamics of particles are also governed by quantum mechanics; they exhibit wave\u2013particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.\n\n\n=== Quarks and leptons ===\n\nOrdinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). Collectively, quarks and leptons are called fermions, because they have a quantum spin of half-integer value (-1/2, 1/2, 3/2, etc.). This causes fermions to obey the Pauli exclusion principle, under which no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (-1/3 or 2/3) and leptons have whole-numbered electric charge (0 or -1).
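As a quick illustration of how these fractional charges combine into the integer charges of ordinary matter, the following sketch sums quark charges for the proton (two up quarks and one down quark) and the neutron (one up and two down). It is an invented snippet for this article, not code from any cited source; the names QUARK_CHARGE and hadron_charge are made up here.

from fractions import Fraction

# Electric charges of the first-generation quarks, in units of the elementary charge e.
QUARK_CHARGE = {
    "u": Fraction(2, 3),   # up quark
    "d": Fraction(-1, 3),  # down quark
}

def hadron_charge(quark_content):
    # Sum the fractional quark charges of a hadron given as a string of quark symbols.
    return sum(QUARK_CHARGE[q] for q in quark_content)

print(hadron_charge("uud"))  # proton:  2/3 + 2/3 - 1/3 = 1
print(hadron_charge("udd"))  # neutron: 2/3 - 1/3 - 1/3 = 0

Using Fraction rather than floating-point numbers keeps the arithmetic exact, so the integer results fall out without rounding.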
Quarks also have color charge, which is labeled arbitrarily as red, green, and blue, with no correlation to actual light color. Because the interaction between quarks stores energy that can convert into other particles when the quarks are pulled far enough apart, quarks cannot be observed independently. This is called color confinement. There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist.\n\n\n=== Bosons ===\n\nBosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light. The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism \u2013 the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 or 1) and can occupy the same quantum state.\n\n\n=== Antiparticles and color charge ===\n\nMost of the aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, and antiparticles have negative lepton or baryon number. Most properties of corresponding antiparticles and particles are the same, with a few reversed; for example, the electron's antiparticle, the positron, has the opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript. For example, the electron and the positron are denoted e\u2212 and e+. When a particle and an antiparticle interact with each other, they are annihilated and convert to other particles. Some particles have no antiparticles, such as the photon or gluon. Quarks and gluons additionally have color charges, which influence the strong interaction. Quarks' color charges are called red, green, and blue (though the particles themselves have no physical color), and the corresponding charges of antiquarks are called antired, antigreen, and antiblue. The gluon can carry eight kinds of color charge, which arise from the SU(3) gauge symmetry governing the quarks' interactions as they form composite particles.\n\n\n=== Composite ===\n\nThe neutrons and protons in atomic nuclei are baryons \u2013 the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction and are thus subject to quantum chromodynamics (the theory of color charge). The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other types, arrangements, or numbers of quarks (tetraquarks, pentaquarks). A normal atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed.
A simple example would be the hydrogen-4.1, which has one of its electrons replaced with a muon.\n\n\n=== Hypothetical ===\nGraviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected nor completely reconciled with current theories.\n\n\n== Experimental laboratories ==\n\nThe world's major particle physics laboratories are:\n\nBrookhaven National Laboratory (Long Island, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider.\nBudker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron-positron colliders VEPP-2000, operated since 2006, and VEPP-4, started experiments in 1994. Earlier facilities include the first electron\u2013electron beam\u2013beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron colliders VEPP-2, operated from 1965 to 1974; and, its successor VEPP-2M, performed experiments from 1974 to 2000.\nCERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron\u2013Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments.\nDESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL.\nFermi National Accelerator Laboratory (Fermilab) (Batavia, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009.\nInstitute of High Energy Physics (IHEP) (Beijing, China). IHEP manages a number of China's major particle physics facilities, including the Beijing Electron\u2013Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO).\nKEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons.\nSLAC National Accelerator Laboratory (Menlo Park, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator is being used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. 
SLAC staff continue to participate in developing and building many particle detectors around the world.\n\n\n== Theory ==\nTheoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today.\nOne important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists.\nAnother major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall\u2013Sundrum models), Preon theory, combinations of these, or other ideas.\nA third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a \"Theory of Everything\", or \"TOE\".There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.\n\n\n== Practical applications ==\nIn principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if \"particle physics\" is taken to mean only \"high-energy atom smashers\", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics.\n\n\n== Future ==\nThe primary goal, which is pursued in several distinct ways, is to find and understand what physics may lie beyond the standard model. There are several powerful experimental reasons to expect new physics, including dark matter and neutrino mass. 
There are also theoretical hints that this new physics should be found at accessible energy scales.\nMuch of the effort to find beyond Standard Model physics has been focused on new collider experiments. As of March, 2023, no beyond-Standard-Model signatures are observed at the Large Hadron Collider (LHC). This implies that new physics signals must be too rare or else manifest at too high energy to be observed at LHC. To address rare signals, one builds a very high rate source with low backgrounds. This is the concept behind the multiple Higgs factory proposals. These consist of lepton-lepton (either electron-positron or muon-antimuon) colliders with center of mass energy chosen to produce Higgs particles. Because the leptons annihilate, the events in the detector have few extraneous particles, unlike hadron colliders. This allows for accurate event reconstruction. New physics is probed by high precision reconstruction of the large sample of Higgs bosons. Alternatively, to reach higher energies, it is necessary to construct a collider even larger than the LHC. The Future Circular Collider proposed for CERN is an example of a 100 TeV center of mass proton collider proposal, representing an order of magnitude increase over the LHC.\nThere are important non-collider experiments that attempt to find and understand physics beyond the Standard Model. One is the determination of the neutrino masses, since these masses may arise from neutrinos mixing with very heavy particles. Another is cosmological observations that provide constraints on the dark matter, although it may be impossible to determine the exact nature of the dark matter without the colliders. Finally, lower bounds on the very long lifetime of the proton put constraints on Grand Unified Theories at energy scales much higher than collider experiments will be able to probe any time soon.\nIn 2023, the Particle Physics Project Prioritization Panel (P5) began a new decadal study on the future of particle physics in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments.\n\n\n== See also ==\n\n\n== References ==\n\n\n== External links ==", "content_traditional": "ihep manages number chinas major particle physics facilities including beijing electron \u2013 positron collider iibepc ii beijing spectrometer bes beijing synchrotron radiation facility bsrf international cosmicray observatory yangbajing tibet daya bay reactor neutrino experiment china spallation neutron source hard xray modulation telescope hxmt acceleratordriven subcritical system ads well jiangmen underground neutrino observatory juno. 4 july 2012 physicists large hadron collider cern announced found new particle behaves similarly expected higgs bosonthe standard model currently formulated 61 elementary particles. word atom greek word atomos meaning indivisible since denoted smallest particle chemical element physicists soon discovered atoms fact fundamental particles nature conglomerates even smaller particles electron. subatomic particles modern particle physics research focused subatomic particles including atomic constituents electrons protons neutrons protons neutrons composite particles called baryons made quarks produced radioactive scattering processes particles photons neutrinos muons well wide range exotic particles. 
earlier facilities include large electron \u2013 positron collider lep stopped 2 november 2000 dismantled give way lhc super proton synchrotron reused preaccelerator lhc fixedtarget experiments. particles interactions observed date described almost entirely standard modeldynamics particles also governed quantum mechanics exhibit wave \u2013 particle duality displaying particlelike behaviour certain experimental conditions wavelike behaviour others. called color confinementthere three known generations quarks strange charm top bottom leptons electron neutrino muon neutrino tau neutrino strong indirect evidence fourth generation fermions nt exist. finally lower bounds long lifetime proton put constraints grand unified theories energy scales much higher collider experiments able probe time soon. practice even particle physics taken mean highenergy atom smashers many technologies developed pioneering investigations later find wide uses society. early 20th century explorations nuclear physics quantum physics led proofs nuclear fission 1939 lise meitner based experiments otto hahn nuclear fusion hans bethe year discoveries also led development nuclear weapons. theory successful may considered theory everything toethere also areas work theoretical particle physics ranging particle cosmology loop quantum gravity. 2023 particle physics project prioritization panel p5 began new decadal study future particle physics us update 2014 p5 study recommended deep underground neutrino experiment among experiments. may involve work supersymmetry alternatives higgs mechanism extra spatial dimensions randall \u2013 sundrum models preon theory combinations ideas. theorists make quantitative predictions observables collider astronomical experiments along experimental measurements used extract parameters standard model less uncertainty. earlier facilities include first electron \u2013 electron beam \u2013 beam collider vep1 conducted experiments 1964 1968 electronpositron colliders vepp2 operated 1965 1974 successor vepp2 performed experiments 1974 2000. main facility 2011 tevatron collided protons antiprotons highestenergy particle collider earth large hadron collider surpassed 29 november 2009. additional applications found medicine national security industry computing science workforce development illustrating long growing list beneficial practical applications contributions particle physics. standard model current state classification elementary particles explained standard model gained widespread acceptance mid1970s experimental confirmation existence quarks. following convention particle physicists term elementary particles applied particles according current understanding presumed indivisible composed particles. string theorists attempt construct unified description quantum mechanics general relativity building theory based small strings branes rather particles. main project large hadron collider lhc first beam circulation 10 september 2008 worlds energetic collider protons. another major effort model building model builders develop ideas physics may lie beyond standard model higher energies smaller distances. future circular collider proposed cern example 100 tev center mass proton collider proposal representing order magnitude increase lhc. 19th century john dalton work stoichiometry concluded element nature composed single unique type particle. exotic hadrons types arrangement number quarks tetraquark pentaquarka normal atom made protons neutrons electrons. 
2milelong linear particle accelerator began operating 1962 basis numerous electron positron collision experiments 2008. future primary goal pursued several distinct ways find understand physics may lie beyond standard model. another cosmological observations provide constraints dark matter although may impossible determine exact nature dark matter without colliders. interactions quarks stores energy convert particles quarks far apart enough quarks observed independently. particles antiparticles photon gluonquarks gluons additionally color charges influences strong interaction. elementary particles combine form composite particles accounting hundreds species particles discovered since 1960s. however particle physicists believe incomplete description nature fundamental theory awaits discovery see theory everything. properties corresponding antiparticles particles gets reversed electrons antiparticle positron opposite charge. quarks color charges called red green blue though particle physical color antiquarks called antired antigreen antiblue. hypothetical graviton hypothetical particle mediate gravitational interaction detected completely reconciled current theories. since linear accelerator used linac coherent light source xray laser well advanced accelerator design research. recent years measurements neutrino mass provided first experimental deviations standard model since neutrinos mass standard model. theory theoretical particle physics attempts develop models theoretical framework mathematical tools understand current experiments make predictions future experiments see also theoretical physics. gluon eight color charges result quarks interactions form composite particles gauge symmetry su3. standard model also contains 24 fundamental fermions 12 particles associated antiparticles constituents matter.", "custom_approach": "In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.Modern particle physics research is focused on subatomic particles, including atomic constituents, such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), that are produced by radioactive and scattering processes; such particles are photons, neutrinos, and muons, as well as a wide range of exotic particles. IHEP manages a number of China's major particle physics facilities, including the Beijing Electron\u2013Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO). A simple example would be the hydrogen-4.1, which has one of its electrons replaced with a muon.Graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected nor completely reconciled with current theories.The world's major particle physics laboratories are: Brookhaven National Laboratory (Long Island, United States). 
Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics.The primary goal, which is pursued in several distinct ways, is to find and understand what physics may lie beyond the standard model. This is called color confinement.There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that the fourth generation of fermions doesn't exist.Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. If the theory is successful, it may be considered a \"Theory of Everything\", or \"TOE\".There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson.The Standard Model, as currently formulated, has 61 elementary particles. The word atom, after the Greek word atomos meaning \"indivisible\", has since then denoted the smallest particle of a chemical element, but physicists soon discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. Earlier facilities include the Large Electron\u2013Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments. All particles and their interactions observed to date can be described almost entirely by the Standard Model.Dynamics of particles are also governed by quantum mechanics; they exhibit wave\u2013particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. The gluon can have eight color charges, which are the result of quarks' interactions to form composite particles (gauge symmetry SU(3)).The neutrons and protons in the atomic nuclei are baryons \u2013 the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. Finally, lower bounds on the very long lifetime of the proton put constraints on Grand Unified Theories at energy scales much higher than collider experiments will be able to probe any time soon. SLAC staff continue to participate in developing and building many particle detectors around the world.Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). 
In practice, even if \"particle physics\" is taken to mean only \"high-energy atom smashers\", many technologies have been developed during these pioneering investigations that later find wide uses in society. This reclassification marked the beginning of modern particle physics.The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. In 2023, the Particle Physics Project Prioritization Panel (P5) began a new decadal study on the future of particle physics in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall\u2013Sundrum models), Preon theory, combinations of these, or other ideas. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009. Earlier facilities include the first electron\u2013electron beam\u2013beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron colliders VEPP-2, operated from 1965 to 1974; and, its successor VEPP-2M, performed experiments from 1974 to 2000. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. The Future Circular Collider proposed for CERN is an example of a 100 TeV center of mass proton collider proposal, representing an order of magnitude increase over the LHC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. More exotic hadrons can have other types, arrangement or number of quarks (tetraquark, pentaquark).A normal atom is made from protons, neutrons and electrons. Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Another is cosmological observations that provide constraints on the dark matter, although it may be impossible to determine the exact nature of the dark matter without the colliders. Because the interactions between the quarks stores energy which can convert to other particles when the quarks are far apart enough, quarks cannot be observed independently. 
Some particles have no antiparticles, such as the photon or gluon.Quarks and gluons additionally have color charges, which influences the strong interaction. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s.", "combined_approach": "recent years measurements neutrino mass provided first experimental deviations standard model since neutrinos mass standard modelmodern particle physics research focused subatomic particles including atomic constituents electrons protons neutrons protons neutrons composite particles called baryons made quarks produced radioactive scattering processes particles photons neutrinos muons well wide range exotic particles. ihep manages number chinas major particle physics facilities including beijing electron \u2013 positron collider iibepc ii beijing spectrometer bes beijing synchrotron radiation facility bsrf international cosmicray observatory yangbajing tibet daya bay reactor neutrino experiment china spallation neutron source hard xray modulation telescope hxmt acceleratordriven subcritical system ads well jiangmen underground neutrino observatory juno. simple example would hydrogen41 one electrons replaced muongraviton hypothetical particle mediate gravitational interaction detected completely reconciled current theoriesthe worlds major particle physics laboratories brookhaven national laboratory long island united states. additional applications found medicine national security industry computing science workforce development illustrating long growing list beneficial practical applications contributions particle physicsthe primary goal pursued several distinct ways find understand physics may lie beyond standard model. called color confinementthere three known generations quarks strange charm top bottom leptons electron neutrino muon neutrino tau neutrino strong indirect evidence fourth generation fermions nt existbosons mediators carriers fundamental interactions electromagnetism weak interaction strong interaction. theory successful may considered theory everything toethere also areas work theoretical particle physics ranging particle cosmology loop quantum gravityin principle physics practical applications developed therefrom derived study fundamental particles. following convention particle physicists term elementary particles applied particles according current understanding presumed indivisible composed particlesordinary matter made firstgeneration quarks leptons electron electron neutrino. 4 july 2012 physicists large hadron collider cern announced found new particle behaves similarly expected higgs bosonthe standard model currently formulated 61 elementary particles. word atom greek word atomos meaning indivisible since denoted smallest particle chemical element physicists soon discovered atoms fact fundamental particles nature conglomerates even smaller particles electron. earlier facilities include large electron \u2013 positron collider lep stopped 2 november 2000 dismantled give way lhc super proton synchrotron reused preaccelerator lhc fixedtarget experiments. particles interactions observed date described almost entirely standard modeldynamics particles also governed quantum mechanics exhibit wave \u2013 particle duality displaying particlelike behaviour certain experimental conditions wavelike behaviour others. 
gluon eight color charges result quarks interactions form composite particles gauge symmetry su3the neutrons protons atomic nuclei baryons \u2013 neutron composed two quarks one quark proton composed two quarks one quark. finally lower bounds long lifetime proton put constraints grand unified theories energy scales much higher collider experiments able probe time soon. slac staff continue participate developing building many particle detectors around worldtheoretical particle physics attempts develop models theoretical framework mathematical tools understand current experiments make predictions future experiments see also theoretical physics. practice even particle physics taken mean highenergy atom smashers many technologies developed pioneering investigations later find wide uses society. reclassification marked beginning modern particle physicsthe current state classification elementary particles explained standard model gained widespread acceptance mid1970s experimental confirmation existence quarks. early 20th century explorations nuclear physics quantum physics led proofs nuclear fission 1939 lise meitner based experiments otto hahn nuclear fusion hans bethe year discoveries also led development nuclear weapons. 2023 particle physics project prioritization panel p5 began new decadal study future particle physics us update 2014 p5 study recommended deep underground neutrino experiment among experiments. may involve work supersymmetry alternatives higgs mechanism extra spatial dimensions randall \u2013 sundrum models preon theory combinations ideas. theorists make quantitative predictions observables collider astronomical experiments along experimental measurements used extract parameters standard model less uncertainty. main facility 2011 tevatron collided protons antiprotons highestenergy particle collider earth large hadron collider surpassed 29 november 2009. earlier facilities include first electron \u2013 electron beam \u2013 beam collider vep1 conducted experiments 1964 1968 electronpositron colliders vepp2 operated 1965 1974 successor vepp2 performed experiments 1974 2000. string theorists attempt construct unified description quantum mechanics general relativity building theory based small strings branes rather particles. another major effort model building model builders develop ideas physics may lie beyond standard model higher energies smaller distances. main project large hadron collider lhc first beam circulation 10 september 2008 worlds energetic collider protons. future circular collider proposed cern example 100 tev center mass proton collider proposal representing order magnitude increase lhc. 19th century john dalton work stoichiometry concluded element nature composed single unique type particle. exotic hadrons types arrangement number quarks tetraquark pentaquarka normal atom made protons neutrons electrons. 2milelong linear particle accelerator began operating 1962 basis numerous electron positron collision experiments 2008. another cosmological observations provide constraints dark matter although may impossible determine exact nature dark matter without colliders. interactions quarks stores energy convert particles quarks far apart enough quarks observed independently. particles antiparticles photon gluonquarks gluons additionally color charges influences strong interaction. 
elementary particles combine form composite particles accounting hundreds species particles discovered since 1960s."}, {"topic": "Superfluidity", "summary": "Superfluidity is the characteristic property of a fluid with zero viscosity which therefore flows without any loss of kinetic energy. When stirred, a superfluid forms vortices that continue to rotate indefinitely. Superfluidity occurs in two isotopes of helium (helium-3 and helium-4) when they are liquefied by cooling to cryogenic temperatures. It is also a property of various other exotic states of matter theorized to exist in astrophysics, high-energy physics, and theories of quantum gravity. The theory of superfluidity was developed by Soviet theoretical physicists Lev Landau and Isaak Khalatnikov.\nSuperfluidity is often coincidental with Bose\u2013Einstein condensation, but neither phenomenon is directly related to the other; not all Bose\u2013Einstein condensates can be regarded as superfluids, and not all superfluids are Bose\u2013Einstein condensates.", "content": "\n\n\n== Superfluidity of liquid helium ==\n\nSuperfluidity was discovered in helium-4 by Pyotr Kapitsa and independently by John F. Allen and Don Misener in 1937. It has since been described through phenomenology and microscopic theories. In liquid helium-4, the superfluidity occurs at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its integer spin. A helium-3 atom is a fermion particle; it can form bosons only by pairing with another particle like itself at much lower temperatures. The discovery of superfluidity in helium-3 was the basis for the award of the 1996 Nobel Prize in Physics. This process is similar to the electron pairing in superconductivity.\n\n\n== Ultracold atomic gases ==\nSuperfluidity in an ultracold fermionic gas was experimentally proven by Wolfgang Ketterle and his team who observed quantum vortices in lithium-6 at a temperature of 50 nK at MIT in April 2005. Such vortices had previously been observed in an ultracold bosonic gas using rubidium-87 in 2000, and more recently in two-dimensional gases. As early as 1999, Lene Hau created such a condensate using sodium atoms for the purpose of slowing light, and later stopping it completely. Her team subsequently used this system of compressed light to generate the superfluid analogue of shock waves and tornadoes:\n These dramatic excitations result in the formation of solitons that in turn decay into quantized vortices\u2014created far out of equilibrium, in pairs of opposite circulation\u2014revealing directly the process of superfluid breakdown in Bose\u2013Einstein condensates. With a double light-roadblock setup, we can generate controlled collisions between shock waves resulting in completely unexpected, nonlinear excitations. We have observed hybrid structures consisting of vortex rings embedded in dark solitonic shells. The vortex rings act as 'phantom propellers' leading to very rich excitation dynamics. \n\n\n== Superfluids in astrophysics ==\nThe idea that superfluidity exists inside neutron stars was first proposed by Arkady Migdal. 
By analogy with electrons inside superconductors forming Cooper pairs because of electron-lattice interaction, it is expected that nucleons in a neutron star at sufficiently high density and low temperature can also form Cooper pairs because of the long-range attractive nuclear force and lead to superfluidity and superconductivity.\n\n\n== In high-energy physics and quantum gravity ==\n\nSuperfluid vacuum theory (SVT) is an approach in theoretical physics and quantum mechanics where the physical vacuum is viewed as superfluid.\nThe ultimate goal of the approach is to develop scientific models that unify quantum mechanics (describing three of the four known fundamental interactions) with gravity. This makes SVT a candidate for the theory of quantum gravity and an extension of the Standard Model.\nIt is hoped that development of such theory would unify into a single consistent model of all fundamental interactions,\nand to describe all known interactions and elementary particles as different manifestations of the same entity, superfluid vacuum.\nOn the macro-scale a larger similar phenomenon has been suggested as happening in the murmurations of starlings. The rapidity of change in flight patterns mimics the phase change leading to superfluidity in some liquid states.Light behaves like a superfluid in various applications such as Poisson's Spot. As the liquid helium shown above, light will travel along the surface of an obstacle before continuing along its trajectory. Since light is not affected by local gravity its \"level\" becomes its own trajectory and velocity. Another example is how a beam of light travels through the hole of an aperture and along its backside before diffraction.\n\n\n== See also ==\nBoojum (superfluidity)\nCondensed matter physics\nMacroscopic quantum phenomena\nQuantum hydrodynamics\nSlow light\nSuperconductivity\nSupersolid\n\n\n== References ==\n\n\n== Further reading ==\nKhalatnikov, Isaac M. (2018). An introduction to the theory of superfluidity. CRC Press. ISBN 978-0-42-997144-0.\nAnnett, James F. (2005). Superconductivity, superfluids, and condensates. Oxford: Oxford Univ. Press. ISBN 978-0-19-850756-7.\nGu\u00e9nault, Antony M. (2003). Basic superfluids. London: Taylor & Francis. ISBN 0-7484-0891-6. Guenault, Tony (28 November 2002). 2002 pbk edition. ISBN 9780748408917.\nSvistunov, B. V., Babaev E. S., Prokof'ev N. V. Superfluid States of Matter\nVolovik, G. E. (2003). The Universe in a helium droplet. Int. Ser. Monogr. Phys. Vol. 117. pp. 1\u2013507. ISBN 978-0-19-850782-6; hbk edition{{cite book}}: CS1 maint: postscript (link) Volovik, Grigory E. (6 March 2003). 2003 pbk edition. ISBN 9780198507826.\n\n\n== External links ==\n Quotations related to Superfluidity at Wikiquote\n Media related to Superfluidity at Wikimedia Commons\nVideo: Demonstration of superfluid helium (Alfred Leitner, 1963, 38 min.)\nSuperfluidity seen in a 2d fermi gas recent 2021 observation relevant for Cuprate superconductors", "content_traditional": "superfluidity liquid helium superfluidity discovered helium4 pyotr kapitsa independently john f allen misener 1937. since described phenomenology microscopic theories. liquid helium4 superfluidity occurs far higher temperatures helium3. atom helium4 boson particle virtue integer spin. helium3 atom fermion particle form bosons pairing another particle like much lower temperatures. discovery superfluidity helium3 basis award 1996 nobel prize physics. process similar electron pairing superconductivity. 
ultracold atomic gases superfluidity ultracold fermionic gas experimentally proven wolfgang ketterle team observed quantum vortices lithium6 temperature 50 nk mit april 2005. vortices previously observed ultracold bosonic gas using rubidium87 2000 recently twodimensional gases. early 1999 lene hau created condensate using sodium atoms purpose slowing light later stopping completely. team subsequently used system compressed light generate superfluid analogue shock waves tornadoes dramatic excitations result formation solitons turn decay quantized vortices \u2014 created far equilibrium pairs opposite circulation \u2014 revealing directly process superfluid breakdown bose \u2013 einstein condensates. double lightroadblock setup generate controlled collisions shock waves resulting completely unexpected nonlinear excitations. observed hybrid structures consisting vortex rings embedded dark solitonic shells. vortex rings act phantom propellers leading rich excitation dynamics. superfluids astrophysics idea superfluidity exists inside neutron stars first proposed arkady migdal. analogy electrons inside superconductors forming cooper pairs electronlattice interaction expected nucleons neutron star sufficiently high density low temperature also form cooper pairs longrange attractive nuclear force lead superfluidity superconductivity. highenergy physics quantum gravity superfluid vacuum theory svt approach theoretical physics quantum mechanics physical vacuum viewed superfluid. ultimate goal approach develop scientific models unify quantum mechanics describing three four known fundamental interactions gravity. makes svt candidate theory quantum gravity extension standard model. hoped development theory would unify single consistent model fundamental interactions describe known interactions elementary particles different manifestations entity superfluid vacuum. macroscale larger similar phenomenon suggested happening murmurations starlings. rapidity change flight patterns mimics phase change leading superfluidity liquid stateslight behaves like superfluid various applications poissons spot. liquid helium shown light travel along surface obstacle continuing along trajectory. since light affected local gravity level becomes trajectory velocity. another example beam light travels hole aperture along backside diffraction. see also boojum superfluidity condensed matter physics macroscopic quantum phenomena quantum hydrodynamics slow light superconductivity supersolid references reading khalatnikov isaac 2018. introduction theory superfluidity. crc press. isbn 9780429971440. annett james f 2005. superconductivity superfluids condensates. oxford oxford univ. press. isbn 9780198507567. gu\u00e9nault antony 2003. basic superfluids. london taylor francis. isbn 0748408916. guenault tony 28 november 2002. 2002 pbk edition. isbn 9780748408917. svistunov b v babaev e prokofev n v superfluid states matter volovik g e 2003. universe helium droplet. int. ser. monogr. phys. vol. 117 pp. 1\u2013507. isbn 9780198507826 hbk editioncite book cs1 maint postscript link volovik grigory e 6 march 2003. 2003 pbk edition. isbn 9780198507826. external links quotations related superfluidity wikiquote media related superfluidity wikimedia commons video demonstration superfluid helium alfred leitner 1963 38 min. superfluidity seen 2d fermi gas recent 2021 observation relevant cuprate superconductors.", "custom_approach": "Superfluidity was discovered in helium-4 by Pyotr Kapitsa and independently by John F. 
Allen and Don Misener in 1937. It has since been described through phenomenology and microscopic theories. In liquid helium-4, the superfluidity occurs at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its integer spin. A helium-3 atom is a fermion particle; it can form bosons only by pairing with another particle like itself at much lower temperatures. The discovery of superfluidity in helium-3 was the basis for the award of the 1996 Nobel Prize in Physics. This process is similar to the electron pairing in superconductivity. Superfluidity in an ultracold fermionic gas was experimentally proven by Wolfgang Ketterle and his team, who observed quantum vortices in lithium-6 at a temperature of 50 nK at MIT in April 2005. Such vortices had previously been observed in an ultracold bosonic gas using rubidium-87 in 2000, and more recently in two-dimensional gases. As early as 1999, Lene Hau created such a condensate using sodium atoms for the purpose of slowing light, and later stopping it completely. Her team subsequently used this system of compressed light to generate the superfluid analogue of shock waves and tornadoes: These dramatic excitations result in the formation of solitons that in turn decay into quantized vortices\u2014created far out of equilibrium, in pairs of opposite circulation\u2014revealing directly the process of superfluid breakdown in Bose\u2013Einstein condensates. With a double light-roadblock setup, we can generate controlled collisions between shock waves resulting in completely unexpected, nonlinear excitations. We have observed hybrid structures consisting of vortex rings embedded in dark solitonic shells. The vortex rings act as 'phantom propellers' leading to very rich excitation dynamics. The idea that superfluidity exists inside neutron stars was first proposed by Arkady Migdal. By analogy with electrons inside superconductors forming Cooper pairs because of electron-lattice interaction, it is expected that nucleons in a neutron star at sufficiently high density and low temperature can also form Cooper pairs because of the long-range attractive nuclear force and lead to superfluidity and superconductivity. Superfluid vacuum theory (SVT) is an approach in theoretical physics and quantum mechanics where the physical vacuum is viewed as superfluid. The ultimate goal of the approach is to develop scientific models that unify quantum mechanics (describing three of the four known fundamental interactions) with gravity. This makes SVT a candidate for the theory of quantum gravity and an extension of the Standard Model. It is hoped that the development of such a theory would unify all fundamental interactions into a single consistent model and describe all known interactions and elementary particles as different manifestations of the same entity, the superfluid vacuum. On the macro-scale, a larger similar phenomenon has been suggested as happening in the murmurations of starlings. The rapidity of change in flight patterns mimics the phase change leading to superfluidity in some liquid states. Light behaves like a superfluid in various applications such as Poisson's Spot. Like the liquid helium shown above, light will travel along the surface of an obstacle before continuing along its trajectory. Since light is not affected by local gravity, its \"level\" becomes its own trajectory and velocity. 
Another example is how a beam of light travels through the hole of an aperture and along its backside before diffraction.", "combined_approach": "superfluidity discovered helium4 pyotr kapitsa independently john f allen misener 1937. since described phenomenology microscopic theories. liquid helium4 superfluidity occurs far higher temperatures helium3. atom helium4 boson particle virtue integer spin. helium3 atom fermion particle form bosons pairing another particle like much lower temperatures. discovery superfluidity helium3 basis award 1996 nobel prize physics. process similar electron pairing superconductivitysuperfluidity ultracold fermionic gas experimentally proven wolfgang ketterle team observed quantum vortices lithium6 temperature 50 nk mit april 2005. vortices previously observed ultracold bosonic gas using rubidium87 2000 recently twodimensional gases. early 1999 lene hau created condensate using sodium atoms purpose slowing light later stopping completely. team subsequently used system compressed light generate superfluid analogue shock waves tornadoes dramatic excitations result formation solitons turn decay quantized vortices \u2014 created far equilibrium pairs opposite circulation \u2014 revealing directly process superfluid breakdown bose \u2013 einstein condensates. double lightroadblock setup generate controlled collisions shock waves resulting completely unexpected nonlinear excitations. observed hybrid structures consisting vortex rings embedded dark solitonic shells. vortex rings act phantom propellers leading rich excitation dynamicsthe idea superfluidity exists inside neutron stars first proposed arkady migdal. analogy electrons inside superconductors forming cooper pairs electronlattice interaction expected nucleons neutron star sufficiently high density low temperature also form cooper pairs longrange attractive nuclear force lead superfluidity superconductivitysuperfluid vacuum theory svt approach theoretical physics quantum mechanics physical vacuum viewed superfluid. ultimate goal approach develop scientific models unify quantum mechanics describing three four known fundamental interactions gravity. makes svt candidate theory quantum gravity extension standard model. hoped development theory would unify single consistent model fundamental interactions describe known interactions elementary particles different manifestations entity superfluid vacuum. macroscale larger similar phenomenon suggested happening murmurations starlings. rapidity change flight patterns mimics phase change leading superfluidity liquid stateslight behaves like superfluid various applications poissons spot. liquid helium shown light travel along surface obstacle continuing along trajectory. since light affected local gravity level becomes trajectory velocity. another example beam light travels hole aperture along backside diffraction."}, {"topic": "Sedimentary basin", "summary": "Sedimentary basins are region-scale depressions of the Earth's crust where subsidence has occurred and a thick sequence of sediments have accumulated to form a large three-dimensional body of sedimentary rock. They form when long-term subsidence creates a regional depression that provides accommodation space for accumulation of sediments. Over millions or tens or hundreds of millions of years the deposition of sediment, primarily gravity-driven transportation of water-borne eroded material, acts to fill the depression. 
As the sediments are buried, they are subject to increasing pressure and begin the processes of compaction and lithification that transform them into sedimentary rock.\n\nSedimentary basins are created by deformation of Earth's lithosphere in diverse geological settings, usually as a result of plate tectonic activity. Mechanisms of crustal deformation that lead to subsidence and sedimentary basin formation include the thinning of underlying crust; depression of the crust by sedimentary, tectonic or volcanic loading; or changes in the thickness or density of underlying or adjacent lithosphere. Once the process of basin formation has begun, the weight of the sediments being deposited in the basin adds a further load on the underlying crust that accentuates subsidence and thus amplifies basin development as a result of isostasy.The long-term preserved geologic record of a sedimentary basin is a large scale contiguous three-dimensional package of sedimentary rocks created during a particular period of geologic time, a 'stratigraphic succession', that geologists continue to refer to as a sedimentary basin even if it is no longer a bathymetric or topographic depression. The Williston Basin, Molasse basin and Magallanes Basin are examples of sedimentary basins that are no longer depressions. Basins formed in different tectonic regimes vary in their preservation potential. Intracratonic basins, which form on highly-stable continental interiors, have a high probability of preservation. In contrast, sedimentary basins formed on oceanic crust are likely to be destroyed by subduction. Continental margins formed when new ocean basins like the Atlantic are created as continents rift apart are likely to have lifespans of hundreds of millions of years, but may be only partially preserved when those ocean basins close as continents collide.Sedimentary basins are of great economic importance. Almost all the world's natural gas and petroleum and all of its coal are found in sedimentary rock. Many metal ores are found in sedimentary rocks formed in particular sedimentary environments. Sedimentary basins are also important from a purely scientific perspective because their sedimentary fill provides a record of Earth's history during the time in which the basin was actively receiving sediment.\nMore than six hundred sedimentary basins have been identified worldwide. 
They range in areal size from tens of square kilometers to well over a million, and their sedimentary fills range from one to almost twenty kilometers in thickness.", "content": "\n\n\n== Basin classification ==\nA dozen or so common types of sedimentary basins are widely recognized and several classification schemes are proposed, however no single classification scheme is recognized as the standard.Most sedimentary basin classification schemes are based on one or more of these interrelated criteria: \n\nPlate tectonic setting - the proximity to a divergent, convergent or transform plate tectonic boundary and the type and origin of the tectonically-induced forces that cause a basin to form, specifically those active at the time of active sedimentation in the basin.\nNature of underlying crust - basins formed on continental crust are quite different from those formed on oceanic crust as the two types of lithosphere have very different mechanical characteristics (rheology) and different densities, which means they respond differently to isostasy.\nGeodynamics of basin formation - the mechanical and thermal forces that cause lithosphere to subside to form a basin.\nPetroleum/economic potential - basin characteristics that influence the likelihood for the basin to have an accumulations of petroleum or the manner in which it formed.\n\n\n== Widely-recognized types of sedimentary basins ==\nAlthough no one basin classification scheme has been widely adopted, several common types of sedimentary basins are widely accepted and well understood as distinct types. Over its complete lifespan a single sedimentary basin can go through multiple phases and evolve from one of these types to another, such as a rift process going to completion to form a passive margin. In this case the sedimentary rocks of the rift basin phase are overlain by those rocks deposited during the passive margin phase. Hybrid basins where a single regional basin results from the processes that are characteristic of multiple of these types are also possible.\n\n\n== Mechanics of formation ==\nSedimentary basins form as a result of regional subsidence of the lithosphere, mostly as a result of a few geodynamic processes.\n\n\n=== Lithospheric stretching ===\n\nIf the lithosphere is caused to stretch horizontally, by mechanisms such as rifting (which is associated with divergent plate boundaries) or ridge-push or trench-pull (associated with convergent boundaries), the effect is believed to be twofold. The lower, hotter part of the lithosphere will \"flow\" slowly away from the main area being stretched, whilst the upper, cooler and more brittle crust will tend to fault (crack) and fracture. The combined effect of these two mechanisms is for Earth's surface in the area of extension to subside, creating a geographical depression which is then often infilled with water and/or sediments. (An analogy is a piece of rubber, which thins in the middle when stretched.)\nAn example of a basin caused by lithospheric stretching is the North Sea \u2013 also an important location for significant hydrocarbon reserves. Another such feature is the Basin and Range Province which covers most of Nevada, forming a series of horst and graben structures.\nTectonic extension at divergent boundaries where continental rifting is occurring can create a nascent ocean basin leading to either an ocean or the failure of the rift zone. Another expression of lithospheric stretching results in the formation of ocean basins with central ridges. 
The Red Sea is in fact an incipient ocean, in a plate tectonic context. The mouth of the Red Sea is also a tectonic triple junction where the Indian Ocean Ridge, Red Sea Rift and East African Rift meet. This is the only place on the planet where such a triple junction in oceanic crust is exposed subaerially. This is due to a high thermal buoyancy (thermal subsidence) of the junction, and also to a local crumpled zone of seafloor crust acting as a dam against the Red Sea.\n\n\n=== Lithospheric flexure ===\n\nLithospheric flexure is another geodynamic mechanism that can cause regional subsidence resulting in the creation of a sedimentary basin. If a load is placed on the lithosphere, it will tend to flex in the manner of an elastic plate. The magnitude of the lithospheric flexure is a function of the imposed load and the flexural rigidity of the lithosphere, and the wavelength of flexure is a function of flexural rigidity of the lithospheric plate. Flexural rigidity is in itself, a function of the lithospheric mineral composition, thermal regime, and effective elastic thickness of the lithosphere.Plate tectonic processes that can create sufficient loads on the lithosphere to induce basin-forming processes include:\n\nformation of new mountain belts through orogeny create massive regional topographic highs that impose loads on the lithosphere and can result in foreland basins.\ngrowth of a volcanic arc as the result of subduction or even the formation of a hotspot volcanic chain.\ngrowth of an accretionary wedge and thrusting of it onto the overriding tectonic plate can contribute to the formation of forearc basins.After any kind of sedimentary basin has begun to form, the load created by the water and sediments filling the basin creates additional load, thus causing additional lithospheric flexure and amplifying the original subsidence that created the basin, regardless of the original cause of basin inception.\n\n\n=== Thermal subsidence ===\nCooling of a lithospheric plate, particularly young oceanic crust or recently stretched continental crust, causes thermal subsidence. As the plate cools it shrinks and becomes denser through thermal contraction. Analogous to a solid floating in a liquid, as the lithospheric plate gets denser it sinks because it displaces more of the underlying mantle through an equilibrium process known as isostasy.\nThermal subsidence is particularly measurable and observable with oceanic crust, as there is a well-established correlation between the age of the underlying crust and depth of the ocean. As newly-formed oceanic crust cools over a period of tens of millions of years. This is an important contribution to subsidence in rift basins, backarc basins and passive margins where they are underlain by newly-formed oceanic crust.\n\n\n=== Strike-slip deformation ===\n\nIn strike-slip tectonic settings, deformation of the lithosphere occurs primarily in the plane of Earth as a result of near horizontal maximum and minimum principal stresses. Faults associated with these plate boundaries are primarily vertical. Wherever these vertical fault planes encounter bends, movement along the fault can create local areas of compression or tension.\nWhen the curve in the fault plane moves apart, a region of transtension occurs and sometimes is large enough and long-lived enough to create a sedimentary basin often called a pull-apart basin or strike-slip basin. These basins are often roughly rhombohedral in shape and may be called a rhombochasm. 
A classic rhombochasm is illustrated by the Dead Sea rift, where northward movement of the Arabian Plate relative to the Anatolian Plate has created a strike slip basin.\nThe opposite effect is that of transpression, where converging movement of a curved fault plane causes collision of the opposing sides of the fault. An example is the San Bernardino Mountains north of Los Angeles, which result from convergence along a curve in the San Andreas fault system. The Northridge earthquake was caused by vertical movement along local thrust and reverse faults \"bunching up\" against the bend in the otherwise strike-slip fault environment.\n\n\n== Study of sedimentary basins ==\nThe study of sedimentary basins as entities unto themselves is often referred to as sedimentary basin analysis. Study involving quantitative modeling of the dynamic geologic processes by which they evolved is called basin modelling.The sedimentary rocks comprising the fill of sedimentary basins hold the most complete historical record of the evolution of the earth's surface over time. Regional study of these rocks can be used as the primary record for different kinds of scientific investigation aimed at understanding and reconstructing the earth's past plate tectonics (paleotectonics), geography (paleogeography, climate (paleoclimatology), oceans (paleoceanography), habitats (paleoecology and paleobiogeography). Sedimentary basin analysis is thus an important area of study for purely scientific and academic reasons. There are however important economic incentives as well for understanding the processes of sedimentary basin formation and evolution because almost all of the world's fossil fuel reserves were formed in sedimentary basins.\n\nAll of these perspectives on the history of a particular region are based on the study of a large three-dimensional body of sedimentary rocks that resulted from the fill of one or more sedimentary basins over time. The scientific studies of stratigraphy and in recent decades sequence stratigraphy are focused on understanding the three-dimensional architecture, packaging and layering of this body of sedimentary rocks as a record resulting from sedimentary processes acting over time, influenced by global sea level change and regional plate tectonics.\n\n\n=== Surface geologic study ===\nWhere the sedimentary rocks comprising a sedimentary basin's fill are exposed at the earth's surface, traditional field geology and aerial photography techniques as well as satellite imagery can be used in the study of sedimentary basins.\n\n\n=== Subsurface geologic study ===\nMuch of a sedimentary basin's fill often remains buried below the surface, often submerged in the ocean, and thus cannot be studied directly. Acoustic imaging using seismic reflection acquired through seismic data acquisition and studied through the specific sub-discipline of seismic stratigraphy is the primary means of understanding the three-dimensional architecture of the basin's fill through remote sensing.\nDirect sampling of the rocks themselves is accomplished via the drilling of boreholes and the retrieval of rock samples in the form of both core samples and drill cuttings. These allow geologists to study small samples of the rocks directly and also very importantly allow paleontologists to study the microfossils they contain (micropaleontology).\nAt the time they are being drilled, boreholes are also surveyed by pulling electronic instruments along the length of the borehole in a process known as well logging. 
Well logging, which is sometimes appropriately called borehole geophysics, uses electromagnetic and radioactive properties of the rocks surrounding the borehole, as well as their interaction with the fluids used in the process of drilling the borehole, to create a continuous record of the rocks along the length of the borehole, displayed as of a family of curves. Comparison of well log curves between multiple boreholes can be used to understand the stratigraphy of a sedimentary basin, particularly if used in conjunction with seismic stratigraphy.\n\n\n== See also ==\nStructural basin \u2013 Large-scale structural geological depression formed by tectonic warping\nDrainage basin \u2013 Area of land where precipitation collects and drains off into a common outlet\nEndorheic basin \u2013 Closed drainage basin that allows no outflow\nPlate tectonics \u2013 Movement of Earth's lithosphere\nIsostasy \u2013 State of gravitational equilibrium between Earth's crust and mantle\n\n\n== References ==\n\n\n== External links ==\nPreliminary Catalog of the Sedimentary Basins of the United States United States Geological Survey\nGlobal sedimentary basin map\nMap/database of sedimentary basins of the World", "content_traditional": "see also structural basin \u2013 largescale structural geological depression formed tectonic warping drainage basin \u2013 area land precipitation collects drains common outlet endorheic basin \u2013 closed drainage basin allows outflow plate tectonics \u2013 movement earths lithosphere isostasy \u2013 state gravitational equilibrium earths crust mantle references external links preliminary catalog sedimentary basins united states united states geological survey global sedimentary basin map mapdatabase sedimentary basins world basin classification dozen common types sedimentary basins widely recognized several classification schemes proposed however single classification scheme recognized standardmost sedimentary basin classification schemes based one interrelated criteria plate tectonic setting proximity divergent convergent transform plate tectonic boundary type origin tectonicallyinduced forces cause basin form specifically active time active sedimentation basin. flexural rigidity function lithospheric mineral composition thermal regime effective elastic thickness lithosphereplate tectonic processes create sufficient loads lithosphere induce basinforming processes include formation new mountain belts orogeny create massive regional topographic highs impose loads lithosphere result foreland basins. growth accretionary wedge thrusting onto overriding tectonic plate contribute formation forearc basinsafter kind sedimentary basin begun form load created water sediments filling basin creates additional load thus causing additional lithospheric flexure amplifying original subsidence created basin regardless original cause basin inception. scientific studies stratigraphy recent decades sequence stratigraphy focused understanding threedimensional architecture packaging layering body sedimentary rocks record resulting sedimentary processes acting time influenced global sea level change regional plate tectonics. regional study rocks used primary record different kinds scientific investigation aimed understanding reconstructing earths past plate tectonics paleotectonics geography paleogeography climate paleoclimatology oceans paleoceanography habitats paleoecology paleobiogeography. 
study involving quantitative modeling dynamic geologic processes evolved called basin modellingthe sedimentary rocks comprising fill sedimentary basins hold complete historical record evolution earths surface time. lithospheric stretching lithosphere caused stretch horizontally mechanisms rifting associated divergent plate boundaries ridgepush trenchpull associated convergent boundaries effect believed twofold. well logging sometimes appropriately called borehole geophysics uses electromagnetic radioactive properties rocks surrounding borehole well interaction fluids used process drilling borehole create continuous record rocks along length borehole displayed family curves. complete lifespan single sedimentary basin go multiple phases evolve one types another rift process going completion form passive margin. surface geologic study sedimentary rocks comprising sedimentary basins fill exposed earths surface traditional field geology aerial photography techniques well satellite imagery used study sedimentary basins. however important economic incentives well understanding processes sedimentary basin formation evolution almost worlds fossil fuel reserves formed sedimentary basins. perspectives history particular region based study large threedimensional body sedimentary rocks resulted fill one sedimentary basins time. combined effect two mechanisms earths surface area extension subside creating geographical depression often infilled water andor sediments. nature underlying crust basins formed continental crust quite different formed oceanic crust two types lithosphere different mechanical characteristics rheology different densities means respond differently isostasy. lower hotter part lithosphere flow slowly away main area stretched whilst upper cooler brittle crust tend fault crack fracture. curve fault plane moves apart region transtension occurs sometimes large enough longlived enough create sedimentary basin often called pullapart basin strikeslip basin. analogous solid floating liquid lithospheric plate gets denser sinks displaces underlying mantle equilibrium process known isostasy. acoustic imaging using seismic reflection acquired seismic data acquisition studied specific subdiscipline seismic stratigraphy primary means understanding threedimensional architecture basins fill remote sensing. northridge earthquake caused vertical movement along local thrust reverse faults bunching bend otherwise strikeslip fault environment. tectonic extension divergent boundaries continental rifting occurring create nascent ocean basin leading either ocean failure rift zone. time drilled boreholes also surveyed pulling electronic instruments along length borehole process known well logging. due high thermal buoyancy thermal subsidence junction also local crumpled zone seafloor crust acting dam red sea. important contribution subsidence rift basins backarc basins passive margins underlain newlyformed oceanic crust. classic rhombochasm illustrated dead sea rift northward movement arabian plate relative anatolian plate created strike slip basin. thermal subsidence particularly measurable observable oceanic crust wellestablished correlation age underlying crust depth ocean. subsurface geologic study much sedimentary basins fill often remains buried surface often submerged ocean thus studied directly. comparison well log curves multiple boreholes used understand stratigraphy sedimentary basin particularly used conjunction seismic stratigraphy. 
direct sampling rocks accomplished via drilling boreholes retrieval rock samples form core samples drill cuttings. strikeslip deformation strikeslip tectonic settings deformation lithosphere occurs primarily plane earth result near horizontal maximum minimum principal stresses. widelyrecognized types sedimentary basins although one basin classification scheme widely adopted several common types sedimentary basins widely accepted well understood distinct types. petroleumeconomic potential basin characteristics influence likelihood basin accumulations petroleum manner formed. example san bernardino mountains north los angeles result convergence along curve san andreas fault system. hybrid basins single regional basin results processes characteristic multiple types also possible. another feature basin range province covers nevada forming series horst graben structures. allow geologists study small samples rocks directly also importantly allow paleontologists study microfossils contain micropaleontology. example basin caused lithospheric stretching north sea \u2013 also important location significant hydrocarbon reserves. mouth red sea also tectonic triple junction indian ocean ridge red sea rift east african rift meet.", "custom_approach": "The scientific studies of stratigraphy and in recent decades sequence stratigraphy are focused on understanding the three-dimensional architecture, packaging and layering of this body of sedimentary rocks as a record resulting from sedimentary processes acting over time, influenced by global sea level change and regional plate tectonics.Where the sedimentary rocks comprising a sedimentary basin's fill are exposed at the earth's surface, traditional field geology and aerial photography techniques as well as satellite imagery can be used in the study of sedimentary basins.Much of a sedimentary basin's fill often remains buried below the surface, often submerged in the ocean, and thus cannot be studied directly. growth of an accretionary wedge and thrusting of it onto the overriding tectonic plate can contribute to the formation of forearc basins.After any kind of sedimentary basin has begun to form, the load created by the water and sediments filling the basin creates additional load, thus causing additional lithospheric flexure and amplifying the original subsidence that created the basin, regardless of the original cause of basin inception.Cooling of a lithospheric plate, particularly young oceanic crust or recently stretched continental crust, causes thermal subsidence. Hybrid basins where a single regional basin results from the processes that are characteristic of multiple of these types are also possible.Sedimentary basins form as a result of regional subsidence of the lithosphere, mostly as a result of a few geodynamic processes.If the lithosphere is caused to stretch horizontally, by mechanisms such as rifting (which is associated with divergent plate boundaries) or ridge-push or trench-pull (associated with convergent boundaries), the effect is believed to be twofold. 
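One way to make concrete why thinning the crust by stretching produces subsidence, before any sediment or water load is added, is a simple local (Airy) isostatic balance. The sketch below is an illustration only: the densities, the 10 km of assumed thinning, and the water-filled assumption are hypothetical values chosen for the example, not figures from this article.

```python
# Minimal Airy-isostasy sketch: replacing a thickness of crust with denser mantle
# lowers the surface. Densities and the amount of thinning are assumed values.
RHO_CRUST = 2800.0   # kg/m^3 (assumed)
RHO_MANTLE = 3300.0  # kg/m^3 (assumed)
RHO_WATER = 1000.0   # kg/m^3 (assumed)

def initial_subsidence(thinning_m, water_filled=True):
    """Subsidence caused by thinning the crust by `thinning_m`, balancing pressure
    at the depth of compensation (local isostasy, no flexure, no sediment load)."""
    rho_fill = RHO_WATER if water_filled else 0.0
    return (RHO_MANTLE - RHO_CRUST) * thinning_m / (RHO_MANTLE - rho_fill)

# Thinning the crust by 10 km gives roughly 2.2 km of water-filled subsidence
# (about 1.5 km if the depression stays dry).
print(initial_subsidence(10_000.0), initial_subsidence(10_000.0, water_filled=False))
```

Any sediment that later fills the depression adds load of its own, which is the amplification of subsidence by isostasy described elsewhere in this article.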
A dozen or so common types of sedimentary basins are widely recognized and several classification schemes are proposed, however no single classification scheme is recognized as the standard.Most sedimentary basin classification schemes are based on one or more of these interrelated criteria: Plate tectonic setting - the proximity to a divergent, convergent or transform plate tectonic boundary and the type and origin of the tectonically-induced forces that cause a basin to form, specifically those active at the time of active sedimentation in the basin. Flexural rigidity is in itself, a function of the lithospheric mineral composition, thermal regime, and effective elastic thickness of the lithosphere.Plate tectonic processes that can create sufficient loads on the lithosphere to induce basin-forming processes include: formation of new mountain belts through orogeny create massive regional topographic highs that impose loads on the lithosphere and can result in foreland basins. This is an important contribution to subsidence in rift basins, backarc basins and passive margins where they are underlain by newly-formed oceanic crust.In strike-slip tectonic settings, deformation of the lithosphere occurs primarily in the plane of Earth as a result of near horizontal maximum and minimum principal stresses. Petroleum/economic potential - basin characteristics that influence the likelihood for the basin to have an accumulations of petroleum or the manner in which it formed.Although no one basin classification scheme has been widely adopted, several common types of sedimentary basins are widely accepted and well understood as distinct types. The Northridge earthquake was caused by vertical movement along local thrust and reverse faults \"bunching up\" against the bend in the otherwise strike-slip fault environment.The study of sedimentary basins as entities unto themselves is often referred to as sedimentary basin analysis. This is due to a high thermal buoyancy (thermal subsidence) of the junction, and also to a local crumpled zone of seafloor crust acting as a dam against the Red Sea.Lithospheric flexure is another geodynamic mechanism that can cause regional subsidence resulting in the creation of a sedimentary basin. Regional study of these rocks can be used as the primary record for different kinds of scientific investigation aimed at understanding and reconstructing the earth's past plate tectonics (paleotectonics), geography (paleogeography, climate (paleoclimatology), oceans (paleoceanography), habitats (paleoecology and paleobiogeography). Study involving quantitative modeling of the dynamic geologic processes by which they evolved is called basin modelling.The sedimentary rocks comprising the fill of sedimentary basins hold the most complete historical record of the evolution of the earth's surface over time. Well logging, which is sometimes appropriately called borehole geophysics, uses electromagnetic and radioactive properties of the rocks surrounding the borehole, as well as their interaction with the fluids used in the process of drilling the borehole, to create a continuous record of the rocks along the length of the borehole, displayed as of a family of curves. Over its complete lifespan a single sedimentary basin can go through multiple phases and evolve from one of these types to another, such as a rift process going to completion to form a passive margin. 
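The dependence of flexure on flexural rigidity and effective elastic thickness noted above can be sketched with the standard thin-elastic-plate relations. These formulas are textbook plate-flexure results rather than equations given in this article, and every parameter value below is an illustrative assumption.

```python
# Sketch of how flexural rigidity D and the flexural parameter alpha follow from
# the effective elastic thickness Te, using standard thin elastic plate relations.
E = 70e9              # Young's modulus of the lithosphere, Pa (assumed)
NU = 0.25             # Poisson's ratio (assumed)
RHO_MANTLE = 3300.0   # kg/m^3 (assumed)
RHO_INFILL = 2400.0   # density of the basin fill, kg/m^3 (assumed)
G = 9.81              # m/s^2

def flexural_rigidity(te_m):
    """D = E * Te**3 / (12 * (1 - nu**2)) for a plate of effective elastic thickness Te."""
    return E * te_m**3 / (12.0 * (1.0 - NU**2))

def flexural_parameter(d):
    """alpha = (4*D / ((rho_mantle - rho_infill) * g))**0.25; the width of the
    flexural depression scales with alpha (wavelength of order 2*pi*alpha)."""
    return (4.0 * d / ((RHO_MANTLE - RHO_INFILL) * G)) ** 0.25

for te_km in (10, 30, 70):   # weak, intermediate and strong lithosphere
    d = flexural_rigidity(te_km * 1e3)
    print(te_km, f"{d:.2e} N m", f"{flexural_parameter(d) / 1e3:.0f} km")
```

Because rigidity scales with the cube of elastic thickness, doubling the elastic thickness raises the rigidity roughly eightfold and widens (while shallowing) the flexural depression, which is one reason the same orogenic load can produce very different foreland basins on weak and strong lithosphere.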
There are however important economic incentives as well for understanding the processes of sedimentary basin formation and evolution because almost all of the world's fossil fuel reserves were formed in sedimentary basins. All of these perspectives on the history of a particular region are based on the study of a large three-dimensional body of sedimentary rocks that resulted from the fill of one or more sedimentary basins over time. The combined effect of these two mechanisms is for Earth's surface in the area of extension to subside, creating a geographical depression which is then often infilled with water and/or sediments. Nature of underlying crust - basins formed on continental crust are quite different from those formed on oceanic crust as the two types of lithosphere have very different mechanical characteristics (rheology) and different densities, which means they respond differently to isostasy. The lower, hotter part of the lithosphere will \"flow\" slowly away from the main area being stretched, whilst the upper, cooler and more brittle crust will tend to fault (crack) and fracture. When the curve in the fault plane moves apart, a region of transtension occurs and sometimes is large enough and long-lived enough to create a sedimentary basin often called a pull-apart basin or strike-slip basin. Analogous to a solid floating in a liquid, as the lithospheric plate gets denser it sinks because it displaces more of the underlying mantle through an equilibrium process known as isostasy. Acoustic imaging using seismic reflection acquired through seismic data acquisition and studied through the specific sub-discipline of seismic stratigraphy is the primary means of understanding the three-dimensional architecture of the basin's fill through remote sensing. Tectonic extension at divergent boundaries where continental rifting is occurring can create a nascent ocean basin leading to either an ocean or the failure of the rift zone. At the time they are being drilled, boreholes are also surveyed by pulling electronic instruments along the length of the borehole in a process known as well logging. A classic rhombochasm is illustrated by the Dead Sea rift, where northward movement of the Arabian Plate relative to the Anatolian Plate has created a strike slip basin. Thermal subsidence is particularly measurable and observable with oceanic crust, as there is a well-established correlation between the age of the underlying crust and depth of the ocean. Comparison of well log curves between multiple boreholes can be used to understand the stratigraphy of a sedimentary basin, particularly if used in conjunction with seismic stratigraphy. Direct sampling of the rocks themselves is accomplished via the drilling of boreholes and the retrieval of rock samples in the form of both core samples and drill cuttings. An example is the San Bernardino Mountains north of Los Angeles, which result from convergence along a curve in the San Andreas fault system. Another such feature is the Basin and Range Province which covers most of Nevada, forming a series of horst and graben structures. These allow geologists to study small samples of the rocks directly and also very importantly allow paleontologists to study the microfossils they contain (micropaleontology). An example of a basin caused by lithospheric stretching is the North Sea \u2013 also an important location for significant hydrocarbon reserves. 
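The depth–age correlation for oceanic crust mentioned above is commonly approximated by a square-root-of-age (half-space cooling) law. The sketch below uses textbook approximate coefficients (a ridge-crest depth near 2,500 m and roughly 350 m per square root of a million years); these numbers are assumptions for illustration, not values from this article, and the relation flattens out for crust older than roughly 70 million years.

```python
import math

# Sketch of the seafloor depth-vs-age correlation: d(t) ~ d_ridge + C * sqrt(t).
# Coefficients are commonly quoted approximations, assumed here for illustration.
D_RIDGE_M = 2500.0   # approximate depth at the ridge crest, m (assumed)
C = 350.0            # subsidence coefficient, m per sqrt(Myr) (assumed)

def seafloor_depth_m(age_myr):
    """Approximate ocean depth for oceanic crust of the given age (valid to ~70 Myr)."""
    return D_RIDGE_M + C * math.sqrt(age_myr)

for age in (0, 10, 40, 70):
    print(age, round(seafloor_depth_m(age)))   # ~2500 m at the ridge, ~5400 m at 70 Myr
```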
The mouth of the Red Sea is also a tectonic triple junction where the Indian Ocean Ridge, Red Sea Rift and East African Rift meet. If a load is placed on the lithosphere, it will tend to flex in the manner of an elastic plate. Wherever these vertical fault planes encounter bends, movement along the fault can create local areas of compression or tension."}, {"topic": "Black hole", "summary": "A black hole is a region of spacetime where gravity is so strong that nothing, including light or other electromagnetic waves, has enough energy to escape its event horizon. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.\nObjects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterize a black hole. David Finkelstein, in 1958, first published the interpretation of \"black hole\" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses (M\u2609) may form by absorbing other stars and merging with other black holes. There is consensus that supermassive black holes exist in the centres of most galaxies.\nThe presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Any matter that falls onto a black hole can form an external accretion disk heated by friction, forming quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being \"swallowed.\" If other stars are orbiting a black hole, their orbits can determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.", "content": "\n\n\n== History ==\n\nThe idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell in a letter published in November 1784.
Michell's simplistic calculations assumed such a body might have the same density as the Sun, and concluded that one would form when a star's diameter exceeds the Sun's by a factor of 500, and its surface escape velocity exceeds the usual speed of light. Michell referred to these bodies as dark stars. He correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century, as if light were a wave rather than a particle, it was unclear what, if any, influence gravity would have on escaping light waves.Modern physics discredits Michell's notion of a light ray shooting directly from the surface of a supermassive star, being slowed down by the star's gravity, stopping, and then free-falling back to the star's surface.\n\n\n=== General relativity ===\n\nIn 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates, although it took until 1933 for Georges Lema\u00eetre to realize that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington did however comment on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because \"a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere).\"In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 M\u2609) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable. 
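Michell's estimate above is easy to restate in modern Newtonian terms. The minimal sketch below (Python, with rounded constants and illustrative names; it deliberately ignores relativity, as Michell did) checks that a body of solar density whose diameter is 500 times the Sun's has a surface escape velocity just above the speed of light.

```python
import math

# Physical constants (SI), rounded values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def escape_velocity(mass_kg, radius_m):
    """Newtonian escape velocity from the surface of a body."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Michell's hypothetical body: solar density, 500x the Sun's diameter.
factor = 500
radius = factor * R_SUN
mass = factor**3 * M_SUN          # same density => mass scales with volume

v = escape_velocity(mass, radius)
print(f"escape velocity = {v:.3e} m/s  ({v / c:.2f} c)")
# -> roughly 1.03 c, i.e. just above the speed of light, as Michell argued
```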
But in 1939, Robert Oppenheimer and others predicted that neutron stars above another limit (the Tolman\u2013Oppenheimer\u2013Volkoff limit) would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as 0.7 M\u2609; subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately 1.5 M\u2609 to 3.0 M\u2609. Observations of the neutron star merger GW170817, which is thought to have generated a black hole shortly afterward, have refined the TOV limit estimate to ~2.17 M\u2609.Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. Because of this property, the collapsed stars were called \"frozen stars\", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius.\n\n\n==== Golden age ====\nIn 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, \"a perfect unidirectional membrane: causal influences can cross it in only one direction\". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr\u2013Newman metric: mass, angular momentum, and electric charge.At first, it was suspected that the strange features of the black hole solutions were pathological artifacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. 
Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.\n\n\n=== Observation ===\nOn 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. As of 2021, the nearest known body thought to be a black hole is around 1,500 light-years (460 parsecs) away. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing.\n\n\n=== Etymology ===\nJohn Michell used the term \"dark star\" in a November 1783 letter to Henry Cavendish, and in the early 20th century, physicists used the term \"gravitationally collapsed object\". Science writer Marcia Bartusiak traces the term \"black hole\" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive.The term \"black hole\" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article \"'Black Holes' in Space\", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.In December 1967, a student reportedly suggested the phrase \"black hole\" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and \"advertising value\", and it quickly caught on, leading some to credit Wheeler with coining the phrase.\n\n\n== Properties and structure ==\n\nThe no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem.These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law (through the ADM mass), far away from the black hole. 
Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense\u2013Thirring effect. When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behavior of the horizon in this situation is a dissipative system that is closely analogous to that of a conductive stretchy membrane with friction and electrical resistance\u2014the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behavior is so puzzling that it has been called the black hole information loss paradox.\n\n\n=== Physical properties ===\nThe simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole \"sucking in everything\" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner\u2013Nordstr\u00f6m metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr\u2013Newman metric, which describes a black hole with both charge and angular momentum. While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality\n\n{\\displaystyle {\\frac {Q^{2}}{4\\pi \\epsilon _{0}}}+{\\frac {c^{2}J^{2}}{GM^{2}}}\\leq GM^{2}}\n\nfor a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter.
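As a quick numerical illustration of the bound just quoted, the sketch below (Python, rounded SI constants, illustrative function names; not a reference implementation) tests whether a given mass, charge and angular momentum satisfy the inequality.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
EPS0 = 8.854e-12       # vacuum permittivity, F/m
M_SUN = 1.989e30       # kg

def is_subextremal(M, Q=0.0, J=0.0):
    """Check the Kerr-Newman bound Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2.

    M in kg, Q in coulombs, J in kg m^2/s; every term has units of J*m.
    """
    lhs = Q**2 / (4 * math.pi * EPS0) + (c**2 * J**2) / (G * M**2)
    rhs = G * M**2
    return lhs <= rhs

M = 10 * M_SUN
J_max = G * M**2 / c                     # maximal spin for an uncharged hole
print(is_subextremal(M, J=0.9 * J_max))  # True: below the extremal limit
print(is_subextremal(M, J=1.1 * J_max))  # False: no event horizon possible
```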
The cosmic censorship hypothesis is supported by numerical simulations. Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is\n\n{\\displaystyle J\\leq {\\frac {GM^{2}}{c}},}\n\nallowing definition of a dimensionless spin parameter such that\n\n{\\displaystyle 0\\leq {\\frac {cJ}{GM^{2}}}\\leq 1.}\n\nBlack holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through\n\n{\\displaystyle r_{\\mathrm {s} }={\\frac {2GM}{c^{2}}}\\approx 2.95\\,{\\frac {M}{M_{\\odot }}}~\\mathrm {km,} }\n\nwhere r_s is the Schwarzschild radius and M\u2609 is the mass of the Sun. For a black hole with nonzero spin and/or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to\n\n{\\displaystyle r_{\\mathrm {+} }={\\frac {GM}{c^{2}}}.}\n\n\n=== Event horizon ===\n\nThe defining feature of a black hole is the appearance of an event horizon\u2014a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred. As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole. To a distant observer, clocks near a black hole would appear to tick more slowly than those farther away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite time to reach it. At the same time, all processes on this object slow down, from the viewpoint of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second. On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon.
According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate.\n\n\n=== Singularity ===\n\nAt the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the \"noodle effect\".In the case of a charged (Reissner\u2013Nordstr\u00f6m) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of traveling to another universe is, however, only theoretical since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory. This breakdown, however, is expected; it occurs in a situation where quantum effects should describe these actions, due to the extremely high density and therefore particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.\n\n\n=== Photon sphere ===\n\nThe photon sphere is a spherical boundary of zero thickness in which photons that move on tangents to that sphere would be trapped in a circular orbit about the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. 
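For reference, the Schwarzschild radius, the dimensionless spin parameter and the photon-sphere radius introduced above can be evaluated numerically. The sketch below uses rounded SI constants; the function names are illustrative.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2, about 2.95 km per solar mass."""
    return 2 * G * mass_kg / c**2

def dimensionless_spin(J, mass_kg):
    """a* = cJ/(GM^2); lies between 0 and 1 for a Kerr black hole."""
    return c * J / (G * mass_kg**2)

def photon_sphere_radius(mass_kg):
    """Radius at which photons can circle a non-rotating hole: 1.5 r_s."""
    return 1.5 * schwarzschild_radius(mass_kg)

for solar_masses in (1, 10, 4.3e6):
    r_s = schwarzschild_radius(solar_masses * M_SUN)
    print(f"{solar_masses:>10} M_sun: r_s = {r_s / 1000:.3g} km, "
          f"photon sphere = {1.5 * r_s / 1000:.3g} km")

m = 10 * M_SUN
print("a* at the maximum allowed spin:", dimensionless_spin(G * m**2 / c, m))  # 1.0
```

The last line of the loop gives the radius of the circular photon orbits around a non-rotating hole.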
Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense of the black hole spin) or retrograde.\n\n\n=== Ergosphere ===\n\nRotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly \"drag\" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator. Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole. Thereby the rotation of the black hole slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford\u2013Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.\n\n\n=== Innermost stable circular orbit (ISCO) ===\n\nIn Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbations to a circular orbit will lead to spiraling into the black hole, and any outward perturbations will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is:\n\n{\\displaystyle r_{\\rm {ISCO}}=3\\,r_{s}={\\frac {6\\,GM}{c^{2}}},}\n\nand decreases with increasing black hole spin for particles orbiting in the same direction as the spin.\n\n\n== Formation and evolution ==\nGiven the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations.
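A numerical companion to the zero-spin ISCO expression above, again with rounded constants and illustrative names:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2 (rounded)
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def isco_radius_schwarzschild(mass_kg):
    """ISCO for a non-rotating black hole: r_ISCO = 3 r_s = 6GM/c^2."""
    return 6 * G * mass_kg / c**2

for solar_masses in (10, 4.3e6):
    r = isco_radius_schwarzschild(solar_masses * M_SUN)
    print(f"{solar_masses:>8} M_sun: r_ISCO = {r / 1000:.3g} km")
```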
Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilize their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon.\n\nPenrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes.\n\n\n=== Gravitational collapse ===\n\nGravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little \"fuel\" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.\nThe collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left if the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding 5 M\u2609 are produced by stars that were over 20 M\u2609 before the collapse.If the mass of the remnant exceeds about 3\u20134 M\u2609 (the Tolman\u2013Oppenheimer\u2013Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.\n\nThe gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to 103 M\u2609. These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of ~105 M\u2609 could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars observed already at redshift \n \n \n \n z\n \u223c\n 7\n \n \n {\\displaystyle z\\sim 7}\n . 
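The mass thresholds quoted above (the Chandrasekhar limit near 1.4 solar masses and a Tolman-Oppenheimer-Volkoff limit somewhere between roughly 2.2 and 4 solar masses, depending on the estimate) can be turned into a deliberately schematic classifier of collapse outcomes. This is only an illustration of the reasoning, not a physical model; real outcomes also depend on composition, rotation and accretion history.

```python
def remnant_type(remnant_mass_msun,
                 chandrasekhar_limit=1.4,   # M_sun, electron degeneracy support
                 tov_limit=3.0):            # M_sun, a round number within the quoted range
    """Very schematic mapping from remnant mass to compact-object type.

    The thresholds are the round numbers quoted in the text, used here only
    to illustrate the logic of the argument.
    """
    if remnant_mass_msun <= chandrasekhar_limit:
        return "white dwarf"
    if remnant_mass_msun <= tov_limit:
        return "neutron star"
    return "black hole"

for m in (0.6, 1.4, 2.0, 5.0):
    print(f"{m} M_sun remnant -> {remnant_type(m)}")
```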
Some candidates for such direct-collapse black holes have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.\n\n\n==== Primordial black holes and the Big Bang ====\nGravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass ({\\displaystyle m_{P}={\\sqrt {\\hbar c/G}}} \u2248 1.2\u00d710^19 GeV/c^2 \u2248 2.2\u00d710^\u22128 kg) to hundreds of thousands of solar masses. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. Following inflation theory there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow was slowed by the energy density of the universe.\nModels for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang.\n\n\n=== High-energy collisions ===\n\nGravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios for example put the boundary as low as 1 TeV/c^2. This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN.
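The Planck-mass figure quoted above follows directly from the definition m_P = sqrt(hbar*c/G). A minimal check with rounded constants:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
EV = 1.602e-19      # J per electronvolt

m_planck = math.sqrt(HBAR * c / G)                 # m_P = sqrt(hbar*c/G)
e_planck_gev = m_planck * c**2 / EV / 1e9          # rest energy in GeV

print(f"Planck mass   ~ {m_planck:.2e} kg")        # ~2.2e-8 kg
print(f"Planck energy ~ {e_planck_gev:.2e} GeV")   # ~1.2e19 GeV
```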
These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10\u221225 seconds, posing no threat to the Earth.\n\n\n=== Growth ===\nOnce a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes.\n\n\n=== Evaporation ===\n\nIn 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature \u210fc3/(8\u03c0GMkB); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.A stellar black hole of 1 M\u2609 has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimeter.If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10\u221224 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c2 would take less than 10\u221288 seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case.The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. 
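The figures in the preceding paragraphs can be reproduced from the quoted temperature formula. The sketch below (rounded constants, illustrative names) evaluates the Hawking temperature of a solar-mass hole and the rough mass below which a hole is hotter than the cosmic microwave background and can therefore shrink today.

```python
import math

HBAR = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
K_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # kg
T_CMB = 2.725       # cosmic microwave background temperature, K

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B) for a Schwarzschild black hole."""
    return HBAR * c**3 / (8 * math.pi * G * mass_kg * K_B)

T = hawking_temperature(M_SUN)
print(f"T_H(1 M_sun) = {T:.2e} K")          # ~6e-8 K, i.e. tens of nanokelvin
print("hotter than the CMB:", T > T_CMB)     # False: it absorbs more than it emits

# Mass below which a hole is hotter than the CMB and can evaporate today:
m_crit = HBAR * c**3 / (8 * math.pi * G * K_B * T_CMB)
print(f"critical mass ~ {m_crit:.2e} kg")    # ~4.5e22 kg, somewhat below the Moon's mass
```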
A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope launched in 2008 will continue the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10^64 years. A supermassive black hole with a mass of 10^11 M\u2609 will evaporate in around 2\u00d710^100 years. Some monster black holes in the universe are predicted to continue to grow up to perhaps 10^14 M\u2609 during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. Some models of quantum gravity predict modifications of the Hawking description of black holes. In particular, the evolution equations describing the mass loss rate and charge loss rate get modified.\n\n\n== Observational evidence ==\nBy nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings. On 10 April 2019, an image was released of a black hole, which is seen magnified because the light paths near the event horizon are highly bent. The dark shadow in the middle results from light paths absorbed by the black hole. The image is in false color, as the detected light halo in this image is not in the visible spectrum, but radio waves.\n\nThe Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. In April 2017, EHT began observing the black hole at the centre of Messier 87. \"In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017\" to provide the data yielding the image in April 2019. After two years of data processing, EHT released the first direct image of a black hole; specifically, the supermassive black hole that lies in the centre of the aforementioned galaxy. What is visible is not the black hole\u2014which shows as black because of the loss of all light within this dark region. Instead, it is the gases at the edge of the event horizon (displayed as orange or red) that define the black hole. On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and circular shadow as seen in the M87* black hole, and the image was created using the same techniques as for the M87 black hole. However, the imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings.
The image of Sagittarius A* was also partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths.The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds (>1,000 km/s [2,200,000 mph]), the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise. However, the extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the \"shadow\".\nIn 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields.\n\n\n=== Detection of gravitational waves from merging black holes ===\nOn 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km (or roughly four times the Schwarzschild radius corresponding to the inferred masses). The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation.More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere.The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. 
Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more.Since then, many more gravitational wave events have been observed.\n\n\n=== Proper motions of stars orbiting Sagittarius A* ===\nThe proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that a 2.6\u00d7106 M\u2609 object must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars\u2014called S2\u2014has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to 4.3\u00d7106 M\u2609 and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius; nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes.\n\n\n=== Accretion of matter ===\n\nDue to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions such as the accompanying representation of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk.\n\nWithin such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas.\n\nWhen the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known; up to 40% of the rest mass of the accreted material can be emitted as radiation. (In nuclear fusion only about 0.7% of the rest mass will be emitted as energy.) In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. 
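The Keplerian mass estimate for Sagittarius A* described above can be illustrated with Kepler's third law in solar-system units. The orbital elements used below are rough published values for the star S2 (approximately a 16-year period and a semi-major axis near 1,000 AU); they are assumptions for illustration, not figures taken from this article.

```python
# Kepler's third law in convenient units: M (in M_sun) = a^3 / P^2,
# with a in astronomical units and P in years.
def enclosed_mass_msun(semi_major_axis_au, period_years):
    return semi_major_axis_au**3 / period_years**2

a_s2 = 1000.0     # AU, approximate, illustrative
p_s2 = 16.0       # years, approximate, illustrative

print(f"~{enclosed_mass_msun(a_s2, p_s2):.2e} M_sun")   # a few times 10^6 M_sun
```

The result, a few million solar masses, is consistent with the values inferred from the full orbital fits quoted above.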
It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes.In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported.\n\n\n==== X-ray binaries ====\n\nX-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and to determine if it might be a black hole.If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does, however, not exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman\u2013Oppenheimer\u2013Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole.The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt, however, remained due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass allowing for more accurate estimates of the black hole mass. Moreover, these systems actively emit X-rays for only several months once every 10\u201350 years. During the period of low X-ray emission (called quiescence), the accretion disk is extremely faint allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni.\n\n\n===== Quasi-periodic oscillations =====\n\nThe X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes.\n\n\n==== Galactic nuclei ====\n\nAstronomers use the term \"active galaxy\" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk.\n\nAlthough supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. 
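One standard way to turn the companion's orbital parameters mentioned above into a mass estimate is the binary mass function f(M) = P K^3 / (2 pi G), which gives a strict lower bound on the unseen object's mass. The sketch below uses numbers roughly matching those published for V404 Cygni; both the input values and this particular formulation are illustrative assumptions, not details given in this article.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
DAY = 86400.0      # s

def mass_function_msun(period_s, k_m_per_s):
    """Binary mass function f(M) = P K^3 / (2 pi G), in solar masses.

    f(M) is a lower bound on the compact object's mass for any inclination
    and companion mass.
    """
    return period_s * k_m_per_s**3 / (2 * math.pi * G) / M_SUN

# Roughly the published orbital period and radial-velocity semi-amplitude
# of the companion star in V404 Cygni (illustrative numbers):
print(f"f(M) ~ {mass_function_msun(6.5 * DAY, 209e3):.1f} M_sun")
# ~6 M_sun: well above the Tolman-Oppenheimer-Volkoff limit, so not a neutron star
```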
Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy.It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M\u2013sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.\n\n\n=== Microlensing ===\nAnother way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in their vicinity. One such effect is gravitational lensing: The deformation of spacetime around a massive object causes light rays to be deflected, such as light passing through an optic lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. In January 2022, astronomers reported the first possible detection of a microlensing event from an isolated black hole.Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*.\n\n\n== Alternatives ==\n\nThe evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass.Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes (the average density of a 108 M\u2609 black hole is comparable to that of water). Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates.The evidence for the existence of stellar and supermassive black holes implies that in order for black holes to not form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons and thus black holes would not be real artifacts. 
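As a quick check of the density comparison above: the mean density follows from dividing the mass by the volume of a sphere of Schwarzschild radius, which scales as 1/M^2. A minimal sketch with rounded constants:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def mean_density(mass_kg):
    """Mass divided by the volume of a sphere of Schwarzschild radius (kg/m^3)."""
    r_s = 2 * G * mass_kg / c**2
    return mass_kg / (4.0 / 3.0 * math.pi * r_s**3)

for solar_masses in (10, 1e8):
    rho = mean_density(solar_masses * M_SUN)
    print(f"{solar_masses:>8} M_sun: {rho:.3g} kg/m^3")
# The 1e8 M_sun case comes out around 2e3 kg/m^3, the same order as water,
# while the 10 M_sun case is denser than an atomic nucleus.
```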
For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semi-classical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity.A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, and the dark-energy star.\n\n\n== Open questions ==\n\n\n=== Entropy and thermodynamics ===\n\nIn 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. The radiation, however also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy.One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities (such as mass, charge, pressure, etc.). Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein\u2013Hawking entropy. 
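The statement above that the entropy is one quarter of the horizon area in Planck units corresponds to the Bekenstein-Hawking formula S = k_B A / (4 l_P^2). A rough evaluation for a solar-mass hole, with rounded constants and illustrative names:

```python
import math

HBAR = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
K_B = 1.381e-23     # J/K
M_SUN = 1.989e30    # kg

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy: one quarter of the horizon area in Planck units.

    S = k_B * A / (4 * l_P^2), with A = 4 pi r_s^2 and l_P^2 = hbar G / c^3.
    """
    r_s = 2 * G * mass_kg / c**2
    area = 4 * math.pi * r_s**2
    l_p2 = HBAR * G / c**3
    return K_B * area / (4 * l_p2)

S = bh_entropy(M_SUN)
print(f"S(1 M_sun) ~ {S:.2e} J/K  (~{S / K_B:.2e} in units of k_B)")
# Around 1e54 J/K, i.e. roughly 1e77 k_B.
```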
Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity. Another promising approach treats gravity as an effective field theory. One first computes the quantum gravitational corrections to the radius of the event horizon of the black hole, then integrates over it to find the quantum gravitational corrections to the entropy as given by the Wald formula. The method was applied to Schwarzschild black holes by Calmet and Kuipers, and then generalised to charged black holes by Campos Delgado.\n\n\n=== Information loss paradox ===\n\nBecause a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum is conserved. As long as black holes were thought to persist forever, this information loss was not considered especially problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question of whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the \"firewall paradox\" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called \"monogamy of entanglement\" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. 
One possible solution, which violates the equivalence principle, is that a \"firewall\" destroys incoming particles at the event horizon. In general, which\u2014if any\u2014of these assumptions should be abandoned remains a topic of debate.\n\n\n== See also ==\n\n\n== Notes ==\n\n\n== References ==\n\n\n== Further reading ==\n\n\n=== Popular reading ===\n\n\n=== University textbooks and monographs ===\n\n\n=== Review papers ===\n\n\n== External links ==\n\nBlack Holes on In Our Time at the BBC\nStanford Encyclopedia of Philosophy: \"Singularities and Black Holes\" by Erik Curiel and Peter Bokulich.\nBlack Holes: Gravity's Relentless Pull \u2013 Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute (HubbleSite)\nESA's Black Hole Visualization Archived 3 May 2019 at the Wayback Machine\nFrequently Asked Questions (FAQs) on Black Holes\nSchwarzschild Geometry\nBlack holes - basic (NYT; April 2021)\n\n\n=== Videos ===\n16-year-long study tracks stars orbiting Sagittarius A*\nMovie of Black Hole Candidate from Max Planck Institute\nCowen, Ron (20 April 2015). \"3D simulations of colliding black holes hailed as most realistic yet\". Nature. doi:10.1038/nature.2015.17360.\nComputer visualisation of the signal detected by LIGO\nTwo Black Holes Merge into One (based upon the signal GW150914)", "content_traditional": "science writer marcia bartusiak traces term black hole physicist robert h dicke early 1960s reportedly compared phenomenon black hole calcutta notorious prison people entered never left alivethe term black hole used print life science news magazines 1963 science journalist ann ewing article black holes space dated 18 january 1964 report meeting american association advancement science held cleveland ohioin december 1967 student reportedly suggested phrase black hole lecture john wheeler wheeler adopted term brevity advertising value quickly caught leading credit wheeler coining phrase. scholars time initially excited proposal giant invisible dark stars might hiding plain view enthusiasm dampened wavelike nature light became apparent early nineteenth century light wave rather particle unclear influence gravity would escaping light wavesmodern physics discredits michells notion light ray shooting directly surface supermassive star slowed stars gravity stopping freefalling back stars surface. black holes gravitys relentless pull \u2013 interactive multimedia web site physics astronomy black holes space telescope science institute hubblesite esas black hole visualization archived 3 may 2019 wayback machine frequently asked questions faqs black holes schwarzschild geometry black holes basic nyt april 2021 videos 16yearlong study tracks stars orbiting sagittarius movie black hole candidate max planck institute cowen ron 20 april 2015. work werner israel brandon carter david robinson nohair theorem emerged stating stationary black hole solution completely described three parameters kerr \u2013 newman metric mass angular momentum electric chargeat first suspected strange features black hole solutions pathological artifacts symmetry conditions imposed singularities would appear generic situations. 
example fuzzball model based string theory individual states black hole solution generally event horizon singularity classicalsemiclassical observer statistical average states appears ordinary black hole deduced general relativitya theoretical objects conjectured match observations astronomical black hole candidates identically nearidentically function via different mechanism. various models predict creation primordial black holes ranging size planck mass p \u210f c g displaystyle mpsqrt hbar cg \u2248 12\u00d71019 gevc2 \u2248 22\u00d710\u22128 kg hundreds thousands solar massesdespite early universe extremely dense recollapse black hole big bang since expansion rate greater attraction. rotating black hole effect strong near event horizon object would move faster speed light opposite direction stand stillthe ergosphere black hole volume bounded black holes event horizon ergosurface coincides event horizon poles much greater distance around equatorobjects radiation escape normally ergosphere. likewise angular momentum spin measured far away using frame dragging gravitomagnetic field example lense \u2013 thirring effectwhen object falls black hole information shape object distribution charge evenly distributed along horizon black hole lost outside observers. orbits would dynamically unstable hence small perturbation particle infalling matter would cause instability would grow time either setting photon outward trajectory causing escape black hole inward spiral would eventually cross event horizonwhile light still escape photon sphere light crosses photon sphere inbound trajectory captured black hole. much larger tolman \u2013 oppenheimer \u2013 volkoff limit maximum mass star without collapsing object neutron star generally expected black holethe first strong candidate black hole cygnus x1 discovered way charles thomas bolton louise webster paul murdin 1972. arthur eddington however comment possibility star mass compressed schwarzschild radius 1926 book noting einsteins theory allows us rule overly large densities visible stars like betelgeuse star 250 million km radius could possibly high density sun. according clocks appear tick normally cross event horizon finite time without noting singular behaviour classical general relativity impossible determine location event horizon local observations due einsteins equivalence principlethe topology event horizon black hole equilibrium always spherical. upper limit objects size still large test whether smaller schwarzschild radius nevertheless observations strongly suggest central object supermassive black hole plausible scenarios confining much invisible mass small volume. image sagittarius also partially blurred turbulent plasma way galactic centre effect prevents resolution image longer wavelengthsthe brightening material bottom half processed eht image thought caused doppler beaming whereby material approaching viewer relativistic speeds perceived brighter material moving away. small black hole quantum gravity effects expected play important role could hypothetically make small black hole stable although current developments quantum gravity indicate casethe hawking radiation astrophysical black hole predicted weak would thus exceedingly difficult detect earth. time neutron stars like black holes regarded theoretical curiosities discovery pulsars showed physical relevance spurred interest types compact objects might formed gravitational collapsein period general black hole solutions found. 
odd property led gerard hooft leonard susskind propose holographic principle suggests anything happens volume spacetime described data boundary volumealthough general relativity used perform semiclassical calculation black hole entropy situation theoretically unsatisfying. however shown arguments general relativity object maximum masssince average density black hole inside schwarzschild radius inversely proportional square mass supermassive black holes much less dense stellar black holes average density 108 \u2609 black hole comparable water. remnants exceeding 5 \u2609 produced stars 20 \u2609 collapseif mass remnant exceeds 3\u20134 \u2609 tolman \u2013 oppenheimer \u2013 volkoff limit either original star heavy remnant collected additional mass accretion matter even degeneracy pressure neutrons insufficient stop collapse.", "custom_approach": "Science writer Marcia Bartusiak traces the term \"black hole\" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive.The term \"black hole\" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article \"'Black Holes' in Space\", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.In December 1967, a student reportedly suggested the phrase \"black hole\" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and \"advertising value\", and it quickly caught on, leading some to credit Wheeler with coining the phrase.The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century, as if light were a wave rather than a particle, it was unclear what, if any, influence gravity would have on escaping light waves.Modern physics discredits Michell's notion of a light ray shooting directly from the surface of a supermassive star, being slowed down by the star's gravity, stopping, and then free-falling back to the star's surface.In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr\u2013Newman metric: mass, angular momentum, and electric charge.At first, it was suspected that the strange features of the black hole solutions were pathological artifacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. 
For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semi-classical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity.A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. Various models predict the creation of primordial black holes ranging in size from a Planck mass ( m P = \u210f c / G {\\displaystyle m_{P}={\\sqrt {\\hbar c/G}}} \u2248 1.2\u00d71019 GeV/c2 \u2248 2.2\u00d710\u22128 kg) to hundreds of thousands of solar masses.Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator.Objects and radiation can escape normally from the ergosphere. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense\u2013Thirring effect.When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon.While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Arthur Eddington did however comment on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because \"a star of 250 million km radius could not possibly have so high a density as the Sun. If this is much larger than the Tolman\u2013Oppenheimer\u2013Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole.The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. 
According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.The topology of the event horizon of a black hole at equilibrium is always spherical. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius; nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. The image of Sagittarius A* was also partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths.The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case.The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. The location of the ISCO depends on the spin of the black hole, in the case of a Schwarzschild black hole (spin zero) is: r I S C O = 3 r s = 6 G M c 2 , {\\displaystyle r_{\\rm {ISCO}}=3\\,r_{s}={\\frac {6\\,GM}{c^{2}}},} and decreases with increasing black hole spin for particles orbiting in the same direction as the spin.Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.In this period more general black hole solutions were found.", "combined_approach": "science writer marcia bartusiak traces term black hole physicist robert h dicke early 1960s reportedly compared phenomenon black hole calcutta notorious prison people entered never left alivethe term black hole used print life science news magazines 1963 science journalist ann ewing article black holes space dated 18 january 1964 report meeting american association advancement science held cleveland ohioin december 1967 student reportedly suggested phrase black hole lecture john wheeler wheeler adopted term brevity advertising value quickly caught leading credit wheeler coining phrasethe nohair theorem postulates achieves stable condition formation black hole three independent physical properties mass electric charge angular momentum black hole otherwise featureless. 
scholars time initially excited proposal giant invisible dark stars might hiding plain view enthusiasm dampened wavelike nature light became apparent early nineteenth century light wave rather particle unclear influence gravity would escaping light wavesmodern physics discredits michells notion light ray shooting directly surface supermassive star slowed stars gravity stopping freefalling back stars surfacein 1915 albert einstein developed theory general relativity earlier shown gravity influence lights motion. work werner israel brandon carter david robinson nohair theorem emerged stating stationary black hole solution completely described three parameters kerr \u2013 newman metric mass angular momentum electric chargeat first suspected strange features black hole solutions pathological artifacts symmetry conditions imposed singularities would appear generic situations. example fuzzball model based string theory individual states black hole solution generally event horizon singularity classicalsemiclassical observer statistical average states appears ordinary black hole deduced general relativitya theoretical objects conjectured match observations astronomical black hole candidates identically nearidentically function via different mechanism. various models predict creation primordial black holes ranging size planck mass p \u210f c g displaystyle mpsqrt hbar cg \u2248 12\u00d71019 gevc2 \u2248 22\u00d710\u22128 kg hundreds thousands solar massesdespite early universe extremely dense recollapse black hole big bang since expansion rate greater attraction. rotating black hole effect strong near event horizon object would move faster speed light opposite direction stand stillthe ergosphere black hole volume bounded black holes event horizon ergosurface coincides event horizon poles much greater distance around equatorobjects radiation escape normally ergosphere. analogy completed hawking 1974 showed quantum field theory implies black holes radiate like black body temperature proportional surface gravity black hole predicting effect known hawking radiationon 11 february 2016 ligo scientific collaboration virgo collaboration announced first direct detection gravitational waves representing first observation black hole merger. likewise angular momentum spin measured far away using frame dragging gravitomagnetic field example lense \u2013 thirring effectwhen object falls black hole information shape object distribution charge evenly distributed along horizon black hole lost outside observers. orbits would dynamically unstable hence small perturbation particle infalling matter would cause instability would grow time either setting photon outward trajectory causing escape black hole inward spiral would eventually cross event horizonwhile light still escape photon sphere light crosses photon sphere inbound trajectory captured black hole. arthur eddington however comment possibility star mass compressed schwarzschild radius 1926 book noting einsteins theory allows us rule overly large densities visible stars like betelgeuse star 250 million km radius could possibly high density sun. much larger tolman \u2013 oppenheimer \u2013 volkoff limit maximum mass star without collapsing object neutron star generally expected black holethe first strong candidate black hole cygnus x1 discovered way charles thomas bolton louise webster paul murdin 1972. 
according clocks appear tick normally cross event horizon finite time without noting singular behaviour classical general relativity impossible determine location event horizon local observations due einsteins equivalence principlethe topology event horizon black hole equilibrium always spherical. upper limit objects size still large test whether smaller schwarzschild radius nevertheless observations strongly suggest central object supermassive black hole plausible scenarios confining much invisible mass small volume. image sagittarius also partially blurred turbulent plasma way galactic centre effect prevents resolution image longer wavelengthsthe brightening material bottom half processed eht image thought caused doppler beaming whereby material approaching viewer relativistic speeds perceived brighter material moving away. small black hole quantum gravity effects expected play important role could hypothetically make small black hole stable although current developments quantum gravity indicate casethe hawking radiation astrophysical black hole predicted weak would thus exceedingly difficult detect earth. location isco depends spin black hole case schwarzschild black hole spin zero r c 3 r 6 g c 2 displaystyle rrm isco3rsfrac 6gmc2 decreases increasing black hole spin particles orbiting direction spingiven bizarre character black holes long questioned whether objects could actually exist nature whether merely pathological solutions einsteins equations. time neutron stars like black holes regarded theoretical curiosities discovery pulsars showed physical relevance spurred interest types compact objects might formed gravitational collapsein period general black hole solutions found."}, {"topic": "Supermassive black hole", "summary": "A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions of times the mass of the Sun (M\u2609). Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, not even light. Observational evidence indicates that almost every large galaxy has a supermassive black hole at its center. For example, the Milky Way has a supermassive black hole in its Galactic Center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way\u2019s center.", "content": "\n\n\n== Description ==\nSupermassive black holes are classically defined as black holes with a mass above 100,000 (105) solar masses (M\u2609); some have masses of several billion M\u2609. Supermassive black holes have physical properties that clearly distinguish them from lower-mass classifications. First, the tidal forces in the vicinity of the event horizon are significantly weaker for supermassive black holes. The tidal force on a body at a black hole's event horizon is inversely proportional to the square of the black hole's mass: a person at the event horizon of a 10 million M\u2609 black hole experiences about the same tidal force between their head and feet as a person on the surface of the earth. 
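A short back-of-the-envelope derivation makes the quoted scaling explicit (a Newtonian sketch for a non-rotating black hole): the difference in gravitational acceleration across a body of height h at radius r is roughly

\Delta a \approx \frac{2 G M h}{r^{3}}, \qquad \text{so at } r = r_{\mathrm{s}} = \frac{2 G M}{c^{2}}: \quad \Delta a \approx \frac{h c^{6}}{4 G^{2} M^{2}} \propto \frac{1}{M^{2}},

so a tenfold increase in the black hole's mass reduces the tidal stretching at the horizon by a factor of one hundred.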
Unlike with stellar-mass black holes, one would not experience significant tidal force until very deep into the black hole's event horizon. It is somewhat counterintuitive to note that the average density of a SMBH within its event horizon (defined as the mass of the black hole divided by the volume of space within its Schwarzschild radius) can be less than the density of water. This is because the Schwarzschild radius ( {\\displaystyle r_{\\text{s}}} ) is directly proportional to its mass. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have lower average density. The Schwarzschild radius of the event horizon of a nonrotating and uncharged supermassive black hole of around 1 billion M\u2609 is comparable to the semi-major axis of the orbit of planet Uranus, which is 19 AU. Some astronomers refer to black holes of greater than 5 billion M\u2609 as 'ultramassive black holes' (UMBHs or UBHs), but the term is not broadly used. Possible examples include the black holes at the centres of TON 618, NGC 6166, ESO 444-46 and NGC 4889, which are among the most massive black holes known.\nSome studies have suggested that the maximum natural mass that a black hole can reach while remaining a luminous accretor (featuring an accretion disk) is typically of the order of about 50 billion M\u2609.\n\n\n== History of research ==\nThe story of how supermassive black holes were found began with the investigation by Maarten Schmidt of the radio source 3C 273 in 1963. Initially this was thought to be a star, but the spectrum proved puzzling. The spectral features were determined to be hydrogen emission lines that had been redshifted, indicating the object was moving away from the Earth. Hubble's law showed that the object was located several billion light-years away, and thus must be emitting the energy equivalent of hundreds of galaxies. The rate of light variations of the source, dubbed a quasi-stellar object, or quasar, suggested the emitting region had a diameter of one parsec or less. Four such sources had been identified by 1964. In 1963, Fred Hoyle and W. A. Fowler proposed the existence of hydrogen-burning supermassive stars (SMS) as an explanation for the compact dimensions and high energy output of quasars. These would have a mass of about 10^5 \u2013 10^9 M\u2609. However, Richard Feynman noted that stars above a certain critical mass are dynamically unstable and would collapse into a black hole, at least if they were non-rotating. Fowler then proposed that these supermassive stars would undergo a series of collapse and explosion oscillations, thereby explaining the energy output pattern. Appenzeller and Fricke (1972) built models of this behavior, but found that the resulting star would still undergo collapse, concluding that a non-rotating 0.75\u00d710^6 M\u2609 SMS \"cannot escape collapse to a black hole by burning its hydrogen through the CNO cycle\". Edwin E. Salpeter and Yakov Zeldovich made the proposal in 1964 that matter falling onto a massive compact object would explain the properties of quasars. It would require a mass of around 10^8 M\u2609 to match the output of these objects. Donald Lynden-Bell noted in 1969 that the infalling gas would form a flat disk that spirals into the central \"Schwarzschild throat\". 
He noted that the relatively low output of nearby galactic cores implied these were old, inactive quasars. Meanwhile, in 1967, Martin Ryle and Malcolm Longair suggested that nearly all sources of extra-galactic radio emission could be explained by a model in which particles are ejected from galaxies at relativistic velocities, meaning they are moving near the speed of light. Martin Ryle, Malcolm Longair, and Peter Scheuer then proposed in 1973 that the compact central nucleus could be the original energy source for these relativistic jets.Arthur M. Wolfe and Geoffrey Burbidge noted in 1970 that the large velocity dispersion of the stars in the nuclear region of elliptical galaxies could only be explained by a large mass concentration at the nucleus; larger than could be explained by ordinary stars. They showed that the behavior could be explained by a massive black hole with up to 1010 M\u2609, or a large number of smaller black holes with masses below 103 M\u2609. Dynamical evidence for a massive dark object was found at the core of the active elliptical galaxy Messier 87 in 1978, initially estimated at 5\u00d7109 M\u2609. Discovery of similar behavior in other galaxies soon followed, including the Andromeda Galaxy in 1984 and the Sombrero Galaxy in 1988.Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a massive black hole. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the Green Bank Interferometer of the National Radio Astronomy Observatory. They discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. This was, therefore, the first indication that a supermassive black hole exists in the center of the Milky Way.\nThe Hubble Space Telescope, launched in 1990, provided the resolution needed to perform more refined observations of galactic nuclei. In 1994 the Faint Object Spectrograph on the Hubble was used to observe Messier 87, finding that ionized gas was orbiting the central part of the nucleus at a velocity of \u00b1500 km/s. The data indicated a concentrated mass of (2.4\u00b10.7)\u00d7109 M\u2609 lay within a 0.25\u2033 span, providing strong evidence of a supermassive black hole. Using the Very Long Baseline Array to observe Messier 106, Miyoshi et al. (1995) were able to demonstrate that the emission from an H2O maser in this galaxy came from a gaseous disk in the nucleus that orbited a concentrated mass of 3.6\u00d7107 M\u2609, which was constrained to a radius of 0.13 parsecs. Their ground-breaking research noted that a swarm of solar mass black holes within a radius this small would not survive for long without undergoing collisions, making a supermassive black hole the sole viable candidate. Accompanying this observation which provided the first confirmation of supermassive black holes was the discovery of the highly broadened, ionised iron\nK\u03b1 emission line (6.4 keV) from the galaxy MCG-6-30-15. The broadening was due to the gravitational redshift of the light as it escaped from just 3 to 10 Schwarzschild radii from the black hole.\nOn April 10, 2019, the Event Horizon Telescope collaboration released the first horizon-scale image of a black hole, in the center of the galaxy Messier 87. 
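As a rough illustration of the angular scale involved in such an image, the sketch below (Python; a minimal order-of-magnitude estimate assuming a non-rotating black hole and the standard photon-capture radius b = 3√3 GM/c^2) uses the M87* mass and distance quoted later in this article:

# Rough angular diameter of the M87* "shadow" (non-rotating approximation).
G, c = 6.674e-11, 2.998e8          # SI units
M_sun, ly = 1.989e30, 9.461e15     # kg, metres per light-year

M = 6.5e9 * M_sun                  # M87* mass quoted in this article
D = 48.92e6 * ly                   # M87 distance quoted in this article
b = 3 * 3**0.5 * G * M / c**2      # photon-capture (critical impact) radius
theta_uas = 2 * b / D * 206265e6   # shadow diameter, radians -> microarcseconds
print(round(theta_uas))            # prints ~44

An apparent diameter of only a few tens of microarcseconds is why imaging such a source at horizon scale requires Earth-sized baselines at millimetre wavelengths.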
In March 2020, astronomers suggested that additional subrings should form the photon ring, proposing a way of better detecting these signatures in the first black hole image.\n\n\n== Formation ==\n\nThe origin of supermassive black holes remains an active field of research. Astrophysicists agree that black holes can grow by accretion of matter and by merging with other black holes. There are several hypotheses for the formation mechanisms and initial masses of the progenitors, or \"seeds\", of supermassive black holes. Independently of the specific formation channel for the black hole seed, given sufficient mass nearby, it could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists.Distant and early supermassive black holes, such as J0313\u20131806, and ULAS J1342+0928, are hard to explain so soon after the Big Bang. Some postulate they might come from direct collapse of dark matter with self-interaction. A small minority of sources argue that they may be evidence that the Universe is the result of a Big Bounce, instead of a Big Bang, with these supermassive black holes being formed before the Big Bounce.\n\n\n=== First stars ===\n\nThe early progenitor seeds may be black holes of tens or perhaps hundreds of M\u2609 that are left behind by the explosions of massive stars and grow by accretion of matter. Another model involves a dense stellar cluster undergoing core collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds.Before the first stars, large gas clouds could collapse into a \"quasi-star\", which would in turn collapse into a black hole of around 20 M\u2609. These stars may have also been formed by dark matter halos drawing in enormous amounts of gas by gravity, which would then produce supermassive stars with \ntens of thousands of solar masses. The \"quasi-star\" becomes unstable to radial perturbations because of electron-positron pair production in its core and could collapse directly into a black hole without a supernova explosion (which would eject most of its mass, preventing the black hole from growing as fast).\nA more recent theory proposes that SMBH seeds were formed in the very early universe each from the collapse of a supermassive star with mass of around 100,000 M\u2609.\n\n\n=== Direct-collapse and primordial black holes ===\nLarge, high-redshift clouds of metal-free gas, when irradiated by a sufficiently intense flux of Lyman\u2013Werner photons, can avoid cooling and fragmenting, thus collapsing as a single object due to self-gravitation. The core of the collapsing object reaches extremely large values of the matter density, of the order of about 107 g/cm3, and triggers a general relativistic instability. Thus, the object collapses directly into a black hole, without passing from the intermediate phase of a star, or of a quasi-star. These objects have a typical mass of about 100,000 M\u2609 and are named direct collapse black holes. A 2022 computer simulation showed that the first supermassive black holes can arise in rare turbulent clumps of gas, called primordial halos, that were fed by unusually strong streams of cold gas. The key simulation result was that cold flows suppressed star formation in the turbulent halo until the halo\u2019s gravity was finally able to overcome the turbulence and formed two direct-collapse black holes of 31,000 M\u2609 and 40,000 M\u2609. 
The birth of the first SMBHs can therefore be a result of standard cosmological structure formation \u2014 contrary to what had been thought for almost two decades.\n\nFinally, primordial black holes (PBHs) could have been produced directly from external pressure in the first moments after the Big Bang. These black holes would then have more time than in any of the above models to accrete, allowing them sufficient time to reach supermassive sizes. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical.\nThe formation of a supermassive black hole requires a relatively small volume of highly dense matter with small angular momentum. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks. Gas accretion is the most efficient and also the most conspicuous way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars. Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of billions of M\u2609 had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies.\n\n\n=== Maximum mass limit ===\nThere is a natural upper limit to how large supermassive black holes can grow. Supermassive black holes in any quasar or active galactic nucleus (AGN) appear to have a theoretical upper limit of around 50 billion M\u2609 for typical parameters, as anything above this slows growth down to a crawl (the slowdown tends to start around 10 billion M\u2609) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it. A study concluded that the radius of the innermost stable circular orbit (ISCO) for SMBH masses above this limit exceeds the self-gravity radius, making disc formation no longer possible. A larger upper limit of around 270 billion M\u2609 has been proposed as the absolute maximum mass for an accreting SMBH in extreme cases, for example with maximal prograde spin (a dimensionless spin parameter of a = 1), although the maximum attainable value of a black hole's spin parameter is very slightly lower, at a = 0.9982. At masses just below the limit, the disc luminosity of a field galaxy is likely to be below the Eddington limit and not strong enough to trigger the feedback underlying the M\u2013sigma relation, so SMBHs close to the limit can evolve above this. It was noted, however, that black holes close to this limit are likely to be rarer still, as it would require the accretion disc to be almost permanently prograde while the black hole grows, because the spin-down effect of retrograde accretion is larger than the spin-up by prograde accretion, due to its ISCO and therefore its lever arm. 
This would in turn require the hole spin to be permanently correlated with a fixed direction of the potential controlling gas flow within the black hole's host galaxy, and thus would tend to produce a spin axis, and hence an AGN jet direction, similarly aligned with the galaxy. However, current observations do not support this correlation. If accretion is not controlled by a large-scale potential in this way, the so-called 'chaotic accretion' presumably has to involve multiple small-scale events, essentially random in time and orientation. This would lead the accretion statistically to spin-down, due to retrograde events having larger lever arms than prograde, and occurring almost as often. There are also other interactions with large SMBHs that tend to reduce their spin, in particular mergers with other black holes, which can statistically decrease the spin. All of these considerations suggested that SMBHs usually cross the critical theoretical mass limit at modest values of their spin parameters, so that masses above 5\u00d710^10 M\u2609 are reached only in rare cases.\n\n\n== Activity and galactic evolution ==\n\nGravitation from supermassive black holes in the center of many galaxies is thought to power active objects such as Seyfert galaxies and quasars, and the relationship between the mass of the central black hole and the mass of the host galaxy depends upon the galaxy type. An empirical correlation between the size of supermassive black holes and the stellar velocity dispersion \u03c3 ( {\\displaystyle \\sigma } ) of a galaxy bulge is called the M\u2013sigma relation.\nAn AGN is now considered to be a galactic core hosting a massive black hole that is accreting matter and displays a sufficiently strong luminosity. The nuclear region of the Milky Way, for example, lacks sufficient luminosity to satisfy this condition. The unified model of AGN is the concept that the large range of observed properties of the AGN taxonomy can be explained using just a small number of physical parameters. For the initial model, these values consisted of the angle of the accretion disk's torus to the line of sight and the luminosity of the source. AGN can be divided into two main groups: radiative-mode AGN, in which most of the output is in the form of electromagnetic radiation through an optically thick accretion disk, and jet-mode AGN, in which relativistic jets emerge perpendicular to the disk. The interaction of a pair of SMBH-hosting galaxies can lead to merger events. Dynamical friction on the hosted SMBH objects causes them to sink toward the center of the merged mass, eventually forming a pair with a separation of under a kiloparsec. The interaction of this pair with surrounding stars and gas will then gradually bring the SMBHs together as a gravitationally bound binary system with a separation of ten parsecs or less. Once the pair draws as close as 0.001 parsecs, gravitational radiation will cause them to merge. By the time this happens, the resulting galaxy will have long since relaxed from the merger event, with the initial starburst activity and AGN having faded away. The gravitational waves from this coalescence can give the resulting SMBH a velocity boost of up to several thousand km/s, propelling it away from the galactic center and possibly even ejecting it from the galaxy.\n\n\n=== Hawking radiation ===\n\nHawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. 
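The extremely long evaporation times quoted below follow from the standard Schwarzschild estimate t_evap ≈ 5120πG^2M^3/(ħc^4), which grows as the cube of the mass. The short sketch below (Python; photon emission only, so the numbers are indicative) evaluates it for the two masses mentioned:

# Hawking evaporation time of a non-rotating, uncharged black hole (photons only).
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
M_sun, year = 1.989e30, 3.156e7              # kg, seconds per year

def t_evap_years(mass_in_solar_masses):
    M = mass_in_solar_masses * M_sun
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / year

print(f"{t_evap_years(1e11):.1e}")   # ~2.1e100 years for a 1e11 solar-mass hole
print(f"{t_evap_years(1e14):.1e}")   # ~2.1e109 years for a 1e14 solar-mass hole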
This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If black holes evaporate via Hawking radiation, a non-rotating and uncharged stupendously large black hole with a mass of 1\u00d710^11 M\u2609 will evaporate in around 2.1\u00d710^100 years. Black holes formed during the predicted collapse of superclusters of galaxies in the far future with 1\u00d710^14 M\u2609 would evaporate over a timescale of up to 2.1\u00d710^109 years.\n\n\n== Evidence ==\n\n\n=== Doppler measurements ===\n\nSome of the best evidence for the presence of black holes is provided by the Doppler effect, whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance. This effect has been allowed for in modern computer-generated images such as the example presented here, based on a plausible model for the supermassive black hole in Sgr A* at the center of the Milky Way. However, the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly.\nWhat already has been observed directly in many systems are the lower non-relativistic velocities of matter orbiting further out from what are presumed to be black holes. Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center. Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies.\n\n\n=== In the Milky Way ===\n\nEvidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A*, because:\n\nThe star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8\u00d710^13 m or 120 AU) from the center of the central object.\nFrom the motion of star S2, the object's mass can be estimated as 4.0 million M\u2609, or about 7.96\u00d710^36 kg.\nThe radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.\nNo known astronomical object other than a black hole can contain 4.0 million M\u2609 in this volume of space. Infrared observations of bright flare activity near Sagittarius A* show orbital motion of plasma with a period of 45\u00b115 min at a separation of six to ten times the gravitational radius of the candidate SMBH. This emission is consistent with a circularized orbit of a polarized \"hot spot\" on an accretion disk in a strong magnetic field. 
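A simple Newtonian consistency check of the flare numbers just quoted (a sketch only: circular Keplerian orbits, no relativistic corrections, and the 4.0 million M\u2609 mass from the list above):

# Orbital period and speed of a "hot spot" a few gravitational radii from Sgr A*.
import math

G, c = 6.674e-11, 2.998e8     # SI units
M = 4.0e6 * 1.989e30          # ~4.0 million solar masses, as quoted above
r_g = G * M / c**2            # gravitational radius, roughly 6e9 m

for n in (6, 10):             # separations of 6 and 10 gravitational radii
    r = n * r_g
    T = 2 * math.pi * math.sqrt(r**3 / (G * M))   # Kepler's third law
    v = 2 * math.pi * r / T                       # circular orbital speed
    print(f"{n} r_g: period ~{T / 60:.0f} min, speed ~{v / c:.2f} c")

The resulting periods of roughly 30 to 65 minutes bracket the observed 45 ± 15 min, and the implied speeds of a few tenths of the speed of light are consistent with the orbital speed quoted in the next sentence.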
The radiating matter is orbiting at 30% of the speed of light just outside the innermost stable circular orbit. On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.\n\n\n=== Outside the Milky Way ===\n\nUnambiguous dynamical evidence for supermassive black holes exists only for a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, such as NGC 4395. In these galaxies, the root mean square (or rms) velocities of the stars or gas rise proportionally to 1/r near the center, indicating a central point mass. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present. Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole. The reason for this assumption is the M\u2013sigma relation, a tight (low scatter) relation between the mass of the hole in the 10 or so galaxies with secure detections, and the velocity dispersion of the stars in the bulges of those galaxies. This correlation, although based on just a handful of galaxies, suggests to many astronomers a strong connection between the formation of the black hole and the galaxy itself. On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart. That is the only likely explanation of the observations that day of sudden X-ray radiation and the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be a SMBH with mass of the order of a million M\u2609. This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto a SMBH.\nSupermassive black holes in binary or higher-multiplicity systems can be ejected, becoming runaway supermassive black holes. They may trigger star formation in their wakes. A candidate runaway black hole has been spotted escaping a distant dwarf galaxy.\n\n\n== Individual studies ==\n\nThe nearby Andromeda Galaxy, 2.5 million light-years away, contains a 1.4+0.65\u22120.45\u00d710^8 (140 million) M\u2609 central black hole, significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's vicinity appears to be that of Messier 87 (i.e., M87*), at a mass of (6.5\u00b10.7)\u00d710^9 (c. 6.5 billion) M\u2609 at a distance of 48.92 million light-years. The supergiant elliptical galaxy NGC 4889, at a distance of 336 million light-years away in the Coma Berenices constellation, contains a black hole measured to be 2.1+3.5\u22121.3\u00d710^10 (21 billion) M\u2609. Masses of black holes in quasars can be estimated via indirect methods that are subject to substantial uncertainty. 
The quasar TON 618 is an example of an object with an extremely large black hole, estimated at 6.6\u00d710^10 (66 billion) M\u2609. Its redshift is 2.219. Other examples of quasars with large estimated black hole masses are the hyperluminous quasar APM 08279+5255, with an estimated mass of 1\u00d710^10 (10 billion) M\u2609, and the quasar SMSS J215728.21-360215.1, with a mass of (3.4\u00b10.6)\u00d710^10 (34 billion) M\u2609, or nearly 10,000 times the mass of the black hole at the Milky Way's Galactic Center. Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the most massive black hole in a pair, with a mass estimated at 18.348 billion M\u2609. In 2011, a supermassive black hole was discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications of this discovery for black hole formation are unknown, but it may indicate that black holes formed before bulges.\n\nIn 2012, astronomers reported an unusually large mass of approximately 17 billion M\u2609 for the black hole in the compact, lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus. The putative black hole has approximately 59 percent of the mass of the bulge of this lenticular galaxy (14 percent of the total stellar mass of the galaxy). Another study reached a very different conclusion: this black hole is not particularly overmassive, estimated at between 2 and 5 billion M\u2609, with 5 billion M\u2609 being the most likely value. On February 28, 2013, astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light. In September 2014, data from different X-ray telescopes showed that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. The discovery is quite surprising, since the black hole is five times more massive than the Milky Way's black hole despite the galaxy being less than five-thousandths the mass of the Milky Way.\nSome galaxies lack any supermassive black holes in their centers. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: the supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole of at least 10^10 M\u2609, despite the galaxy being one of the largest galaxies known, at over six times the size and one thousand times the mass of the Milky Way. Even so, several studies have given very large mass estimates for a possible central black hole inside A2261-BCG, ranging from as large as 6.5+10.9\u22124.1\u00d710^10 M\u2609 to as low as (6\u201311)\u00d710^9 M\u2609. Since a supermassive black hole will only be visible while it is accreting, a supermassive black hole can be nearly invisible, except in its effects on stellar orbits. 
This implies that A2261-BCG either has a central black hole that is accreting at a low level or hosts one with a mass rather below 10^10 M\u2609. In December 2017, astronomers reported the detection of the most distant quasar known by this time, ULAS J1342+0928, containing the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar ULAS J1120+0641.\n\nIn February 2020, astronomers reported the discovery of the Ophiuchus Supercluster eruption, the most energetic event detected in the Universe since the Big Bang. It occurred in the galaxy NeVe 1 in the Ophiuchus Cluster, and was caused by the accretion of nearly 270 million M\u2609 of material by its central supermassive black hole. The eruption lasted for about 100 million years and released 5.7 million times more energy than the most powerful gamma-ray burst known. The eruption released shock waves and jets of high-energy particles that punched through the intracluster medium, creating a cavity about 1.5 million light-years wide \u2013 ten times the Milky Way's diameter. In February 2021, astronomers released, for the first time, a very high-resolution image of 25,000 active supermassive black holes, covering four percent of the Northern celestial hemisphere, based on ultra-low radio wavelengths, as detected by the Low-Frequency Array (LOFAR) in Europe.\n\n\n== See also ==\n\n\n== Notes ==\n\n\n== References ==\n\n\n== Further reading ==\nMelia, Fulvio (2003). The Edge of Infinity: Supermassive Black Holes in the Universe. Cambridge University Press. ISBN 978-0-521-81405-8.\nFerrarese, Laura; Merritt, David (2002). \"Supermassive Black Holes\". Physics World. 15 (1): 41\u201346. arXiv:astro-ph/0206222. Bibcode:2002astro.ph..6222F. doi:10.1088/2058-7058/15/6/43. S2CID 5266031.\nMerritt, David (2013). Dynamics and Evolution of Galactic Nuclei. Princeton University Press. ISBN 978-0-691-12101-7.\nKrolik, Julian (1999). Active Galactic Nuclei. Princeton University Press. ISBN 978-0-691-01151-6.\nChakraborty, Amlan; Chanda, Prolay K.; Pandey, Kanhaiya Lal; Das, Subinoy (2022). \"Formation and Abundance of Late-forming Primordial Black Holes as Dark Matter\". The Astrophysical Journal. 932 (2): 119. arXiv:2204.09628. Bibcode:2022ApJ...932..119C. doi:10.3847/1538-4357/ac6ddd. S2CID 248266315.\nCarr, Bernard; K\u00fchnel, Florian (2022). \"Primordial black holes as dark matter candidates\". SciPost Physics Lecture Notes. arXiv:2110.02821. doi:10.21468/SciPostPhysLectNotes.48. S2CID 238407875.\n\n\n== External links ==\n\nBlack Holes: Gravity's Relentless Pull \u2013 Award-winning interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute\nImages of supermassive black holes\nNASA images of supermassive black holes\nThe black hole at the heart of the Milky Way\nESO video clip of stars orbiting a galactic black hole\nStar Orbiting Massive Milky Way Centre Approaches to within 17 Light-Hours ESO, October 21, 2002\nImages, Animations, and New Results from the UCLA Galactic Center Group\nWashington Post article on Supermassive black holes\nVideo (2:46) \u2013 Simulation of stars orbiting Milky Way's central massive black hole\nVideo (2:13) \u2013 Simulation reveals supermassive black holes (NASA, October 2, 2018)\nSutter, Paul (September 29, 2020). \"Black holes so big we don't know how they form could be hiding in the universe\". Space.com. 
Retrieved February 6, 2021.", "content_traditional": "external links black holes gravitys relentless pull awardwinning interactive multimedia web site physics astronomy black holes space telescope science institute images supermassive black holes nasa images supermassive black holes black hole heart milky way eso video clip stars orbiting galactic black hole star orbiting massive milky way centre approaches within 17 lighthours eso october 21 2002 images animations new results ucla galactic center group washington post article supermassive black holes video 246 \u2013 simulation stars orbiting milky ways central massive black hole video 213 \u2013 simulation reveals supermassive black holes nasa october 2 2018 september 2020 paul sutter 29 september 29 2020. february 28 2013 astronomers reported use nustar satellite accurately measure spin supermassive black hole first time ngc 1365 reporting event horizon spinning almost speed lightin september 2014 data different xray telescopes shown extremely small dense ultracompact dwarf galaxy m60ucd1 hosts 20 million solar mass black hole center accounting 10 total mass galaxy. eruption released shock waves jets highenergy particles punched intracluster medium creating cavity 15 million lightyears wide \u2013 ten times milky ways diameterin february 2021 astronomers released first time highresolution image 25000 active supermassive black holes covering four percent northern celestial hemisphere based ultralow radio wavelengths detected lowfrequency array lofar europe. appenzeller fricke 1972 built models behavior found resulting star would still undergo collapse concluding nonrotating 075\u00d7106 \u2609 sms escape collapse black hole burning hydrogen cno cycleedwin e salpeter yakov zeldovich made proposal 1964 matter falling onto massive compact object would explain properties quasars. since volume spherical object event horizon nonrotating black hole directly proportional cube radius density black hole inversely proportional square mass thus higher mass black holes lower average densitythe schwarzschild radius event horizon nonrotating uncharged supermassive black hole around 1 billion \u2609 comparable semimajor axis orbit planet uranus 19 ausome astronomers refer black holes greater 5 billion \u2609 ultramassive black holes umbhs ubhs term broadly used. examples quasars large estimated black hole masses hyperluminous quasar apm 082795255 estimated mass 1\u00d71010 10 billion \u2609 quasar smss j215728213602151 mass 34\u00b106\u00d71010 34 billion \u2609 nearly 10000 times mass black hole milky ways galactic centersome galaxies galaxy 4c 3711 appear two supermassive black holes centers forming binary system. study concluded radius innermost stable circular orbit isco smbh masses limit exceeds selfgravity radius making disc formation longer possiblea larger upper limit around 270 billion \u2609 represented absolute maximum mass limit accreting smbh extreme cases example maximal prograde spin dimensionless spin parameter 1 although maximum limit black holes spin paramater slightly lower 09982. supermassive black holes quasar active galactic nucleus agn appear theoretical upper limit physically around 50 billion \u2609 typical parameters anything slows growth crawl slowdown tends start around 10 billion \u2609 causes unstable accretion disk surrounding black hole coalesce stars orbit. 
independently specific formation channel black hole seed given sufficient mass nearby could accrete become intermediatemass black hole possibly smbh accretion rate persistsdistant early supermassive black holes j0313\u20131806 ulas j13420928 hard explain soon big bang. martin ryle malcolm longair peter scheuer proposed 1973 compact central nucleus could original energy source relativistic jetsarthur wolfe geoffrey burbidge noted 1970 large velocity dispersion stars nuclear region elliptical galaxies could explained large mass concentration nucleus larger could explained ordinary stars. milky way evidence indicates milky way galaxy supermassive black hole center 26000 lightyears solar system region called sagittarius star s2 follows elliptical orbit period 152 years pericenter closest distance 17 lighthours 18\u00d71013 120 au center central object. unlike stellar mass black holes one would experience significant tidal force deep black holes event horizonit somewhat counterintuitive note average density smbh within event horizon defined mass black hole divided volume space within schwarzschild radius less density water. although galaxies supermassive black holes small dwarf galaxies one discovery remains mysterious supergiant elliptical cd galaxy a2261bcg found contain active supermassive black hole least 1010 \u2609 despite galaxy one largest galaxies known six times size one thousand times mass milky way. noted however black holes close limit likely rather even rarer would requires accretion disc almost permanently prograde black hole grows spindown effect retrograde accretion larger spinup prograde accretion due isco therefore lever arm. known astronomical object black hole contain 40 million \u2609 volume spaceinfrared observations bright flare activity near sagittarius show orbital motion plasma period 45\u00b115 min separation six ten times gravitational radius candidate smbh. implies either a2261bgc central black hole accreting low level mass rather 1010 \u2609 december 2017 astronomers reported detection distant quasar known time ulas j13420928 containing distant supermassive black hole reported redshift z 754 surpassing redshift 7 previously known distant quasar ulas j11200641. supergiant elliptical galaxy ngc 4889 distance 336 million lightyears away coma berenices constellation contains black hole measured 2135\u221213\u00d71010 21 billion \u2609 masses black holes quasars estimated via indirect methods subject substantial uncertainty. agn divided two main groups radiative mode agn output form electromagnetic radiation optically thick accretion disk jet mode relativistic jets emerge perpendicular diskthe interaction pair smbhhosting galaxies lead merger events. meanwhile 1967 martin ryle malcolm longair suggested nearly sources extragalactic radio emission could explained model particles ejected galaxies relativistic velocities meaning moving near speed light.", "custom_approach": "On February 28, 2013, astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light.In September 2014, data from different X-ray telescopes have shown that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. 
The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies.Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8\u00d71013 m or 120 AU) from the center of the central object. The eruption released shock waves and jets of high-energy particles that punched the intracluster medium, creating a cavity about 1.5 million light-years wide \u2013 ten times the Milky Way's diameter.In February 2021, astronomers released, for the first time, a very high-resolution image of 25,000 active supermassive black holes, covering four percent of the Northern celestial hemisphere, based on ultra-low radio wavelengths, as detected by the Low-Frequency Array (LOFAR) in Europe. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have lower average density.The Schwarzschild radius of the event horizon of a nonrotating and uncharged supermassive black hole of around 1 billion M\u2609 is comparable to the semi-major axis of the orbit of planet Uranus, which is 19 AU.Some astronomers refer to black holes of greater than 5 billion M\u2609 as 'ultramassive black holes' (UMBHs or UBHs), but the term is not broadly used. A more recent theory proposes that SMBH seeds were formed in the very early universe each from the collapse of a supermassive star with mass of around 100,000 M\u2609.Large, high-redshift clouds of metal-free gas, when irradiated by a sufficiently intense flux of Lyman\u2013Werner photons, can avoid cooling and fragmenting, thus collapsing as a single object due to self-gravitation. Appenzeller and Fricke (1972) built models of this behavior, but found that the resulting star would still undergo collapse, concluding that a non-rotating 0.75\u00d7106 M\u2609 SMS \"cannot escape collapse to a black hole by burning its hydrogen through the CNO cycle\".Edwin E. Salpeter and Yakov Zeldovich made the proposal in 1964 that matter falling onto a massive compact object would explain the properties of quasars. Other examples of quasars with large estimated black hole masses are the hyperluminous quasar APM 08279+5255, with an estimated mass of 1\u00d71010 (10 billion) M\u2609, and the quasar SMSS J215728.21-360215.1, with a mass of (3.4\u00b10.6)\u00d71010 (34 billion) M\u2609, or nearly 10,000 times the mass of the black hole at the Milky Way's Galactic Center.Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. All of these considerations suggested that SMBHs usually cross the critical theoretical mass limit at modest values of their spin parameters, so that 5\u00d71010 M\u2609 in all but rare cases.Gravitation from supermassive black holes in the center of many galaxies is thought to power active objects such as Seyfert galaxies and quasars, and the relationship between the mass of the central black hole and the mass of the host galaxy depends upon the galaxy type. 
A study concluded that the radius of the innermost stable circular orbit (ISCO) for SMBH masses above this limit exceeds the self-gravity radius, making disc formation no longer possible.A larger upper limit of around 270 billion M\u2609 was represented as the absolute maximum mass limit for an accreting SMBH in extreme cases, for example its maximal prograde spin with a dimensionless spin parameter of a = 1, although the maximum limit for a black hole's spin paramater is very slightly lower at a = 0.9982. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.Unambiguous dynamical evidence for supermassive black holes exists only for a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, such as NGC 4395. Supermassive black holes in any quasar or active galactic nucleus (AGN) appear to have a theoretical upper limit of physically around 50 billion M\u2609 for typical parameters, as anything above this slows growth down to a crawl (the slowdown tends to start around 10 billion M\u2609) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it. Independently of the specific formation channel for the black hole seed, given sufficient mass nearby, it could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists.Distant and early supermassive black holes, such as J0313\u20131806, and ULAS J1342+0928, are hard to explain so soon after the Big Bang. Martin Ryle, Malcolm Longair, and Peter Scheuer then proposed in 1973 that the compact central nucleus could be the original energy source for these relativistic jets.Arthur M. Wolfe and Geoffrey Burbidge noted in 1970 that the large velocity dispersion of the stars in the nuclear region of elliptical galaxies could only be explained by a large mass concentration at the nucleus; larger than could be explained by ordinary stars. Some studies have suggested that the maximum natural mass that a black hole can reach, while being luminous accretors (featuring an accretion disk), is typically of the order of about 50 billion M\u2609.The story of how supermassive black holes were found began with the investigation by Maarten Schmidt of the radio source 3C 273 in 1963. Unlike with stellar mass black holes, one would not experience significant tidal force until very deep into the black hole's event horizon.It is somewhat counterintuitive to note that the average density of a SMBH within its event horizon (defined as the mass of the black hole divided by the volume of space within its Schwarzschild radius) can be less than the density of water. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: The supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole of at least 1010 M\u2609, despite the galaxy being one of the largest galaxies known; over six times the size and one thousand times the mass of the Milky Way. 
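The two statements above, that the Schwarzschild radius of a roughly 1 billion M\u2609 black hole is comparable to the orbit of Uranus and that the average density inside the event horizon can be less than that of water, follow directly from the textbook relation r_s = 2GM/c^2 (the mean density then scales as 1/M^2). A minimal Python sketch, using standard SI constants that are assumptions here rather than values taken from the text:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2 for a non-rotating, uncharged black hole."""
    return 2 * G * mass_kg / C**2

def mean_density(mass_kg):
    """Mass divided by the volume enclosed by the Schwarzschild radius."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4 / 3 * math.pi * r**3)

for solar_masses in (4.3e6, 1e9, 6.6e10):   # Sgr A*-like, 1 billion, TON 618-like
    m = solar_masses * M_SUN
    r = schwarzschild_radius(m)
    print(f"{solar_masses:.1e} Msun: r_s = {r / AU:8.2f} AU, "
          f"mean density = {mean_density(m):.3e} kg/m^3")
# A 1e9 Msun black hole gives r_s of roughly 20 AU (a Uranus-like orbit) and a
# mean density of a few tens of kg/m^3, well below water (about 1000 kg/m^3).
```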
Black holes formed during the predicted collapse of superclusters of galaxies in the far future with 1\u00d71014 M\u2609 would evaporate over a timescale of up to 2.1\u00d710109 years.Some of the best evidence for the presence of black holes is provided by the Doppler effect whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. The gravitational waves from this coalescence can give the resulting SMBH a velocity boost of up to several thousand km/s, propelling it away from the galactic center and possibly even ejecting it from the galaxy.Hawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. It was noted that, however, black holes close to this limit are likely to be rather even rarer, as it would requires the accretion disc to be almost permanently prograde because the black hole grows and the spin-down effect of retrograde accretion is larger than the spin-up by prograde accretion, due to its ISCO and therefore its lever arm.", "combined_approach": "february 28 2013 astronomers reported use nustar satellite accurately measure spin supermassive black hole first time ngc 1365 reporting event horizon spinning almost speed lightin september 2014 data different xray telescopes shown extremely small dense ultracompact dwarf galaxy m60ucd1 hosts 20 million solar mass black hole center accounting 10 total mass galaxy. technique reverberation mapping uses variability lines measure mass perhaps spin black hole powers active galaxiesevidence indicates milky way galaxy supermassive black hole center 26000 lightyears solar system region called sagittarius star s2 follows elliptical orbit period 152 years pericenter closest distance 17 lighthours 18\u00d71013 120 au center central object. eruption released shock waves jets highenergy particles punched intracluster medium creating cavity 15 million lightyears wide \u2013 ten times milky ways diameterin february 2021 astronomers released first time highresolution image 25000 active supermassive black holes covering four percent northern celestial hemisphere based ultralow radio wavelengths detected lowfrequency array lofar europe. since volume spherical object event horizon nonrotating black hole directly proportional cube radius density black hole inversely proportional square mass thus higher mass black holes lower average densitythe schwarzschild radius event horizon nonrotating uncharged supermassive black hole around 1 billion \u2609 comparable semimajor axis orbit planet uranus 19 ausome astronomers refer black holes greater 5 billion \u2609 ultramassive black holes umbhs ubhs term broadly used. recent theory proposes smbh seeds formed early universe collapse supermassive star mass around 100000 \u2609 large highredshift clouds metalfree gas irradiated sufficiently intense flux lyman \u2013 werner photons avoid cooling fragmenting thus collapsing single object due selfgravitation. appenzeller fricke 1972 built models behavior found resulting star would still undergo collapse concluding nonrotating 075\u00d7106 \u2609 sms escape collapse black hole burning hydrogen cno cycleedwin e salpeter yakov zeldovich made proposal 1964 matter falling onto massive compact object would explain properties quasars. 
examples quasars large estimated black hole masses hyperluminous quasar apm 082795255 estimated mass 1\u00d71010 10 billion \u2609 quasar smss j215728213602151 mass 34\u00b106\u00d71010 34 billion \u2609 nearly 10000 times mass black hole milky ways galactic centersome galaxies galaxy 4c 3711 appear two supermassive black holes centers forming binary system. considerations suggested smbhs usually cross critical theoretical mass limit modest values spin parameters 5\u00d71010 \u2609 rare casesgravitation supermassive black holes center many galaxies thought power active objects seyfert galaxies quasars relationship mass central black hole mass host galaxy depends upon galaxy type. study concluded radius innermost stable circular orbit isco smbh masses limit exceeds selfgravity radius making disc formation longer possiblea larger upper limit around 270 billion \u2609 represented absolute maximum mass limit accreting smbh extreme cases example maximal prograde spin dimensionless spin parameter 1 although maximum limit black holes spin paramater slightly lower 09982. unusual event may caused breaking apart asteroid falling black hole entanglement magnetic field lines within gas flowing sagittarius according astronomersunambiguous dynamical evidence supermassive black holes exists handful galaxies include milky way local group galaxies m31 m32 galaxies beyond local group ngc 4395. supermassive black holes quasar active galactic nucleus agn appear theoretical upper limit physically around 50 billion \u2609 typical parameters anything slows growth crawl slowdown tends start around 10 billion \u2609 causes unstable accretion disk surrounding black hole coalesce stars orbit. independently specific formation channel black hole seed given sufficient mass nearby could accrete become intermediatemass black hole possibly smbh accretion rate persistsdistant early supermassive black holes j0313\u20131806 ulas j13420928 hard explain soon big bang. martin ryle malcolm longair peter scheuer proposed 1973 compact central nucleus could original energy source relativistic jetsarthur wolfe geoffrey burbidge noted 1970 large velocity dispersion stars nuclear region elliptical galaxies could explained large mass concentration nucleus larger could explained ordinary stars. studies suggested maximum natural mass black hole reach luminous accretors featuring accretion disk typically order 50 billion \u2609 story supermassive black holes found began investigation maarten schmidt radio source 3c 273 1963. unlike stellar mass black holes one would experience significant tidal force deep black holes event horizonit somewhat counterintuitive note average density smbh within event horizon defined mass black hole divided volume space within schwarzschild radius less density water. although galaxies supermassive black holes small dwarf galaxies one discovery remains mysterious supergiant elliptical cd galaxy a2261bcg found contain active supermassive black hole least 1010 \u2609 despite galaxy one largest galaxies known six times size one thousand times mass milky way. black holes formed predicted collapse superclusters galaxies far future 1\u00d71014 \u2609 would evaporate timescale 21\u00d710109 yearssome best evidence presence black holes provided doppler effect whereby light nearby orbiting matter redshifted receding blueshifted advancing. 
gravitational waves coalescence give resulting smbh velocity boost several thousand kms propelling away galactic center possibly even ejecting galaxyhawking radiation blackbody radiation predicted released black holes due quantum effects near event horizon. noted however black holes close limit likely rather even rarer would requires accretion disc almost permanently prograde black hole grows spindown effect retrograde accretion larger spinup prograde accretion due isco therefore lever arm."}, {"topic": "Micro black hole", "summary": "Micro black holes, also called mini black holes or quantum mechanical black holes, are hypothetical tiny (<1 M\u2609) black holes, for which quantum mechanical effects play an important role. The concept that black holes may exist that are smaller than stellar mass was introduced in 1971 by Stephen Hawking. It is possible that such black holes were created in the high-density environment of the early Universe (or Big Bang), or possibly through subsequent phase transitions (referred to as primordial black holes). They might be observed by astrophysicists through the particles they are expected to emit by Hawking radiation. Some hypotheses involving additional space dimensions predict that micro black holes could be formed at energies as low as the TeV range, which are available in particle accelerators such as the Large Hadron Collider. Popular concerns have been raised over end-of-the-world scenarios (see Safety of particle collisions at the Large Hadron Collider). However, such quantum black holes would instantly evaporate, either totally or leaving only a very weakly interacting residue. Besides the theoretical arguments, cosmic rays hitting the Earth do not produce any damage, although they reach energies in the range of hundreds of TeV.\n\n", "content": "\n== Minimum mass of a black hole ==\nIn an early speculation, Stephen Hawking conjectured that a black hole would not form with a mass below about 10^\u22128 kg (roughly the Planck mass). To make a black hole, one must concentrate mass or energy sufficiently that the escape velocity from the region in which it is concentrated exceeds the speed of light.\nSome extensions of present physics posit the existence of extra dimensions of space. In higher-dimensional spacetime, the strength of gravity increases more rapidly with decreasing distance than in three dimensions. With certain special configurations of the extra dimensions, this effect can lower the Planck scale to the TeV range. Examples of such extensions include large extra dimensions, special cases of the Randall\u2013Sundrum model, and string theory configurations like the GKP solutions. In such scenarios, black hole production could possibly be an important and observable effect at the Large Hadron Collider (LHC).\nIt would also be a common natural phenomenon induced by cosmic rays.\nAll this assumes that the theory of general relativity remains valid at these small distances. If it does not, then other, currently unknown, effects might limit the minimum size of a black hole. Elementary particles are equipped with a quantum-mechanical, intrinsic angular momentum (spin). The correct conservation law for the total (orbital plus spin) angular momentum of matter in curved spacetime requires that spacetime is equipped with torsion. The simplest and most natural theory of gravity with torsion is the Einstein\u2013Cartan theory.
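As an aside on the numbers just quoted: the 10^\u22128 kg figure is roughly the Planck mass, whose rest energy is of order 10^16 TeV and whose Schwarzschild radius is of order the Planck length. The short sketch below makes these orders of magnitude explicit; the constants and formulas are standard results assumed here, not values taken from the surrounding text.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
HBAR = 1.055e-34    # reduced Planck constant, J s
TEV = 1.602e-7      # joules per TeV

planck_mass = math.sqrt(HBAR * C / G)        # ~2.2e-8 kg
planck_length = math.sqrt(HBAR * G / C**3)   # ~1.6e-35 m
r_s = 2 * G * planck_mass / C**2             # horizon radius of a Planck-mass hole

print(f"Planck mass          : {planck_mass:.2e} kg")
print(f"Rest energy          : {planck_mass * C**2 / TEV:.2e} TeV")  # ~1e16 TeV
print(f"Schwarzschild radius : {r_s:.2e} m (~{r_s / planck_length:.1f} Planck lengths)")
```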
Torsion modifies the Dirac equation in the presence of the gravitational field and causes fermion particles to be spatially extended. In this case, the spatial extension of fermions limits the minimum mass of a black hole to be on the order of 10^16 kg, showing that micro black holes may not exist. The energy necessary to produce such a black hole is 39 orders of magnitude greater than the energies available at the Large Hadron Collider, indicating that the LHC cannot produce mini black holes. But if black holes are produced, then the theory of general relativity is proven wrong and does not hold at these small distances. The rules of general relativity would be broken, as is consistent with theories of how matter, space, and time break down around the event horizon of a black hole. This would also show the limits derived from the spatial extension of fermions to be incorrect. The fermion limits assume a minimum mass needed to sustain a black hole, as opposed to the opposite, the minimum mass needed to start a black hole, which in theory is achievable in the LHC under some conditions.\n\n\n== Stability ==\n\n\n=== Hawking radiation ===\n\nIn 1975, Stephen Hawking argued that, due to quantum effects, black holes \"evaporate\" by a process now referred to as Hawking radiation in which elementary particles (such as photons, electrons, quarks and gluons) are emitted. His calculations showed that the smaller the size of the black hole, the faster the evaporation rate, resulting in a sudden burst of particles as the micro black hole suddenly explodes.\nAny primordial black hole of sufficiently low mass will evaporate to near the Planck mass within the lifetime of the Universe. In this process, these small black holes radiate away matter. A rough picture of this is that pairs of virtual particles emerge from the vacuum near the event horizon, with one member of a pair being captured, and the other escaping the vicinity of the black hole. The net result is that the black hole loses mass (due to conservation of energy). According to the formulae of black hole thermodynamics, the more the black hole loses mass, the hotter it becomes, and the faster it evaporates, until it approaches the Planck mass. At this stage, a black hole would have a Hawking temperature of T_P/8\u03c0 (5.6\u00d710^30 K), which means an emitted Hawking particle would have an energy comparable to the mass of the black hole. Thus, a thermodynamic description breaks down. Such a micro black hole would also have an entropy of only 4\u03c0 nats, approximately the minimum possible value. At this point, the object can no longer be described as a classical black hole, and Hawking's calculations also break down.\nWhile Hawking radiation is sometimes questioned, Leonard Susskind summarizes an expert perspective in his book The Black Hole War: \"Every so often, a physics paper will appear claiming that black holes don't evaporate. Such papers quickly disappear into the infinite junk heap of fringe ideas.\"\n\n\n=== Conjectures for the final state ===\nConjectures for the final fate of the black hole include total evaporation and production of a Planck-mass-sized black hole remnant. Such Planck-mass black holes may in effect be stable objects if the quantized gaps between their allowed energy levels bar them from emitting Hawking particles or absorbing energy gravitationally like a classical black hole.
In such a case, they would be weakly interacting massive particles; this could explain dark matter.\n\n\n== Primordial black holes ==\n\n\n=== Formation in the early Universe ===\nProduction of a black hole requires concentration of mass or energy within the corresponding Schwarzschild radius. It was hypothesized by Zel'dovich and Novikov first, and independently by Hawking, that, shortly after the Big Bang, the Universe was dense enough for any given region of space to fit within its own Schwarzschild radius. Even so, at that time, the Universe was not able to collapse into a singularity due to its uniform mass distribution and rapid growth. This, however, does not fully exclude the possibility that black holes of various sizes may have emerged locally. A black hole formed in this way is called a primordial black hole and is the most widely accepted hypothesis for the possible creation of micro black holes. Computer simulations suggest that the probability of formation of a primordial black hole is inversely proportional to its mass. Thus, the most likely outcome would be micro black holes.\n\n\n=== Expected observable effects ===\nA primordial black hole with an initial mass of around 10^12 kg would be completing its evaporation today; a less massive primordial black hole would have already evaporated. Under optimal conditions, the Fermi Gamma-ray Space Telescope satellite, launched in June 2008, might detect experimental evidence for evaporation of nearby black holes by observing gamma-ray bursts. It is unlikely that a collision between a microscopic black hole and an object such as a star or a planet would be noticeable. The small radius and high density of the black hole would allow it to pass straight through any object consisting of normal atoms, interacting with only a few of its atoms while doing so. It has, however, been suggested that a small black hole of sufficient mass passing through the Earth would produce a detectable acoustic or seismic signal.\nOn the Moon, it may leave a distinct type of crater, still visible after billions of years.\n\n\n== Human-made micro black holes ==\n\n\n=== Feasibility of production ===\n\nIn familiar three-dimensional gravity, the minimum energy of a microscopic black hole is 10^16 TeV (equivalent to 1.6 GJ or 444 kWh), which would have to be condensed into a region on the order of the Planck length. This is far beyond the limits of any current technology. It is estimated that colliding two particles to within a distance of a Planck length with currently achievable magnetic field strengths would require a ring accelerator about 1,000 light-years in diameter to keep the particles on track.\nHowever, in some scenarios involving extra dimensions of space, the Planck mass can be as low as the TeV range. The Large Hadron Collider (LHC) has a design energy of 14 TeV for proton\u2013proton collisions and 1,150 TeV for Pb\u2013Pb collisions. It was argued in 2001 that, in these circumstances, black hole production could be an important and observable effect at the LHC or future higher-energy colliders. Such quantum black holes should decay by emitting sprays of particles that could be seen by detectors at these facilities.
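The Hawking temperature quoted in the previous section and the evaporation mass scale quoted above can be reproduced from the standard formulas T_H = hbar*c^3/(8*pi*G*M*k_B) and t ~ 5120*pi*G^2*M^3/(hbar*c^4). The sketch below uses the simplest photon-only lifetime estimate; emission of additional particle species shortens the lifetime, which is why masses of order 10^11 to 10^12 kg are the ones quoted as finishing their evaporation today. Constants and formulas are standard results assumed here, not values taken from the text.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
HBAR = 1.055e-34        # reduced Planck constant, J s
K_B = 1.381e-23         # Boltzmann constant, J/K
AGE_UNIVERSE = 4.35e17  # ~13.8 billion years, in seconds

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def naive_lifetime(mass_kg):
    """Photon-only evaporation time, t ~ 5120 pi G^2 M^3 / (hbar c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

planck_mass = math.sqrt(HBAR * C / G)
print(f"T_H at the Planck mass: {hawking_temperature(planck_mass):.2e} K")  # ~5.6e30 K
print(f"T_H at 1e12 kg        : {hawking_temperature(1e12):.2e} K")

# Mass whose naive (photon-only) lifetime equals the age of the Universe, ~2e11 kg:
m_now = (AGE_UNIVERSE * HBAR * C**4 / (5120 * math.pi * G**2)) ** (1 / 3)
print(f"Mass evaporating now  : {m_now:.2e} kg")
```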
A paper by Choptuik and Pretorius, published in 2010 in Physical Review Letters, presented a computer-generated proof that micro black holes must form from two colliding particles with sufficient energy, which might be allowable at the energies of the LHC if additional dimensions are present other than the customary four (three spatial, one temporal).\n\n\n=== Safety arguments ===\n\nHawking's calculation and more general quantum mechanical arguments predict that micro black holes evaporate almost instantaneously. Additional safety arguments beyond those based on Hawking radiation were given in a dedicated safety analysis, which showed that in hypothetical scenarios with stable micro black holes massive enough to destroy Earth, such black holes would have been produced by cosmic rays and would likely already have destroyed astronomical objects such as planets, stars, or stellar remnants such as neutron stars and white dwarfs.\n\n\n== Black holes in quantum theories of gravity ==\nIt is possible, in some theories of quantum gravity, to calculate the quantum corrections to ordinary, classical black holes. In contrast to conventional black holes, which are solutions of the gravitational field equations of the general theory of relativity, quantum gravity black holes incorporate quantum gravity effects in the vicinity of the origin, where classically a curvature singularity occurs. Depending on the theory employed to model quantum gravity effects, there are different kinds of quantum gravity black holes, namely loop quantum black holes, non-commutative black holes, and asymptotically safe black holes. In these approaches, black holes are singularity-free. Virtual micro black holes were proposed by Stephen Hawking in 1995 and by Fabio Scardigli in 1999 as part of a Grand Unified Theory as a quantum gravity candidate.\n\n\n== See also ==\nBlack hole electron\nBlack hole starship\nBlack holes in fiction\nER=EPR\nKugelblitz (astrophysics)\nStrangelet\n\n\n== Notes ==\n\n\n== References ==\n\n\n== Bibliography ==\n\n\n== External links ==\nAstrophysical implications of hypothetical stable TeV-scale black holes\nMini Black Holes Might Reveal 5th Dimension \u2013 Ker Than, Space.com, June 26, 2006\nDoomsday Machine Large Hadron Collider? \u2013 A scientific essay about energies, dimensions, black holes, and the associated public attention to CERN, by Norbert Frischauf (also available as Podcast)", "content_traditional": "additional safety arguments beyond based hawking radiation given paper showed hypothetical scenarios stable micro black holes massive enough destroy earth black holes would produced cosmic rays would likely already destroyed astronomical objects planets stars stellar remnants neutron stars white dwarfs. paper choptuik pretorius published 2010 physical review letters presented computergenerated proof micro black holes must form two colliding particles sufficient energy might allowable energies lhc additional dimensions present customary four three spatial one temporal. humanmade micro black holes feasibility production familiar threedimensional gravity minimum energy microscopic black hole 1016 tev equivalent 16 gj 444 kwh would condensed region order planck length. see also black hole electron black hole starship black holes fiction erepr kugelblitz astrophysics strangelet notes references bibliography external links astrophysical implications hypothetical stable tevscale black holes mini black holes might reveal 5th dimension \u2013 ker.
hypothesized zeldovich novikov first independently hawking shortly big bang universe dense enough given region space fit within schwarzschild radius. estimated collide two particles within distance planck length currently achievable magnetic field strengths would require ring accelerator 1000 light years diameter keep particles track. planckmass black holes may effect stable objects quantized gaps allowed energy levels bar emitting hawking particles absorbing energy gravitationally like classical black hole. hawking radiation sometimes questioned leonard susskind summarizes expert perspective book black hole war every often physics paper appear claiming black holes nt evaporate. stability hawking radiation 1975 stephen hawking argued due quantum effects black holes evaporate process referred hawking radiation elementary particles photons electrons quarks gluons emitted. approaches black holes singularityfreevirtual micro black holes proposed stephen hawking 1995 fabio scardigli 1999 part grand unified theory quantum gravity candidate. contrarily conventional black holes solutions gravitational field equations general theory relativity quantum gravity black holes incorporate quantum gravity effects vicinity origin classically curvature singularity occurs. rough picture pairs virtual particles emerge vacuum near event horizon one member pair captured escaping vicinity black hole. small radius high density black hole would allow pass straight object consisting normal atoms interacting atoms. case spatial extension fermions limits minimum mass black hole order 1016 kg showing micro black holes may exist. argued 2001 circumstances black hole production could important observable effect lhc future higherenergy colliders. energy necessary produce black hole 39 orders magnitude greater energies available large hadron collider indicating lhc produce mini black holes. make black hole one must concentrate mass energy sufficiently escape velocity region concentrated exceeds speed light. expected observable effects primordial black hole initial mass around 1012 kg would completing evaporation today less massive primordial black hole would already evaporated. rules general relativity would broken consistent theories matter space time break around event horizon black hole. optimal conditions fermi gammaray space telescope satellite launched june 2008 might detect experimental evidence evaporation nearby black holes observing gamma ray bursts. fermion limits assume minimum mass needed sustain black hole opposed opposite minimum mass needed start black hole theory achievable lhc conditions. minimum mass black hole early speculation stephen hawking conjectured black hole would form mass 10\u22128 kg roughly planck mass. stage black hole would hawking temperature tp8\u03c0 56\u00d71030 k means emitted hawking particle would energy comparable mass black hole. black holes produced theory general relativity proven wrong exist small distances. even time universe able collapse singularity due uniform mass distribution rapid growth. however suggested small black hole sufficient mass passing earth would produce detectable acoustic seismic signal. calculations showed smaller size black hole faster evaporation rate resulting sudden burst particles micro black hole suddenly explodes. examples extensions include large extra dimensions special cases randall \u2013 sundrum model string theory configurations like gkp solutions. 
according theory employed model quantum gravity effects different kinds quantum gravity black holes namely loop quantum black holes noncommutative black holes asymptotically safe black holes. correct conservation law total orbital plus spin angular momentum matter curved spacetime requires spacetime equipped torsion. black hole formed way called primordial black hole widely accepted hypothesis possible creation micro black holes. point object longer described classical black hole hawkings calculations also break. \u2013 scientific essay energies dimensions black holes associated public attention cern norbert frischauf also available podcast primordial black holes formation early universe production black hole requires concentration mass energy within corresponding schwarzschild radius. scenarios black hole production could possibly important observable effect large hadron collider lhc. unlikely collision microscopic black hole object star planet would noticeable. according formulae black hole thermodynamics black hole loses mass hotter becomes faster evaporates approaches planck mass. quantum black holes decay emitting sprays particles could seen detectors facilities. however scenarios involving extra dimensions space planck mass low tev range. torsion modifies dirac equation presence gravitational field causes fermion particles spatially extended. certain special configurations extra dimensions effect lower planck scale tev range. micro black hole would also entropy 4\u03c0 nats approximately minimum possible value. however fully exclude possibility black holes various sizes may emerged locally. primordial black hole sufficiently low mass evaporate near planck mass within lifetime universe.", "custom_approach": "Additional safety arguments beyond those based on Hawking radiation were given in the paper, which showed that in hypothetical scenarios with stable micro black holes massive enough to destroy Earth, such black holes would have been produced by cosmic rays and would have likely already destroyed astronomical objects such as planets, stars, or stellar remnants such as neutron stars and white dwarfs.It is possible, in some theories of quantum gravity, to calculate the quantum corrections to ordinary, classical black holes. A paper by Choptuik and Pretorius, published in 2010 in Physical Review Letters, presented a computer-generated proof that micro black holes must form from two colliding particles with sufficient energy, which might be allowable at the energies of the LHC if additional dimensions are present other than the customary four (three spatial, one temporal).Hawking's calculation and more general quantum mechanical arguments predict that micro black holes evaporate almost instantaneously. The fermion limits assume a minimum mass needed to sustain a black hole, as opposed to the opposite, the minimum mass needed to start a black hole, which in theory is achievable in the LHC under some conditions.In 1975, Stephen Hawking argued that, due to quantum effects, black holes \"evaporate\" by a process now referred to as Hawking radiation in which elementary particles (such as photons, electrons, quarks and gluons) are emitted. On the moon, it may leave a distinct type of crater, still visible after billions of years.In familiar three-dimensional gravity, the minimum energy of a microscopic black hole is 1016 TeV (equivalent to 1.6 GJ or 444 kWh), which would have to be condensed into a region on the order of the Planck length. 
It was hypothesized by Zel'dovich and Novikov first and independently by Hawking that, shortly after the Big Bang, the Universe was dense enough for any given region of space to fit within its own Schwarzschild radius. It is estimated that to collide two particles to within a distance of a Planck length with currently achievable magnetic field strengths would require a ring accelerator about 1,000 light years in diameter to keep the particles on track. Such Planck-mass black holes may in effect be stable objects if the quantized gaps between their allowed energy levels bar them from emitting Hawking particles or absorbing energy gravitationally like a classical black hole. While Hawking radiation is sometimes questioned, Leonard Susskind summarizes an expert perspective in his book The Black Hole War: \"Every so often, a physics paper will appear claiming that black holes don't evaporate. In these approaches, black holes are singularity-free.Virtual micro black holes were proposed by Stephen Hawking in 1995 and by Fabio Scardigli in 1999 as part of a Grand Unified Theory as a quantum gravity candidate. In such case, they would be weakly interacting massive particles; this could explain dark matter.Production of a black hole requires concentration of mass or energy within the corresponding Schwarzschild radius. Contrarily to conventional black holes, which are solutions of gravitational field equations of the general theory of relativity, quantum gravity black holes incorporate quantum gravity effects in the vicinity of the origin, where classically a curvature singularity occurs. Thus, the most likely outcome would be micro black holes.A primordial black hole with an initial mass of around 1012 kg would be completing its evaporation today; a less massive primordial black hole would have already evaporated. A rough picture of this is that pairs of virtual particles emerge from the vacuum near the event horizon, with one member of a pair being captured, and the other escaping the vicinity of the black hole. The small radius and high density of the black hole would allow it to pass straight through any object consisting of normal atoms, interacting with only few of its atoms while doing so. In this case the spatial extension of fermions limits the minimum mass of a black hole to be on the order of 1016 kg, showing that micro black holes may not exist. It was argued in 2001 that, in these circumstances, black hole production could be an important and observable effect at the LHC or future higher-energy colliders. The energy necessary to produce such a black hole is 39 orders of magnitude greater than the energies available at the Large Hadron Collider, indicating that the LHC cannot produce mini black holes. To make a black hole, one must concentrate mass or energy sufficiently that the escape velocity from the region in which it is concentrated exceeds the speed of light. The rules of general relativity would be broken, as is consistent with theories of how matter, space, and time break down around the event horizon of a black hole. Under optimal conditions, the Fermi Gamma-ray Space Telescope satellite, launched in June 2008, might detect experimental evidence for evaporation of nearby black holes by observing gamma ray bursts. At this stage, a black hole would have a Hawking temperature of TP/8\u03c0 (5.6\u00d71030 K), which means an emitted Hawking particle would have an energy comparable to the mass of the black hole. 
But if black holes are produced, then the theory of general relativity is proven wrong and does not exist at these small distances. Even so, at that time, the Universe was not able to collapse into a singularity due to its uniform mass distribution and rapid growth. It has, however, been suggested that a small black hole of sufficient mass passing through the Earth would produce a detectable acoustic or seismic signal. His calculations showed that the smaller the size of the black hole, the faster the evaporation rate, resulting in a sudden burst of particles as the micro black hole suddenly explodes. Examples of such extensions include large extra dimensions, special cases of the Randall\u2013Sundrum model, and string theory configurations like the GKP solutions. In an early speculation, Stephen Hawking conjectured that a black hole would not form with a mass below about 10\u22128 kg (roughly the Planck mass). According to the theory employed to model quantum gravity effects, there are different kinds of quantum gravity black holes, namely loop quantum black holes, non-commutative black holes, and asymptotically safe black holes. The correct conservation law for the total (orbital plus spin) angular momentum of matter in curved spacetime requires that spacetime is equipped with torsion. A black hole formed in this way is called a primordial black hole and is the most widely accepted hypothesis for the possible creation of micro black holes. At this point then, the object can no longer be described as a classical black hole, and Hawking's calculations also break down. In such scenarios, black hole production could possibly be an important and observable effect at the Large Hadron Collider (LHC). It is unlikely that a collision between a microscopic black hole and an object such as a star or a planet would be noticeable. According to the formulae of black hole thermodynamics, the more the black hole loses mass, the hotter it becomes, and the faster it evaporates, until it approaches the Planck mass. Such quantum black holes should decay emitting sprays of particles that could be seen by detectors at these facilities. However, in some scenarios involving extra dimensions of space, the Planck mass can be as low as the TeV range. Torsion modifies the Dirac equation in the presence of the gravitational field and causes fermion particles to be spatially extended. \"Conjectures for the final fate of the black hole include total evaporation and production of a Planck-mass-sized black hole remnant. With certain special configurations of the extra dimensions, this effect can lower the Planck scale to the TeV range. This, however, does not fully exclude the possibility that black holes of various sizes may have emerged locally. Such a micro black hole would also have an entropy of only 4\u03c0 nats, approximately the minimum possible value. Any primordial black hole of sufficiently low mass will evaporate to near the Planck mass within the lifetime of the Universe. 
In higher-dimensional spacetime, the strength of gravity increases more rapidly with decreasing distance than in three dimensions.", "combined_approach": "additional safety arguments beyond based hawking radiation given paper showed hypothetical scenarios stable micro black holes massive enough destroy earth black holes would produced cosmic rays would likely already destroyed astronomical objects planets stars stellar remnants neutron stars white dwarfsit possible theories quantum gravity calculate quantum corrections ordinary classical black holes. paper choptuik pretorius published 2010 physical review letters presented computergenerated proof micro black holes must form two colliding particles sufficient energy might allowable energies lhc additional dimensions present customary four three spatial one temporalhawkings calculation general quantum mechanical arguments predict micro black holes evaporate almost instantaneously. fermion limits assume minimum mass needed sustain black hole opposed opposite minimum mass needed start black hole theory achievable lhc conditionsin 1975 stephen hawking argued due quantum effects black holes evaporate process referred hawking radiation elementary particles photons electrons quarks gluons emitted. moon may leave distinct type crater still visible billions yearsin familiar threedimensional gravity minimum energy microscopic black hole 1016 tev equivalent 16 gj 444 kwh would condensed region order planck length. hypothesized zeldovich novikov first independently hawking shortly big bang universe dense enough given region space fit within schwarzschild radius. estimated collide two particles within distance planck length currently achievable magnetic field strengths would require ring accelerator 1000 light years diameter keep particles track. planckmass black holes may effect stable objects quantized gaps allowed energy levels bar emitting hawking particles absorbing energy gravitationally like classical black hole. hawking radiation sometimes questioned leonard susskind summarizes expert perspective book black hole war every often physics paper appear claiming black holes nt evaporate. approaches black holes singularityfreevirtual micro black holes proposed stephen hawking 1995 fabio scardigli 1999 part grand unified theory quantum gravity candidate. case would weakly interacting massive particles could explain dark matterproduction black hole requires concentration mass energy within corresponding schwarzschild radius. contrarily conventional black holes solutions gravitational field equations general theory relativity quantum gravity black holes incorporate quantum gravity effects vicinity origin classically curvature singularity occurs. thus likely outcome would micro black holesa primordial black hole initial mass around 1012 kg would completing evaporation today less massive primordial black hole would already evaporated. rough picture pairs virtual particles emerge vacuum near event horizon one member pair captured escaping vicinity black hole. small radius high density black hole would allow pass straight object consisting normal atoms interacting atoms. case spatial extension fermions limits minimum mass black hole order 1016 kg showing micro black holes may exist. argued 2001 circumstances black hole production could important observable effect lhc future higherenergy colliders. energy necessary produce black hole 39 orders magnitude greater energies available large hadron collider indicating lhc produce mini black holes. 
make black hole one must concentrate mass energy sufficiently escape velocity region concentrated exceeds speed light. rules general relativity would broken consistent theories matter space time break around event horizon black hole. optimal conditions fermi gammaray space telescope satellite launched june 2008 might detect experimental evidence evaporation nearby black holes observing gamma ray bursts. stage black hole would hawking temperature tp8\u03c0 56\u00d71030 k means emitted hawking particle would energy comparable mass black hole. black holes produced theory general relativity proven wrong exist small distances. even time universe able collapse singularity due uniform mass distribution rapid growth. however suggested small black hole sufficient mass passing earth would produce detectable acoustic seismic signal. calculations showed smaller size black hole faster evaporation rate resulting sudden burst particles micro black hole suddenly explodes. examples extensions include large extra dimensions special cases randall \u2013 sundrum model string theory configurations like gkp solutions. early speculation stephen hawking conjectured black hole would form mass 10\u22128 kg roughly planck mass. according theory employed model quantum gravity effects different kinds quantum gravity black holes namely loop quantum black holes noncommutative black holes asymptotically safe black holes. correct conservation law total orbital plus spin angular momentum matter curved spacetime requires spacetime equipped torsion. black hole formed way called primordial black hole widely accepted hypothesis possible creation micro black holes. point object longer described classical black hole hawkings calculations also break. scenarios black hole production could possibly important observable effect large hadron collider lhc. unlikely collision microscopic black hole object star planet would noticeable. according formulae black hole thermodynamics black hole loses mass hotter becomes faster evaporates approaches planck mass. quantum black holes decay emitting sprays particles could seen detectors facilities. however scenarios involving extra dimensions space planck mass low tev range. torsion modifies dirac equation presence gravitational field causes fermion particles spatially extended. conjectures final fate black hole include total evaporation production planckmasssized black hole remnant. certain special configurations extra dimensions effect lower planck scale tev range. however fully exclude possibility black holes various sizes may emerged locally. micro black hole would also entropy 4\u03c0 nats approximately minimum possible value. primordial black hole sufficiently low mass evaporate near planck mass within lifetime universe. higherdimensional spacetime strength gravity increases rapidly decreasing distance three dimensions."}]