Topic: stringclasses (9 values)
News_Title: stringlengths (10–120)
Citation: stringlengths (18–4.58k)
Paper_URL: stringlengths (27–213)
News_URL: stringlengths (36–119)
Paper_Body: stringlengths (11.8k–2.03M)
News_Body: stringlengths (574–29.7k)
DOI: stringlengths (3–169)
Medicine
MicroRNA helps cancer evade immune system
Min-Zu Wu et al. miR-25/93 mediates hypoxia-induced immunosuppression by repressing cGAS, Nature Cell Biology (2017). DOI: 10.1038/ncb3615
http://dx.doi.org/10.1038/ncb3615
https://medicalxpress.com/news/2017-09-microrna-cancer-evade-immune.html
Abstract The mechanisms by which hypoxic tumours evade immunological pressure and anti-tumour immunity remain elusive. Here, we report that two hypoxia-responsive microRNAs, miR-25 and miR-93, are important for establishing an immunosuppressive tumour microenvironment by downregulating expression of the DNA sensor cGAS. Mechanistically, miR-25/93 targets NCOA3, an epigenetic factor that maintains basal levels of cGAS expression, leading to repression of cGAS during hypoxia. This allows hypoxic tumour cells to escape immunological responses induced by damage-associated molecular pattern molecules, specifically the release of mitochondrial DNA. Moreover, restoring cGAS expression results in an anti-tumour immune response. Clinically, decreased levels of cGAS are associated with poor prognosis for patients with breast cancer harbouring high levels of miR-25/93. Together, these data suggest that inactivation of the cGAS pathway plays a critical role in tumour progression, and reveal a direct link between hypoxia-responsive miRNAs and adaptive immune responses to the hypoxic tumour microenvironment, thus unveiling potential new therapeutic strategies. Main The ability of cancer cells to evade immune responses is critical for the emergence of malignant phenotypes 1 . Cancer cells can escape both innate and adaptive immune responses by deregulating immune effector cells and immunosuppressive cells, as well as molecules responsible for cancer cell recognition and elimination 2 . Hypoxia contributes to immunosuppression within tumours, helping shield cancer cells from immune attack and inhibiting immune killing functions 3 , 4 , 5 . For example, immunosuppressive cells (e.g., regulatory T cells) accumulate within hypoxic regions of tumours where they promote tumour progression 6 , 7 , 8 , 9 and activate immune tolerance mechanisms that enable cancer cells to evade host immunosurveillance. Despite recent advances, the molecular mechanisms underlying hypoxia-induced immune escape are not yet fully understood. MicroRNAs (miRNAs) regulate a variety of cellular processes, including immune cell differentiation, immune responses to infection, and the development of immune disorders 10 , 11 . In the context of tumorigenesis, dysregulated miRNAs promote tumour progression by regulating cancer pathways associated with tumour malignancy 12 . A recent screen for hypoxia-regulated miRNAs in breast cancer cells revealed that hypoxia modulates miRNA function during breast cancer progression 13 . Hypoxia-responsive miRNAs regulate a complex spectrum of candidate target genes, including those involved in proliferation, apoptosis, metabolism and migration 14 . Here we identify hypoxia-responsive miR-25 and miR-93 as critical factors in suppressing the expression of cyclic GMP-AMP synthase (cGAS) through a pathway that requires TET1 and NCOA3. In this way, these miRNAs promote immune escape of hypoxic tumours from damage-associated molecular pattern molecule (DAMP)-induced immunological stress, ultimately leading to the formation of an immunosuppressive tumour microenvironment. RESULTS Hypoxia signalling induces immunosuppressive phenotypes Recent findings suggest that hypoxia/HIF-1 signalling suppresses host defence against cancer 9 , 15 . To test this concept, we examined associations between hypoxic tumours and immunosuppression. We identified hypoxic breast tumour samples on the basis of levels of HIF-1α and assessed the expression of an array of inflammatory and immune mediators.
This analysis revealed decreased levels of Ifn-γ , an anti-tumour factor, as well as decreased levels of several factors known to drive anti-tumour immunity, such as Il-12 , Cxcl10 , Ifit1 and Tbx21 (ref. 16 ). In contrast, Il-10 , a tumour-promoting factor, as well as Il-1β and Il-17 , were increased ( Fig. 1a–c ). As a second means of studying hypoxic cancer cells, we subcutaneously inoculated wild-type mice with murine breast cancer cells (E0771) that overexpress a constitutively active form of HIF-1α (ref. 17 ). Because hypoxia results in the recruitment of immunosuppressive cells 4 , we characterized tumour-infiltrating immune cells (TICs). Fluorescence-activated cell sorting (FACS) analysis of tumour biopsies indicated an increase in CD11b + Ly6G − Ly6C high monocytic myeloid-derived suppressor cells (MDSCs) in HIF-1α-overexpressing tumours (compared with control tumours), whereas HIF-1α did not affect levels of CD11b + F4/80 + tumour-associated macrophages or CD11b + Ly6G + Ly6C low granulocytic MDSCs ( Supplementary Fig. 1a–c ). The population of effector CD8 + T cells was decreased in HIF-1α-overexpressing tumours, accompanied by a change in the percentage of tumour-infiltrating CD8 + T cells, whereas the percentage of CD4 + T cells remained the same ( Fig. 1d and Supplementary Fig. 1d, e ). Furthermore, we found greater numbers of CD4 + Foxp3 + regulatory T cells (Tregs) in HIF-1α-overexpressing tumours ( Fig. 1e ), consistent with previous reports showing enhanced recruitment of Tregs by hypoxia 9 , 18 . We also analysed these tumours and consistently observed immunosuppressive gene expression profiles, as shown by decreased levels of anti-tumour immunity factors and increased levels of tumour-promoting factors ( Fig. 1f, g ). Collectively, these results indicate that hypoxia/HIF-1α promotes immunosuppression during breast cancer formation in vivo . Figure 1: miR-25/93 induced by hypoxia drives immunosuppression. ( a ) Top: correlation between regions of hypoxia (defined by hypoxyprobe) and regions of high HIF-1α in murine tumour samples. Bottom: screening for hypoxic murine tumour samples. Endogenous HIF-1α was used as an indicator for hypoxia. E0771 cells were used for tumour formation. Scale bars, 50 μm. N, non-hypoxic sample; H, hypoxic sample. ( b , c ) Gene expression analysis for selected genes responsible for either anti-tumour immunity or tumour-promoting effects in hypoxic tumour samples, compared with non-hypoxic tumour samples. Anti-tumour immunity factors include Ifn-γ , Il-12 , Cxcl10 , Ifit1 and Tbx21 . Tumour-promoting factors include Il-10 , Il-1 and Il-17 . ( d ) The percentage of effector CD8 + cytotoxic T cells was decreased in HIF-1α-overexpressing tumours. Changes in effector CD8 + cytotoxic T cells were normalized to total CD8 + T-cell proportions. Effector CD8 + cytotoxic T cells were defined by the production of TNF (also known as TNF-α) and IFN-γ. E0771 cells expressing HIF-1α or empty vector were used for tumour formation. ( e ) HIF-1α-overexpressing tumours showed significantly higher recruitment of CD4 + Foxp3 + Tregs. ( f ) Gene expression analysis for anti-tumour immunity factors in whole-tumour homogenates derived from HIF-1α-overexpressing tumours or control tumours. ( g ) The level of tumour-promoting factors was increased in HIF-1α-overexpressing tumours versus control tumours. For b , c , control: n = 8 tumours from 8 different animals, hypoxic: n = 5 tumours from 5 different animals.
For d , e , control: n = 5 tumours from 5 different animals, HIF-1α: n = 7 tumours from 7 different animals. For f , g , control: n = 8 tumours from 8 different animals; HIF-1α: n = 8 tumours from 8 different animals. Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, ∗ ∗ P < 0.01, ∗ ∗ ∗ P < 0.001. The image shown in a is representative of three independent experiments. Unprocessed original scans of western blots are available in Supplementary Fig. 9 . Statistics source data are available in Supplementary Table 4 . Data shown in b , c , e – g represent the mean. For d , data are represented as mean ± s.d. Full size image Hypoxia-responsive miR-25/93 functions as an immunosuppressive factor To identify bona fide hypoxia-regulated miRNAs, we performed small RNA sequencing using different cellular models, including hypoxic cells and HIF-1α-overexpressing cells, as described previously 19 ( Supplementary Fig. 2a ). Ingenuity pathway analysis for selected miRNA candidates revealed correlations between these miRNAs and breast cancer formation, as well as oncogenic signalling pathways ( Supplementary Fig. 2b, c ). These data led us to focus on the miR-106b-25/miR-17-92 clusters, which are deregulated in human cancer 20 . It is well established that the miR-17-92 cluster plays roles in the development of the immune system and cancer; however, the role of the miR-106b-25 cluster remains largely unknown. We identified miR-25 and miR-93 as potential targets of hypoxia, as expression of these miRNAs was markedly increased in hypoxic conditions ( Fig. 2a–c and Supplementary Fig. 2d ). However, hypoxia did not affect levels of miR-106b, another member of the miR-106b-25 cluster. Short hairpin RNA (shRNA)-mediated knockdown of HIF-1α abolished the ability of hypoxia to induce miR-25/93 expression ( Supplementary Fig. 2e ), thereby indicating that miR-25/93 are hypoxia-responsive miRNAs. Figure 2: Hypoxia-responsive miR-25/93 functions as an immunosuppressive factor. ( a ) Upregulation of miR-25/93 in hypoxic and HIF-1α-overexpressing E0771 cells. ( b ) Elevated levels of miR-25/93 in hypoxic murine breast tumours. ( c ) miR-25/93 levels in HIF-1α-overexpressing tumours versus control tumours. ( d ) Inhibiting miR-25/93 reduced tumour growth in wild-type mice. ( e ) Reduced tumour growth by the inhibition of miR-25/93 was diminished in immune-deficient mice (NOD- Rag1 null IL2rg null ). ( f ) Suppression of miR-25/93 reduced HIF-1α-enhanced tumour growth. ( g ) Restoration of effector CD8 + cytotoxic T cells in HIF-1α-overexpressing tumours by suppressing miR-25/93. ( h ) Levels of CD4 + Foxp3 + Tregs in control tumours, HIF-1α-overexpressing tumours, and HIF-1α–miR-25/93-sh tumours. ( i ) Overexpression of miR-25/93 led to increased tumour proliferation. ( j ) The percentage of effector CD8 + cytotoxic T cells was decreased in miR-25/93-overexpressing tumours. Changes in effector CD8 + cytotoxic T cells were normalized to total CD8 + T cell proportions. Effector CD8 + cytotoxic T cells were defined by the production of TNF and IFN-γ. ( k ) miR-25/93-overexpressing tumours attracted more CD4 + Foxp3 + Tregs than control tumours. For a , n = 3 independent experiments. For b , c , n = 3 tumours from 3 different animals. For g , h , n = 5 tumours from 5 different animals. For j , k , control: n = 10 tumours from 10 different animals; miR-25/93: n = 13 tumours from 13 different animals. For i , n = 8 tumours from 8 different mice for each group.
Data shown in a – g , i , j represent the mean ± s.d. of indicated sample numbers. Data in h , k represent the mean. Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, ∗ ∗ P < 0.01, ∗ ∗ ∗ P < 0.001. Statistics source data are available in Supplementary Table 4 . Full size image Inhibiting miR-25/93 slowed tumour growth in wild-type mice. Importantly, this effect was compromised in immune-deficient mice, suggesting that miR-25/93 inhibits host immune responses against the tumour ( Fig. 2d, e ). To explore the role of miR-25/93 in hypoxia-relevant tumour immune responses, we inhibited miR-25/93 in tumours overexpressing HIF-1α. This compromised the ability of HIF-1α to: enhance tumour growth ( Fig. 2f ); decrease levels of effector CD8 + T cells; and increase recruitment of Tregs ( Fig. 2g, h ). Furthermore, we analysed the profile of TICs in tumours constitutively overexpressing miR-25/93. Monocytic MDSCs consistently accumulated in miR-25/93-overexpressing tumours; this was accompanied by an increased rate of tumour growth ( Fig. 2i and Supplementary Fig. 3a, b ). In contrast, miR-25/93 overexpression did not affect tumour-associated macrophages or granulocytic MDSCs ( Supplementary Fig. 3c, d ). Overexpression of miR-25/93 decreased the population of effector CD8 + T cells, alongside changes in the percentage of tumour-infiltrating CD4 + T cells and CD8 + T cells ( Fig. 2j and Supplementary Fig. 3e, f ). Similar to hypoxic tumours, miR-25/93 overexpression resulted in a greater number of Tregs ( Fig. 2k ). Gene expression analysis of miR-25/93-overexpressing tumours consistently indicated an immune regulatory gene signature associated with hypoxia ( Supplementary Fig. 3g, h ). Together, these results indicate that elevated levels of miR-25/93 in tumour cells drive the establishment of an immunosuppressive environment. TET1 is important for the upregulation of miR-25/93 in response to hypoxia To investigate the molecular mechanism underlying hypoxia-induced miR-25/93 expression, we first examined the role of HIF-1α. HIF-1α expression was required for hypoxia to induce the expression of miR-25/93 ( Supplementary Fig. 2e ). However, we did not observe significant HIF-1α binding to the miR-25/93 genomic region, as compared with the VEGF promoter, which HIF-1α bound tightly in response to hypoxia ( Supplementary Fig. 4a ). This suggests that hypoxia regulates miR-25/93 independently of direct HIF-1α binding. It is known that hypoxia can regulate miRNA biogenesis via epigenetic mechanisms 21 . Recent findings indicate that TET-mediated local 5-hydroxymethylcytosine (5hmC) changes are important for the regulation of hypoxia-responsive genes 19 , 22 . Thus, we examined the role of 5hmC and TET proteins in this context. hMeDIP-seq analysis as well as conventional hydroxymethylated DNA immunoprecipitation (hMeDIP) quantitative PCR (qPCR) analysis showed an increase in 5hmC levels within the miR-25/93 genomic loci in hypoxic cells ( Supplementary Fig. 4b, c ). Chromatin immunoprecipitation (ChIP)–qPCR analysis for TET1 and TET3 indicated increased binding of TET1, but not TET3, to these regions ( Supplementary Fig. 4d ). In support of these findings, silencing TET1, but not TET3, prevented hypoxia-induced upregulation of miR-25/93 expression ( Supplementary Fig. 4e–h ). Furthermore, the expression of wild-type TET1, but not an inactive form of TET1, restored the hypoxic epigenetic phenotype ( Supplementary Fig. 4i ).
Together, these results indicate an epigenetic layer of regulation in determining miR-25/93 levels in hypoxic conditions. A hypoxia–TET1–miR-25/93 signalling axis represses cGAS-dependent immunity Next, we sought to identify the downstream signalling pathways that contribute to miR-25/93-driven immunosuppression. RNA-seq analysis in breast cancer cells overexpressing miR-25/93 was conducted to identify miRNA-responsive genes ( Fig. 3a ). Gene ontology analysis of these downstream genes highlighted biological processes related to transcription factor activity, cell adhesion and so on ( Fig. 3b ). Validation of these potential target genes by qPCR indicated that cGAS, which is critical for the innate immune system to detect cytosolic DNA 23 , 24 , was a promising downstream target for miR-25/93. Decreased levels of cGAS were detected in breast cancer cells overexpressing miR-25/93 or in hypoxic conditions ( Fig. 3c–f ). In line with this, shRNA-mediated inhibition of miR-25/93 restored cGAS expression in hypoxic cancer cells ( Fig. 3g, h ). Because TET1 regulates miR-25/93 expression in hypoxia, we examined the expression of cGAS in TET1-deficient breast cancer cells. As expected, loss of TET1 inhibited hypoxia-induced downregulation of cGAS expression ( Fig. 3i, j ). Furthermore, in tumours overexpressing HIF-1α or miR-25/93, cGAS levels were consistently downregulated ( Fig. 3k, l ). Collectively, these results indicate a role for the hypoxia-TET1-miR-25/93 signalling axis in regulating cGAS. Figure 3: Identification of cGAS as a target of miR-25/93. ( a ) Comparison of gene expression profiles in miR-25/93-overexpressing cells versus control cells. MCF7 cells expressing miR-25/93 or empty vector were used in RNA-seq analysis. Data are presented by normalized log 2 -fold change. Upregulated genes (2,277) are highlighted in red; downregulated genes (1,330) are highlighted in blue. ( b ) Gene ontology results for up- and downregulated genes show the top ranked functional gene cluster correlated with miR-25/93. ( c ) cGAS mRNA levels were downregulated in miR-25/93-overexpressing cells versus control cells. ( d ) Western blot analysis for cGAS protein levels in MCF7 or MDA-MB-231 cells stably expressing miR-25/93. ( e , f ) Decreased level of cGAS in hypoxic, HIF-1α-overexpressing, and BTIC cells, compared with normoxic cells. ( g , h ) shRNA-mediated miR-25/93 inhibition reduced the ability of hypoxia to repress cGAS. ( i , j ) Gene expression analysis for cGAS in TET1-deficient breast cancer cells during hypoxia. Control cells express shRNA targeting EGFP. ( k ) Decreased level of cGAS in mouse breast tumours derived from E0771 stable transfectants expressing HIF-1α. ( l ) Overexpression of miR-25/93 correlated with decreased levels of cGAS. N, normoxia; H, hypoxia. Sorted cells are BTICs isolated from MCF7 or MDA-MB-231 cells. All gene expression analysis data ( c , e , g , i ) are presented as means ± s.d. ( n = 3 independent experiments). Data shown in k , l represent the mean of n = 8 tumours from 8 different animals. Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, ∗ ∗ ∗ P < 0.001. The images shown in f , h , j are representative of three independent experiments. Unprocessed original scans of western blots are available in Supplementary Fig. 9 . Statistics source data are available in Supplementary Table 4 . Full size image cGAS serves as a sensor for cytosolic DNA, inducing the production of type I interferons, which lead to innate immunity 24 . 
Hence, we investigated whether hypoxia attenuates the innate immune response against cytosolic DNA. We used qPCR analysis to assess levels of IFN-β expression, an indicator of cGAS-mediated immune response 24 . Hypoxia treatment inhibited the induction of IFN-β in breast cancer cells ( Fig. 4a ). Given that the TET1–miR-25/93 signalling pathway is required to repress cGAS expression in response to hypoxia, this pathway may also be important for hypoxia-induced repression of innate immunity responses to cytosolic DNA. Indeed, we observed restoration of the hypoxia-suppressed immune response in TET1-deficient cells ( Fig. 4b, c ). Likewise, inhibition of miR-25/93 in hypoxic cells restored the induction of IFN-β in the presence of cytosolic DNA ( Fig. 4d, e ). In support of these observations, we demonstrated that downregulation of cGAS both decreased the innate immune response to cytosolic DNA and enhanced hypoxia-induced immunosuppression ( Fig. 4f, g ). Furthermore, the cytosolic DNA-induced immune response was significantly reduced by knocking down cGAS expression in either MCF7 or MDA-MB-231 cells undergoing miR-25/93 inactivation ( Fig. 4h, i ). Together, these results suggest that repression of cGAS by the TET1–miR-25/93 pathway is crucial for hypoxia to suppress the immune response to cytosolic DNA. Figure 4: Hypoxia–TET1–miR-25/93 signalling axis reduces cGAS-mediated immune response. ( a ) MCF7 or MDA-MB-231 cells were transfected with herring testes (HT)-DNA, followed by hypoxia treatment for 18 h. IFN-β expression was measured by RT–qPCR as an indicator of cGAS-mediated immune response. ( b , c ) Downregulation of TET1 restored IFN-β expression in the presence of HT-DNA during hypoxia. ( d , e ) The inhibitory effect of hypoxia on the immune response was reduced by inactivation of miR-25/93. ( f , g ) cGAS deficiency significantly impaired the HT-DNA-induced immune response. ( h , i ) The immune phenotype of MCF7 or MDA-MB-231 cells stably expressing shRNAs against miR-25/93 was reversed by knocking down cGAS expression. Control cells express shRNA targeting EGFP. N, normoxia; H, hypoxia. All data presented here are mean ± s.d. ( n = 3 independent experiments). Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05. Statistics source data are available in Supplementary Table 4 . Full size image Levels of endogenous DNAs, considered DAMPs, are increased by hypoxia during cancer progression and can trigger the rejection of tumours by the immune system 25 , 26 . Recent studies have demonstrated that the cGAS-dependent DNA sensing pathway can also recognize mitochondrial DNA (mtDNA) released by apoptosis 27 , 28 . In line with this, hypoxia induces mitophagy and the extensive fragmentation of mitochondria 26 , 29 , 30 , which may lead to the release of mtDNA into the cytoplasm. We therefore hypothesized that to relieve immune pressure driven by the accumulation of cytosolic mtDNA, which occurs in the hypoxic tumour microenvironment during cancer progression, cancer cells must suppress the cGAS-dependent DNA sensing pathway via TET1 and miR-25/93. Thus, we examined whether hypoxia leads to the accumulation of cytosolic mtDNA in cancer cells. qPCR analysis demonstrated increased levels of cytosolic mtDNA in hypoxic breast cancer cells compared with normoxic controls ( Supplementary Fig. 5a ). Consistent with previous findings 29 , hypoxia treatment induced LC3-II and decreased levels of p62, markers of autophagy/mitophagy ( Supplementary Fig. 5b ).
To confirm that mtDNA activates the cGAS-dependent immune response, we transfected mtDNA into breast cancer cells and analysed IFN-β levels. The induction of IFN-β in this context indicated that cytosolic mtDNA serves as a trigger for innate immunity ( Supplementary Fig. 5c ). Notably, repression of miRNA-25/93 restored the induction of IFN-β by cytosolic mtDNA in hypoxic cells ( Supplementary Fig. 5d, e ). Furthermore, downregulation of cGAS attenuated the mtDNA-induced immune response ( Supplementary Fig. 5f, g ), confirming that cGAS acts as an important sensor for cytosolic mtDNA. We reasoned that manipulating miR-25/93 or cGAS levels may alter levels of cytosolic mtDNA, leading to compromised immunosuppression. Thus, we analysed cytoplasmic levels of mtDNA and found similar levels of hypoxia-induced cytoplasmic mtDNA in either miRNA-25/93-inactivated or cGAS-deficient cells, compared with control cells ( Supplementary Fig. 6a, b ). Together, these results reveal that the TET1–miR-25/93 signalling pathway represses a cGAS-dependent immune response that results from hypoxia-mediated release of mtDNA. To confirm that cGAS functions to dampen the immunosuppressive microenvironment, we constitutively expressed cGAS in miR-25/93-overexpressing tumours. FACS analysis demonstrated that re-expression of cGAS reversed immunosuppressive phenotypes, as indicated by significantly decreased populations of monocytic MDSCs, along with an increase in effector CD8 + T cells ( Fig. 5a, b ). Likewise, Tregs were decreased in cGAS-re-expressing tumours compared with miR-25/93-overexpressing tumours ( Fig. 5c ). Furthermore, gene expression analysis indicated increased levels of anti-tumour immunity factors in cGAS-re-expressing tumours, whereas levels of tumour-promoting factors were significantly decreased ( Fig. 5d, e ). As a key effector of cGAS signalling, type I interferons (IFNs) mediate anti-tumour immunity via their immunostimulatory functions 31 . As the immunosuppressive function of miR-25/93 may rely on restrained IFN-stimulated immunosurveillance, we examined levels of IFN-β in vivo . We observed decreased levels of IFN-β and cGAS in hypoxic tumours and HIF-1α-overexpressing tumours ( Fig. 5f ). IFN-β levels were restored in HIF-1α-overexpressing tumours in which miR-25/93 were inhibited, as well as in miR-25/93-overexpressing tumours in which cGAS was re-expressed ( Fig. 5g ). To ascertain the role of cGAS–IFN signalling in tumour development, we assessed tumour growth in mice lacking the interferon (alpha and beta) receptor 1 (IFNAR1). Forced expression of miR-25/93 increased tumour cell proliferation in wild-type mice; this effect was reversed by re-expressing cGAS ( Fig. 5h ). Loss of IFNAR1 compromised the ability of miR-25/93 to promote tumour progression, supporting the notion that cGAS–IFN signalling inhibits tumour progression ( Fig. 5i ). Figure 5: miR-25/93 signalling axis dampens anti-tumour immunity by downregulating the cGAS/IFNs pathway. ( a ) FACS analysis of CD11b + Ly6G − Ly6C high monocytic MDSCs in miR-25/93-overexpressing, cGAS-re-expressing and control tumours. E0771 cells were used for these gene manipulations. ( b ) Constitutive expression of cGAS restored the percentage of effector CD8 + cytotoxic T cells in miR-25/93-overexpressing tumours. Changes in effector CD8 + cytotoxic T cells were normalized to total CD8 + T cell proportions. Effector CD8 + cytotoxic T cells were defined by the production of TNF and IFN-γ.
( c ) Decreased levels of CD4 + Foxp3 + Tregs in cGAS-re-expressing tumours, as compared with miR-25/93-overexpressing tumours. ( d ) Reverted gene expression profile for anti-tumour immunity factors in cGAS-re-expressing tumours versus miR-25/93-overexpressing tumours. ( e ) The expression of tumour-promoting factors was downregulated by cGAS re-expression. ( f ) Western blot analysis for IFN-β and cGAS levels in either hypoxic tumours (top) or HIF-1α-overexpressing tumours (bottom). ( g ) Levels of IFN-β were regulated by the HIF-1α–miR-25/93–cGAS signalling axis. ( h ) The tumour growth rate of E0771 cells overexpressing miRNA-25/93 was higher than that of either cGAS-re-expressing cells or control cells in wild-type mice. ( i ) Tumour growth analysis in IfnabR −/− mice injected with control cells or cells overexpressing miR-25/93 with or without cGAS re-expression. N, normoxia; H, hypoxia. For a – c , control: n = 4 tumours from 4 different animals, miR-25/93: n = 5 tumours from 5 different animals; cGAS: n = 5 tumours from 5 different animals. For b , data represent the mean ± s.d. For a , c – e , data represent the mean. Data shown in h , i are mean ± s.d. of n = 10 tumours from 10 different animals for each group. Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, ∗ ∗ P < 0.01, ∗ ∗ ∗ P < 0.001. Images shown in f , g are representative of three independent experiments. Unprocessed original scans of western blots are available in Supplementary Fig. 9 . Full size image In addition, we found that tumours lacking cGAS were more proliferative than control tumours ( Supplementary Fig. 7a ). As expected, the analysis of TIC profiles in cGAS-deficient tumours revealed a pattern similar to miR-25/93- or HIF-1α-overexpressing tumours ( Supplementary Fig. 7b–d ). Silencing cGAS decreased the percentage of effector CD8 + T cells; this was accompanied by changes in the levels of tumour-infiltrating CD4 + T cells and CD8 + T cells ( Supplementary Fig. 7e–g ). Likewise, FACS analysis indicated a significant increase in Tregs in cGAS-deficient tumours ( Supplementary Fig. 7h ). Gene expression analysis further showed that downregulation of cGAS recapitulated the immune gene signature that characterizes hypoxic tumours ( Supplementary Fig. 7i–k ). miR-25/93 indirectly regulates cGAS expression by targeting NCOA3 under hypoxic conditions Next, we sought to characterize the molecular mechanisms by which miR-25/93 represses cGAS. Analysis of the 3′-UTR of the cGAS gene did not identify a putative miR-25/93 target sequence, which led us to conclude that miR-25/93 suppresses cGAS expression by targeting other regulators. In our search for additional factors that regulate cGAS, we noticed that the cGAS promoter region contains an AP-1 binding site. It is unlikely that AP-1 is a key factor in the hypoxia-mediated repression of cGAS, but our RNA-seq data indicated that NCOA3, a member of the nuclear receptor co-activator (Ncoa) family that coordinates with AP-1 (ref. 32 ), was downregulated in miR-25/93-overexpressing cells ( Fig. 6a ). Likewise, hypoxia treatment resulted in decreased levels of NCOA3 ( Fig. 6b ). Analysis of the 3′-UTR associated with the NCOA3 transcript identified conserved target sequences for miR-25/93. A 3′-UTR reporter assay showed that miR-25/93 suppressed NCOA3 reporter activity. Mutating the miR-25/93 target sequences within the NCOA3 3′-UTR abrogated miR-25/93-mediated repression ( Fig. 6c ). Together, these results indicate that NCOA3 is a downstream target of miR-25/93.
Figure 6: Hypoxia–miR-25/93 regulates cGAS signalling via targeting NCOA3. ( a ) NCOA3 mRNA and protein levels in miR-25/93-overexpressing cells versus control cells. Left: RT–qPCR analysis. Right: western blot analysis. MCF7 and MDA-MB-231 cells were used, as indicated. ( b ) Hypoxia downregulated the expression of NCOA3 in MCF7 and MDA-MB-231 breast cancer cells. ( c ) Top: multiple species sequence alignment of the NCOA3 3′-UTR for putative miR-25 (right) and miR-93 (left) target sites, as indicated. Bottom: relative reporter activity of the wild-type and mutant NCOA3 reporter in HEK 293T cells transfected with miR-25/93 expression vector or empty vector following hypoxia treatment. ( d ) Re-expression of NCOA3 restored the cGAS level in response to hypoxia. Left: real-time PCR analysis. Right: western blot analysis. MCF7 and MDA-MB-231 cells were used, as indicated. ( e ) ChIP analysis for NCOA3 binding to the proximal promoter region ( ∼ −1 kbp) and transcription start site (TSS) of the cGAS gene during hypoxia. ( f ) Decreased binding of CBP to cGAS genomic regions in hypoxic cells. ( g ) RNA pol II binding at the TSS of cGAS was decreased under hypoxic conditions. ( h ) Levels of H3K27Ac in the cGAS genomic region were reduced by hypoxia. ( i ) Hypoxia downregulated H3K4me2 levels in cGAS genomic regions. ( j ) IgG control for ChIP assay. N, normoxia; H, hypoxia. For ChIP assay, two primer sets were used to target the proximal promoter region and TSS of the cGAS gene. All data presented here are mean ± s.d. ( n = 3 independent experiments). Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, compared with controls. Images shown in a , b , d are representative of three independent experiments. Unprocessed original scans of western blots are available in Supplementary Fig. 9 . Statistics source data are available in Supplementary Table 4 . Full size image Because NCOA3 functions as a lysine acetyltransferase involved in transcriptional activation 33 , we asked whether NCOA3 regulates cGAS expression in response to hypoxia. We found that ectopic expression of NCOA3 restored the expression of cGAS in response to hypoxia ( Fig. 6d ). ChIP–qPCR analysis revealed decreased levels of NCOA3 binding to the cGAS promoter in hypoxic cancer cells, compared with normoxic controls. This was associated with decreased levels of CBP binding, a protein that interacts with NCOA3, as well as decreased binding of RNA pol II to the cGAS transcription start site ( Fig. 6e–g ). Moreover, two epigenetic modifications indicative of active transcription, H3K27Ac and H3K4me2, are associated with NCOA3 function 34 and were consistently downregulated in hypoxic cells ( Fig. 6h–j ). Re-expression of NCOA3 restored levels of CBP binding to cGAS genomic regions in hypoxic cancer cells ( Fig. 7a, b, f, g ). Likewise, levels of H3K27Ac and H3K4Me2 were restored by re-expression of NCOA3 ( Fig. 7c–e, h–j ). Collectively, these results indicate that repression of NCOA3 by miR-25/93 is required for the suppression of cGAS in response to hypoxia. Figure 7: Re-expression of NCOA3 restores the epigenetic context required for cGAS expression. ( a , b ) ChIP assay of MCF7-NCOA3 cells for NCOA3 and CBP protein binding on genome regions of the cGAS gene compared with control cells. ( c – e ) H3K27Ac and H3K4Me2 level within genome regions of the cGAS gene in MCF7-NCOA3 cells versus control cells. IgG was used as a control for ChIP assay.
( f , g ) NCOA3 and CBP binding were restored in MDA-MB-231-NCOA3 cells, compared to control cells. ( h – j ) ChIP analysis for H3K27Ac, H3K4Me2 and IgG on genome regions of the cGAS gene in MDA-MB-231-NCOA3 cells versus control cells. Two primer sets were used, as indicated in Fig. 6 . N, normoxia; H, hypoxia. All data presented here are mean ± s.d. ( n = 3 independent experiments). Samples were compared using two-tailed Student’s t -test. ∗ P < 0.05, compared with controls. Statistics source data are available in Supplementary Table 4 . Full size image The biological significance for miR-25/93 and cGAS in clinical tissues To test the relevance of these findings to human cancer, we first examined the prognostic impact of miR-25/93. Levels of these miRNAs were correlated with poor prognosis for patients with different types of hypoxia-relevant tumour, such as brain, colon and breast ( Fig. 8a ). Notably, higher levels of miR-25 and miR-93 were associated with reduced overall survival in patients with invasive breast carcinoma ( Fig. 8b ). We next examined the prognostic impact of cGAS levels for 148 patients with invasive ductal carcinoma (IDC). Levels of cGAS were inversely associated with poor prognosis for these patients ( Fig. 8c, d ). Consistent with this result, lower levels of cGAS were correlated with poor survival for cancer patients harbouring elevated levels of miR-93 ( Fig. 8e ), whereas other identified miRNA targets (for example, Bim and PTEN) did not exhibit clinical significance ( Supplementary Fig. 8a, b ), suggesting that the downregulation of cGAS is a biologically relevant factor in miR-25/93-driven tumour progression. Collectively, these results support our hypothesis that repression of cGAS by hypoxia-responsive miR-25/93 is critical for establishing an immunosuppressive environment that is associated with cancer progression. Figure 8: The correlation of miR-25/93–cGAS with prognosis for patients with cancer. ( a ) The Cancer Genome Atlas (TCGA) analysis for miR-25 or miR-93 indicated that levels of these miRNAs correlated with poor survival of patients with brain, colon or breast tumours. ( b ) The clinical relevance of levels of miR-25/93 in cancer patients with breast invasive carcinoma. ( c ) Immunohistochemical staining for cGAS levels for different grades of IDC revealed an inverse correlation between cGAS levels and tumour grades. Scale bars, 50 μm. ( d ) Overall survival analysis for 148 patients with IDC showed an inverse correlation between cGAS levels and poor survival of cancer patients. The intensity of cGAS staining from low to high was divided into two groups (0 and 1+, 2+ and 3+). n = 12 for 0, negative; n = 56 for 1+, low expression; n = 74 for 2+, intermediate expression; n = 6 for 3+, high expression. n represents the number of patient samples for each group. ( e ) TCGA analysis indicated reduced overall survival of breast cancer patients harbouring lower levels of cGAS and higher levels of miR-93. ( f ) A model of miR-25/93-mediated immune escape of hypoxic tumours from DAMP-induced immune stress. Sample numbers and P values are indicated in the panel for TCGA analysis and immunohistochemical study. The prognostic significance was assessed by Kaplan–Meier analysis. Full size image DISCUSSION The mechanisms by which tumour cells maintain immunological self-tolerance and hinder effective tumour immunity are critical for tumour progression. Here, we uncover a mechanism for hypoxia-induced escape from immunological stress caused by DAMPs ( Fig. 8f ). 
Hypoxia induced the expression of miR-25/93 by increasing 5hmC levels (that is, DNA demethylation) near the miR-25/93 gene via the TET1 protein. Differential regulation of the miR-106b-25 cluster in response to hypoxia, as shown by our data and previous reports 35 , 36 , suggests coordination between epigenetic factors and HIF-mediated post-transcriptional machinery 37 to differentially regulate miR-25/93 levels. These data also reveal the complexity of the role played by 5hmC and TET in regulating cellular responses to hypoxia. This is also supported by our observations ( Supplementary Fig. 8c ), as well as recent reports 38 , 39 , 40 , indicating an uncoupling of expression of the MCM7 gene (host gene for the miR-106b-25 cluster) from the differential expression of members of the miR-106b-25 cluster. Recently, miR-25/93 has emerged as an important onco-miRNA during tumorigenesis 41 , and here we further reveal its function in inducing immunosuppressive phenotypes. To promote immune avoidance, miR-25/93 repressed cGAS by disrupting the epigenetic machinery that maintains basal levels of cGAS (this machinery includes NCOA3). Activation of the cGAS-dependent cytosolic DNA sensing pathway within tumour-resident immune cells results in anti-tumour immunity, which contributes to tumour regression 42 . We observed consistent repression of cGAS by hypoxia in different cell types, although miR-25/93 is probably not essential for this effect in every case. Thus, we conclude that the repression of cGAS–IFN signalling in tumour cells is a critical event during cancer progression, indicating an important role for hypoxia-induced cell-intrinsic immune tolerance in the development of tumours. Interestingly, differences in tumour-infiltrating immune cells (for example, CD4 + T cells and MDSCs) between different experimental settings (for example, Supplementary Figs 1 and 3 ) suggest that hypoxia-responsive molecules regulate a diversity of immunosuppressive pathways. Thus, further analyses are required to understand the interplay between the tumour microenvironment and tumour immune responses. Excessive DAMPs influence immunity and are associated with inflammatory responses, which contribute to many chronic diseases, including cancer 43 . mtDNA represents one type of DAMP. During cancer progression, hypoxia induces the intracellular translocation and release of mtDNA 26 , which may contribute to immunological stress. We observed increased levels of cytoplasmic mtDNA in hypoxic cells, leading to the induction of an immune response. This immune response was inhibited by the hypoxia–TET1–miR-25/93 signalling axis, suggesting a direct link between hypoxia-responsive miRNAs and immune tolerance to DAMP-induced immunological stress. In summary, our results highlight the link between cGAS downregulation and tumour immunosuppression, and implicate miR-25/93 as a central regulator that acts in concert with hypoxia to regulate immune escape from DAMP-induced immunological pressure. These findings have potential implications for anti-cancer immunotherapies. Methods Cells, plasmids, stable transfection and oxygen deprivation. The E0771 murine breast cancer cell line (CH3 BioSystems) was maintained in RPMI medium supplemented with 10% FBS and 1% penicillin/streptomycin. MCF7 and MDA-MB-231 cell lines (ATCC) were cultured in DMEM medium supplemented with 10% FBS at 37 °C in the presence of 5% CO 2 . Cells were routinely tested for mycoplasma contamination (once every month).
A pcDNA3 mHIF-1α-MYC (P402A/P577A/N813A) (Addgene) plasmid was used for generating HIF-1α overexpression clones in E0771 cells. A pBabe-NCOA3 expression vector was used for NCOA3 re-expression in human breast cancer cells. The plasmids for gene knockdown experiments were generated by inserting an oligonucleotide targeting a specific gene sequence into the pSUPER vector (Oligoengine). These vectors were then used to establish stable transfectants. For miRNA inhibition, MISSION LentimiRNA inhibitors (Sigma) targeting miR-25 and miR-93 were used to generate stable transfectants in MCF7 or MDA-MB-231 cells. For miRNA overexpression, the MDH1-PGK-GFP vector (Addgene) was used to express human or mouse miR-25/93 in human breast cancer cells or murine breast cancer cells, respectively. Oxygen deprivation was conducted in a hypoxic incubator with 1% O 2 , 5% CO 2 and 94% N 2 for 18 or 24 h. Western blot analysis, RNA extraction and quantitative real-time PCR. Western blotting was performed following standard protocols, as described previously 19 . Briefly, cells were harvested and lysed using RIPA buffer (50 mM Tris, 150 mM NaCl, 0.1% SDS, 0.5% deoxycholate, 1% NP-40). Protein extracts were subjected to SDS–PAGE analysis. The membranes were blocked with 5% non-fat milk followed by antibody hybridization and the signals were visualized on X-ray film. For RNA extraction, total RNA from cultured cells was extracted using the Trizol reagent (Invitrogen Life Technologies), and 1 μg of RNA was used for cDNA synthesis. Quantitative real-time PCR was carried out to quantify gene expression levels by using the CFX384 Touch Real-Time PCR Detection System (BIO-RAD Laboratories). For measuring mtDNA in the cytosol, as described previously 44 , the Mitochondria Isolation Kit (Thermo Fisher Scientific) was used to isolate cytosolic, mitochondrial and nuclear fractions from cancer cells. Primers for human cytochrome B, human cytochrome C oxidase subunit III and human NADH dehydrogenase were used in qPCR analysis. Cytoplasmic mtDNA levels were normalized to nuclear DNA encoding 18S ribosomal RNA. The antibodies and primers used in the experiments are listed in Supplementary Table 3 . Luciferase reporter assays. Cells were seeded onto 6-well plates and transfected with the following: psiCheck2 luciferase reporter plasmid containing wild-type or mutated Ncoa3 3′-UTR and expression vector encoding human miR-25/93. Transfected cells were exposed to 20% or 1% O 2 for 24 h. Luciferase activity in cell lysates was measured using the Dual-Luciferase Reporter Assay System (Promega). qPCR-ChIP and hMeDIP. qPCR-ChIP assay was performed as described previously 45 . Briefly, crosslinked cell lysate was sonicated and subjected to an immunoprecipitation reaction with specific-antibody-conjugated beads. The immunoprecipitated DNA was purified through a phenol–chloroform DNA extraction protocol and then subjected to qPCR analysis. For the hMeDIP assay, genomic DNA was prepared with the genomic DNA extraction kit (Promega) and sonicated into fragments of ∼ 500 base pairs. Fragmented DNA was denatured for 10 min at 95 °C and immunoprecipitated overnight at 4 °C with anti-5hmC antibody (Active Motif) in 500 μl IP buffer (10 mM sodium phosphate, 140 mM NaCl, 0.05% Triton X-100). DNA was eluted from the beads followed by purification with the chromatin IP DNA purification kit (Active Motif). Flow cytometry and cytokine secretion assays.
Cell surface marker staining and flow cytometric analysis for CD3, CD4, CD8, CD45, F4/80 and CD11b (Biolegend) expression was performed as described previously 46 . For MDSC analysis, the mouse MDSC flow cocktail kit (Biolegend) was used. For intracellular staining for Foxp3 (Biolegend), the True-Nuclear Transcription Factor Buffer Set (Biolegend) was used. To measure T-cell cytokine production for effector CD8 + cytotoxic T cells, TICs were treated with Cell Activation Cocktail (Biolegend) for at least 4 h at 37 °C before staining for TNF and IFN-γ (Biolegend). In vivo models. All animal studies were performed under protocols approved by the Institutional Review Board of the Salk Institute for Biological Studies, La Jolla, USA. This study is compliant with all relevant ethical regulations regarding animal research. Cells were trypsinized and resuspended in PBS/Matrigel (1:1; BD Biosciences). Suspended cells were then injected into the mammary fat pad or subcutaneously into C57BL/6 mice or IfnαβR −/− mice. Tumour size was measured by calliper. Tumour biospecimens were collected at a similar size for immune cell profiling analysis or gene expression analysis. After six weeks, all of the tumours were collected and the animals were euthanized. High-throughput whole-transcriptome (mRNA-seq) and small RNA-seq. Total RNA was isolated by the Trizol reagent (Invitrogen Life Technologies) and treated with DNase. Invitrogen Qubit and Agilent Tape Station were used to determine RNA concentration and RNA integrity number (RIN), respectively, prior to library preparation. Stranded mRNA-seq libraries were prepared using the Illumina TruSeq Stranded mRNA Library Prep Kit according to the manufacturer’s instructions. Briefly, RNA with a poly-A tail was isolated using magnetic beads conjugated to poly-T oligonucleotides. mRNA was then fragmented and reverse-transcribed into cDNA. dUTPs were incorporated, followed by second-strand cDNA synthesis. The dUTP-incorporated second strand was not amplified. cDNA was then end-repaired, index adaptor-ligated and PCR amplified. AMPure XP beads (Beckman Coulter) were used to purify nucleic acid after each step of the library preparation. Small RNA libraries were generated using the NEB Next Multiplex Small RNA Library Prep Set for Illumina according to the vendor’s instructions (New England Biolabs). Briefly, the 3′ SR Adaptor was ligated to short RNAs at 25 °C for 1 h. The SR RT Primer was added to hybridize any unligated 3′ SR Adaptor to prevent adaptor–dimer formation. The 5′ SR Adaptor was then ligated, followed by reverse transcription and PCR amplification to add indexes to each library. Amplified libraries were TBE-PAGE gel-purified (130–150 bp). All sequencing libraries were then quantified, pooled and sequenced with single-end 50-base-pair (bp) reads on an Illumina HiSeq 2500 at the Salk NGS Core. Raw sequencing data were de-multiplexed and converted into FASTQ files using CASAVA (v1.8.2). RNA-seq analysis. Sequenced reads were quality-tested using FASTQC and aligned to the hg19 human genome using the STAR aligner. Mapping was carried out using default parameters (up to ten mismatches per read, and up to nine multi-mapping locations per read). The genome index was constructed using the gene annotation supplied with the hg19 Illumina iGenomes collection and a sjdbOverhang value of 100.
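The alignment settings quoted above translate directly into a STAR invocation. The following is a minimal sketch only, with hypothetical file names and thread count; the hg19 index, the sjdbOverhang of 100, and the mismatch and multi-mapping limits are the values stated in the text.

```python
# Hypothetical STAR alignment step mirroring the parameters described above.
# File names and thread count are illustrative, not taken from the paper.
import subprocess

subprocess.run([
    "STAR",
    "--runThreadN", "8",                          # hypothetical thread count
    "--genomeDir", "hg19_star_index",             # index built with --sjdbOverhang 100
    "--readFilesIn", "sample.fastq.gz",           # single-end 50-bp reads
    "--readFilesCommand", "zcat",
    "--outFilterMismatchNmax", "10",              # up to ten mismatches per read
    "--outFilterMultimapNmax", "9",               # up to nine multi-mapping locations
    "--outSAMtype", "BAM", "SortedByCoordinate",
    "--outFileNamePrefix", "sample_",
], check=True)
```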
FPKM normalized expression was quantified across all gene exons (RNA-seq), using the top-expressed isoform as a proxy for gene expression, and differential genes were defined as having a log 2 -fold change >0.5 after the addition of a pseudocount of 5 to control for low-expressing noisy genes. The normalized expression table for all genes, and lists of differential genes, are provided as Supplementary Data . Clinical sample collection and evaluation of immunohistochemistry. Adequate surgical specimens from 148 breast invasive ductal carcinoma patients with relevant pathological information were collected from 1992 to 2005. This study was approved by The Institutional Review Board, Salk Institute for Biological Studies. The study is compliant with all relevant ethical regulations and informed consent was obtained from all participants. Tissue microarray slides comprised 45 grade I cases, 53 grade II cases and 50 grade III cases. The histopathological differentiation of breast invasive ductal carcinoma was determined on the basis of the WHO classification criteria for tumours. The pathological diagnosis for each case was reviewed blindly by at least two senior pathologists. Immunohistochemistry was performed using an EnVision Dual Link System-HRP (DAB+) kit. To evaluate the immunoreactivity of cGAS, cases were scored from 0 to 3+ (0, negative; 1+, low expression; 2+, intermediate expression; 3+, high expression) on the basis of the intensity of cGAS staining. Statistics and reproducibility. For animal experiments, no statistical method was used to predetermine sample size and the experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. All data were reported as mean ± s.d. Statistical analysis was performed by using Student’s t -test and P < 0.05 was considered significant. The prognostic significance of the clinicopathological parameters related to survival rate was assessed by Kaplan–Meier analysis. All statistical analyses were performed using SPSS version 20 (SPSS, Inc., Chicago, Illinois, USA). The results relative to miRNA expression in human breast cancer are based on data generated by The Cancer Genome Atlas pilot project (TCGA) established by the NCI and NHGRI. Information about TCGA can be found at the TCGA website. Data availability. Sequencing data, including small RNA sequencing and RNA sequencing, have been deposited at the Gene Expression Omnibus (GEO) with accession number GSE79789 . Source data for Figs 1 – 4 , 6 and 7 and Supplementary Figs 1 , 4 and 5 have been provided as Supplementary Table 4 . Previously published hMeDIP-seq data are available under accession GSE60434 . All other data supporting the findings of this study are available from the corresponding authors on reasonable request. Additional Information Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Accession codes Primary accessions Gene Expression Omnibus GSE79789
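To make the differential-expression rule from the RNA-seq analysis above concrete, here is a minimal sketch; the FPKM table and its column names are hypothetical, while the pseudocount of 5 and the 0.5 log2-fold-change cutoff are the values given in the Methods.

```python
# Differential-gene call as described: add a pseudocount of 5 to FPKM values
# (damping noise from low-expressed genes), then threshold the log2 fold change.
import numpy as np
import pandas as pd

PSEUDOCOUNT = 5
CUTOFF = 0.5

fpkm = pd.read_csv("fpkm_table.csv", index_col="gene")  # hypothetical input
log2_fc = np.log2((fpkm["mir_25_93"] + PSEUDOCOUNT) /
                  (fpkm["control"] + PSEUDOCOUNT))

upregulated = log2_fc.index[log2_fc > CUTOFF]
downregulated = log2_fc.index[log2_fc < -CUTOFF]
print(f"{len(upregulated)} up, {len(downregulated)} down")
```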
The immune system automatically destroys dysfunctional cells such as cancer cells, but cancerous tumors often survive nonetheless. A new study by Salk scientists shows one method by which fast-growing tumors evade anti-tumor immunity. The Salk team uncovered two gene-regulating molecules that alter cell signaling within tumor cells to survive and subvert the body's normal immune response, according to a September 18, 2017, paper in Nature Cell Biology. The discovery could one day point to a new target for cancer treatment in various types of cancer. "The immunological pressure occurring during tumor progression might be harmful for the tumor to prosper," says Salk Professor Juan Carlos Izpisua Belmonte, senior author of the work and holder of the Roger Guillemin Chair. "However, the cancer cells find a way to evade such a condition by restraining the anti-tumor immune response." Cancerous tumors often grow so fast that they use up their available blood supply, creating a low-oxygen environment called hypoxia. Cells normally start to self-destruct under hypoxia, but in some tumors, the microenvironment surrounding hypoxic tumor tissue has been found to help shield the tumor. "Our findings actually indicate how cancer cells respond to a changing microenvironment and suppress anti-tumor immunity through intrinsic signaling," says Izpisua Belmonte. The answer was through microRNAs. MicroRNAs—small, noncoding RNA molecules that regulate genes by silencing RNA—have increasingly been implicated in tumor survival and progression. To better understand the connection between microRNAs and tumor survival, the researchers screened different tumor types for altered levels of microRNAs. They identified two microRNAs—miR25 and miR93— whose levels increased in hypoxic tumors. The team then measured levels of those two microRNAs in the tumors of 148 cancer patients and found that tumors with high levels of miR25 and miR93 led to a worse prognosis in patients compared to tumors with lower levels. The reverse was true for another molecule called cGAS: the lower the level of cGAS in a tumor, the worse the prognosis for the patient. Previous research has shown that cGAS acts as an alarm for the immune system by detecting mitochondrial DNA floating around the cell—a sign of tissue damage—and activating the body's immune response. "Given these results, we wondered if these two microRNA molecules, miR25 and miR93, could be lowering cGAS levels to create a protective immunity shield for the tumor," says Min-Zu (Michael) Wu, first author of the paper and a research associate in Salk's Gene Expression Laboratory. That is exactly what the team confirmed with further experiments. Using mouse models and tissue samples, the researchers found that a low-oxygen (hypoxia) state triggered miR25 and miR93 to set off a chain of cell signaling that ultimately lowered cGAS levels. If the researchers inhibited miR25 and miR93 in tumor cells, then cGAS levels remained high in low-oxygen (hypoxic) tumors. Researchers could slow tumor growth in mice if they inhibited miR25 and miR93. Yet, in immune-deficient mice, the effect of inhibiting miR25 and miR93 was diminished, further indicating that miR25 and miR93 help promote tumor growth by influencing the immune system. Identifying miR25 and miR93 may help researchers pinpoint a good target to try to boost cGAS levels and block tumor evasion of the immune response. However, the team says directly targeting microRNA in treatment can be tricky. 
Targeting the intermediate players in the signaling between the two microRNAs and cGAS may be easier. "To follow up this study, we're now investigating the different immune cells that can contribute to cancer anti-tumor immunity," adds Wu.
10.1038/ncb3615
Medicine
Why you drink black coffee: It's in your genes
Marilyn C. Cornelis et al, Genetic determinants of liking and intake of coffee and other bitter foods and beverages, Scientific Reports (2021). DOI: 10.1038/s41598-021-03153-7
http://dx.doi.org/10.1038/s41598-021-03153-7
https://medicalxpress.com/news/2021-12-black-coffee-genes.html
Abstract Coffee is a widely consumed beverage that is naturally bitter and contains caffeine. Genome-wide association studies (GWAS) of coffee drinking have identified genetic variants involved in caffeine-related pathways but not in taste perception. The taste of coffee can be altered by addition of milk/sweetener, which has not been accounted for in GWAS. Using UK and US cohorts, we test the hypotheses that genetic variants related to taste are more strongly associated with consumption of black coffee than with consumption of coffee with milk or sweetener and that genetic variants related to caffeine pathways are not differentially associated with the type of coffee consumed independent of caffeine content. Contrary to our hypotheses, genetically inferred caffeine sensitivity was more strongly associated with coffee taste preferences than with genetically inferred bitter taste perception. These findings extended to tea and dark chocolate. Taste preferences and physiological caffeine effects intertwine in a way that is difficult for individuals to distinguish, which may reflect conditioned taste preferences. Introduction Coffee and tea are among the most widely consumed beverages in the world 1 . The consumption of these plant-based beverages has been associated with a lower risk of chronic diseases such as type 2 diabetes, cardiovascular diseases, and several types of cancer 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 . Although plausible underlying biological mechanisms have been identified, more research is needed to establish the causal role of these beverages in human health 10 . Thus, understanding determinants of beverage choice and consumption level is important to inform research and public health strategies. Genome-wide association studies (GWAS) of coffee and tea drinking behavior have identified genetic variants involved in the metabolism and physiological effects of caffeine as determinants of the amount of these beverages consumed (Supplementary Table S1 ) 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 . Furthermore, a variant near a gene encoding an olfactory receptor ( OR5M8 ) has been associated with coffee intake. As coffee and tea have a bitter taste, it is also plausible that genetic variants related to bitter taste perception affect coffee and tea consumption. However, none of the loci identified in GWAS of coffee and tea intake overlap with TAS2R loci associated with taste perception of bitter compounds including propylthiouracil (PROP)/phenylthiocarbamide (PTC), caffeine, and quinine (Supplementary Table S1 ) 22 , 23 , 24 , 25 . Caffeine seeking behavior might explain the persistent consumption of coffee and tea despite their bitter taste 21 , 26 . However, the taste of these beverages is easily manipulated by the addition of sweetener and milk, behaviors not previously accounted for in GWAS or the vast majority of epidemiological studies. The current study uses genetic, dietary, and food preference (“liking”) data available from the UK Biobank (UKB) and two US cohorts, the Nurses’ Health Study (NHS) and Health Professionals Follow-up study (HPFS). We first test the hypothesis that published GWAS-confirmed variants related to taste are more strongly associated with black coffee consumption than with the consumption of total coffee or coffee with added sweetener or milk because their effects would not be masked by coffee taste manipulation.
We focus on coffee but extend this hypothesis to tea since we expect similar but weaker associations of taste-related genetic variants with tea consumption, because tea is reportedly less bitter than coffee 27 , 28 . As a negative control, we also examine published GWAS-confirmed loci involved in the metabolism and physiology of caffeine, which we would not expect to be differentially associated with type of coffee and tea consumed independent of caffeine content. Second, we examine whether there are shared genetic determinants of coffee and tea traits with other bitter-tasting foods, specifically beer and dark chocolate. Finally, we perform GWAS of liking or consumption of specific types of coffee and tea (e.g., with sugar versus without sugar) that may yield genetic variants not reported previously by GWAS of total coffee and tea consumption. Methods UK biobank In 2006–2010, the UKB recruited 502,633 participants aged 37–73 years at 22 centers across England, Wales, and Scotland 29 . Participants provided informed written consent, completed touchscreen questionnaires on sociodemographic factors, lifestyle, and medical history followed by an interviewer-administered questionnaire, physical assessment, and biospecimen collection 30 . Subsets of the cohort have returned for follow-up assessments and have completed additional on-line questionnaires. The latter are the primary source of data for the current analysis. This study was covered by the generic ethical approval for UKB studies from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17th June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. Coffee and tea consumption In 2009–2012, a subset of 122,292 participants who completed the baseline assessment center visit also completed at least two of five on-line 24 h dietary recalls 31 , 32 . Detailed collection methods for coffee and tea intake and methods for estimating added milk and sweetener are provided in Supplementary Methods S2 . Briefly, participants reporting consumption of coffee were probed for details concerning quantity (0.5–6 + cups/day), brew type (instant, filtered, espresso, cappuccino, latte, and other), and whether the coffee was decaffeinated (yes, no, varied) and sweetened (half, 1, 2, 3 + teaspoons or “varied” for sugar and artificial sweetener). Categorical measures of coffee quantity were converted to cups/day by using the midpoint of each category; those reporting 6 + cups/day were assigned an intake of 6 cups/day. Those reporting consumption of instant, filtered, espresso or other coffee were additionally asked if milk was added (yes, no, varied). Participants reporting consumption of tea were probed for details concerning quantity (1–6 + cups/day), brew type (green, black, rooibos, herbal and other), and whether it was decaffeinated (black tea only) and sweetened. Those reporting consumption of black or rooibos tea were additionally asked if milk was added. We also considered responses to the following bitter-tasting food items: "How many pints of beer, lager or cider?", "How many plain/dark chocolate bars (~ 50 g) did you have?", "How many servings of sprouts did you have?", and "How many servings of cabbage/greens/kale did you have?" which aligned with food preference items of interest (see below).
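The midpoint rule used above to convert categorical intake responses into cups/day is simple but worth pinning down; the sketch below uses illustrative category bounds, since the questionnaire's exact bins are not reproduced here.

```python
# Midpoint conversion for categorical intake responses, per the rule above:
# each closed category maps to its midpoint; the open-ended "6+" category
# maps to 6. Category bounds here are illustrative.
from typing import Optional

def cups_per_day(low: float, high: Optional[float] = None) -> float:
    """Midpoint of a reported intake category; open-ended top bin -> its floor."""
    return low if high is None else (low + high) / 2

assert cups_per_day(2, 3) == 2.5   # e.g. a hypothetical "2-3 cups/day" category
assert cups_per_day(6) == 6        # "6+ cups/day" assigned 6 cups/day
```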
Data collected via dietary recalls also allowed derivation of the energy content (kJ) and macronutrient composition of the diet. Unsweetened coffee consumers were defined as participants who never reported the addition of sweetener to coffee and never reported consumption of cappuccinos or lattes. No-milk coffee consumers were defined as participants who never reported the addition of milk to coffee and never reported consumption of cappuccinos or lattes. Black coffee consumers were those defined as both unsweetened and no-milk coffee consumers. Sweetened coffee consumers were defined as participants who always reported the addition of sweetener to coffee or who only consumed cappuccinos or lattes. Milk coffee consumers were defined as participants who always reported the addition of milk to coffee or who only consumed cappuccinos or lattes (see the illustrative sketch at the end of this subsection). Consumption data for coffee drinkers not defined as milk coffee consumers were set to missing for genetic analysis but set to 0 for trait correlation analysis; the same approach was applied to the other coffee consumption traits. Similar criteria were applied when defining the different tea consumers, while considering the relatively less detailed data collected for this beverage. Herein, tea-prepared-black refers to tea prepared without sweetener and milk, to avoid confusion with tea type (i.e., black, green). The comprehensive list of coffee and tea traits accounts, in part, for cultural differences in how these beverages are prepared between the UK and US cohorts.
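As a concrete illustration of the consumer-type definitions above, the following minimal Python sketch applies the same rules to one coffee drinker's recall records. The record layout and field names (brew_type, sweetener_added, milk_added) are assumptions for illustration, not the UKB data format.

```python
def classify_coffee_consumer(recalls):
    """Return the set of consumer-type labels for one coffee drinker.

    `recalls` is a list of dicts, one per coffee report across the 24-h
    recalls, with illustrative keys:
      brew_type       -- "instant", "filtered", "espresso", "cappuccino",
                         "latte" or "other"
      sweetener_added -- bool: any sugar/artificial sweetener reported
      milk_added      -- bool: milk reported (assumed recorded for all rows
                         here; in UKB it was asked for non-latte brews only)
    """
    latte_like = {"cappuccino", "latte"}
    any_latte = any(r["brew_type"] in latte_like for r in recalls)
    only_latte = all(r["brew_type"] in latte_like for r in recalls)

    labels = set()
    if not any(r["sweetener_added"] for r in recalls) and not any_latte:
        labels.add("unsweetened")
    if not any(r["milk_added"] for r in recalls) and not any_latte:
        labels.add("no_milk")
    if {"unsweetened", "no_milk"} <= labels:
        labels.add("black")          # black = unsweetened AND no-milk
    if all(r["sweetener_added"] for r in recalls) or only_latte:
        labels.add("sweetened")
    if all(r["milk_added"] for r in recalls) or only_latte:
        labels.add("milk")
    return labels
```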
Coffee and tea preferences
In 2019, UKB participants with valid emails were invited to complete a food preferences questionnaire (see Supplementary Methods S2 for details). The questionnaire included 140 items which comprise food items that reflect both sensory preferences and foodstuff preferences. Liking was measured using a 9-point hedonic scale, which has good statistical properties, good discrimination between points, and linearity between each point on the scale 33. Questionnaire items were randomized on a participant basis to reduce any bias that may occur due to tiredness. Participants were asked to rate how much they liked each presented item on a scale from 1 (extremely dislike) to 9 (extremely like). Alternatively, they were given the option to select "Never tried" or "Prefer not to answer". The current study focused on the following questionnaire items: liking for coffee with sugar, liking for coffee without sugar, liking for tea with sugar, and liking for tea without sugar. We also considered other bitter- and sweet-related items: liking for bitter foods, liking for dark chocolate, liking for Brussels sprouts, liking for cabbage, liking for bitter/ale and liking for sweet foods. In preliminary analysis, (1) liking for bitter vegetables was only weakly correlated with intake of bitter vegetables (r < 0.16) and liking for bitter foods (r < 0.10), and (2) liking for or intake of bitter vegetables was only weakly associated with coffee and tea traits (r < 0.15). Therefore, we did not pursue genetic analysis of bitter vegetable traits, as the results would be unlikely to be relevant for coffee and tea traits. After excluding 31 participants with liking score ranges of less than 4 across all 140 food items (an indicator of scale bias), up to 181,974 participants had data on the food items of interest for the current analysis.

Genetic data
All UKB participants were genotyped using genome-wide arrays as detailed previously 34,35. QC and imputation to the HRC v1.1 and UK10K reference panels were performed by the Wellcome Trust Centre for Human Genetics 35. We excluded sample outliers based on heterozygosity and missingness, participants with discrepancies between self-reported sex and X-chromosome heterozygosity, and participants potentially related to other participants, based on estimated kinship coefficients for all pairs of samples. To avoid bias due to population stratification, the genetic analyses performed in the current study were limited to unrelated individuals who self-reported as "British" and who had very similar ancestral backgrounds based on the results of principal component (PC) analysis 35. Of this UKB genetic sample, up to 126,599 individuals completed coffee- or tea-related liking scales and up to 86,006 participants had detailed coffee or tea consumption data based on 24-h recalls. Up to 61,955 participants had both liking and dietary intake data.

Other covariates
Self-reported smoking status, physical activity, Townsend deprivation index, education, income, and employment status, as well as technician-measured body weight and height, were collected during the UKB baseline assessment as described in detail previously 29,36.

US cohorts
In 1986, the HPFS enrolled 51,529 U.S. male health professionals aged 40–75 years 37. In 1976, the NHS enrolled 121,700 U.S. female registered nurses aged 30–55 years 38. Participants completed a mailed questionnaire on medical history and lifestyle characteristics every 2 years and a validated semi-quantitative food frequency questionnaire (FFQ) every 2–4 years 39. All participants provided informed consent, and study protocols were approved by the institutional review boards of Brigham and Women's Hospital and the Harvard School of Public Health.

Coffee and tea consumption
We considered diet data collected by the FFQ administered closest to and before the 2018 supplementary questionnaire described below: for NHS this was the 2010 FFQ and for HPFS the 2014 FFQ. For each FFQ item, participants were asked how often, on average, they had consumed a specified amount of each beverage or food over the past year. Participants could choose from nine frequency categories (never, 1–3 per month, 1 per week, 2–4 per week, 5–6 per week, 1 per day, 2–3 per day, 4–5 per day, and 6 or more per day). Categorical measures of intake were converted to servings or cups/day by using the midpoint of each category; those reporting 6 or more servings or cups/day were assigned a daily intake of 6 (see the sketch below). The current analysis focused on coffee (regular or decaf) and tea (regular or decaf, not herbal) but also considered beer and dark chocolate, as done for UKB. Total dietary energy intake and macronutrient composition of the diet were also derived from FFQs.
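To make the midpoint conversion concrete, here is a small Python sketch. The exact numeric midpoints the cohorts used are not stated in the text, so the values below (e.g., 2/30 servings/day for "1–3 per month") are plausible assumptions rather than the study's actual mapping.

```python
# Illustrative midpoint conversion for the nine FFQ frequency categories.
FFQ_SERVINGS_PER_DAY = {
    "never":         0.0,
    "1-3 per month": 2.0 / 30.0,  # midpoint 2/month (assumed)
    "1 per week":    1.0 / 7.0,
    "2-4 per week":  3.0 / 7.0,   # midpoint 3/week (assumed)
    "5-6 per week":  5.5 / 7.0,
    "1 per day":     1.0,
    "2-3 per day":   2.5,
    "4-5 per day":   4.5,
    "6+ per day":    6.0,         # top category truncated to 6, as in the text
}

def ffq_servings_per_day(category):
    """Convert one FFQ frequency category to servings or cups per day."""
    return FFQ_SERVINGS_PER_DAY[category]
```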
Coffee and tea drinking behaviors
In 2018, more detailed questions regarding coffee and tea drinking behavior were included on a supplementary questionnaire mailed to NHS and HPFS participants previously selected for GWAS who had not completed a supplementary questionnaire in 2010 40. Participants were asked "How do you usually drink your coffee or tea?" and, for each beverage, could mark all response items as appropriate: "I do not drink this beverage", "Black (nothing added)", "Milk or cream", "Non-dairy creamer/whitener", "Sweetener (e.g., sugar, honey, syrup)", "Non-caloric sweetener (e.g., Splenda, Equal, stevia)". Participants were also asked "Do you avoid or drink less coffee because it tastes bitter?" (yes or no) and "Do you avoid or drink less tea because it tastes bitter?" (yes or no). After one mailing, 5173 NHS (80% response) and 2940 HPFS (70%) participants returned the questionnaire.

Genetic data
Genetic data contributing to the current study were obtained from independent GWAS case–control studies nested within the cohorts, initially designed for outcomes of type 2 diabetes, coronary heart disease, gout, kidney stones, open-angle glaucoma, venous thromboembolism, prostate cancer (HPFS only), pancreatic cancer, colon cancer, mammographic density (NHS only), endometrial cancer (NHS only), ovarian cancer (NHS only) and breast cancer (NHS only). Studies were genotyped on Affymetrix, Illumina, Omni, OncoArray or HumanCoreExome platforms 41. To allow for maximum efficiency and power, we pooled HPFS and NHS samples genotyped on the same platforms, and for each of the resulting datasets we imputed SNPs based on the 1000 Genomes (version 1.1 2016) cosmopolitan reference panel. Detailed methods and quality assurance pertaining to these genetic datasets have been reported elsewhere 41. Any samples that had substantial genetic similarity to non-European reference samples were excluded from genetic analysis. Of the participants with high-quality genetic data, 4295 NHS and 2447 HPFS participants had survey data on coffee or tea behaviors. Because these data were used in conjunction with FFQ data to derive the quantity of each specific type of coffee consumed, aligned with that derived for UKB (see above for definitions), we excluded 286 HPFS and 175 NHS participants who did not complete an FFQ. We also excluded coffee data from 130 HPFS and 246 NHS participants who changed coffee drinking status (yes/no) between the FFQ and the survey. The same approach was taken for tea data, resulting in the exclusion of 497 HPFS and 836 NHS participants. In total, up to 2123 HPFS and 4064 NHS participants with genetic and coffee or tea data were included in the current analysis.

Other covariates
Smoking status, physical activity, height, and body weight of participants were self-reported and obtained from questionnaires administered to the entire NHS (2014) and HPFS (2016) cohorts preceding the 2018 supplementary questionnaire.

Candidate SNP selection
We first selected GWAS-confirmed SNPs for taste perception of PROP/PTC, caffeine, and quinine 22,23,24,25 (Supplementary Table S1), herein referred to as "Taste-loci". Genetically inferred PROP-taster status was defined using rs1726866, rs713598 and rs10246939: AVI/AVI and PAV/PAV were coded as 0 (non-taster) and 2 (super-taster), respectively, while all other haplotype combinations were coded as 1 (medium taster); a sketch of this coding is given below. We next selected GWAS-confirmed SNPs for liking or consumption of coffee and tea 11,12,13,14,15,16,17,18,19,20, herein referred to as "Behavior-loci". Because ADORA2A variants were among this list, we additionally included ADORA2A rs5751876, an often-cited variant associated with caffeine consumption and caffeine-induced anxiety and wakefulness 42,43,44,45. We also included a CYP2A6 variant associated with paraxanthine/caffeine plasma levels in GWAS and also with coffee consumption in UKB 46.
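The PROP-taster coding just described can be written as a few lines of Python. This sketch assumes the three TAS2R38 SNPs have already been phased into the amino-acid haplotypes PAV (taster) and AVI (non-taster); rarer haplotypes fall into the medium-taster category, as in the text.

```python
def prop_taster_status(haplotype_1, haplotype_2):
    """0 = non-taster (AVI/AVI), 2 = super-taster (PAV/PAV), 1 = otherwise."""
    pair = {haplotype_1, haplotype_2}
    if pair == {"AVI"}:
        return 0
    if pair == {"PAV"}:
        return 2
    return 1  # medium taster, including the rarer haplotypes

print(prop_taster_status("PAV", "AVI"))  # -> 1 (medium taster)
```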
Statistical analysis
All statistical analyses were performed using the SAS statistical package (version 9.1 for UNIX; SAS Institute, Cary, NC) unless indicated otherwise. Potential bias in the use of the 9-point liking scale (unrelated to the content of the items) was evaluated by comparing mean scores of all 140 food items by age (at or above/below the median of 67 years), sex, smoking and BMI (at or above/below 25 kg/m²) using ANOVA. Significantly higher mean food liking scores were reported by participants who were younger, male, and had lower BMI compared with their respective counterparts (P < 0.0001). Because the mean score differences were small, ranging from 0.03 (age) to 0.19 (sex), we only adjusted for these factors in our primary analyses, as opposed to performing stratified analyses. The distributions of all coffee- and tea-related traits of interest were highly skewed, and thus non-parametric tests were applied. Bivariate Spearman correlations were used to evaluate correlations among traits. For UKB, multivariable-adjusted generalized linear modelling (GLM) was used to examine the association between each SNP (independent variable) and each continuous coffee/tea trait (dependent variable), adjusting for age, sex, smoking status, genotyping array and the top 20 PCs. Additional adjustments for BMI, physical activity, education level, Townsend index of socio-economic status, employment status, self-reported diabetes and heart disease, and intake of total energy, alcohol, and other macronutrients (expressed as a proportion of energy) did not substantially change the results, and thus we present results for the more basic model only. For NHS and HPFS, GLM was also used to examine the association between each SNP and each continuous coffee/tea trait, adjusting for age, smoking status, genotyping array and GWAS-specific case–control status. Results for NHS and HPFS were meta-analyzed with fixed effects using METAL 47. We applied the same statistical models defined above to the analysis of coffee (tea) avoiders due to bitter taste (yes vs no) using logistic regression. Given the highly correlated coffee and tea traits, as well as the confirmatory and hypothesis-testing nature of the current analysis, statistical significance was defined as P < 0.002 (0.05/23 SNPs) for both the UKB and US (NHS/HPFS) cohorts, correcting only for the number of independent SNPs tested. Differences between beta-coefficients (i.e., β SNP-total coffee vs β SNP-black coffee, or β SNP-black coffee vs β SNP-sweetened coffee) in UKB were declared significantly different when their corresponding 95% confidence intervals did not overlap, an approach considered highly conservative 48. Nevertheless, emphasis is placed on results consistent across the UK and US cohorts. We performed GWAS of continuous coffee and tea traits in UKB (excluding total coffee/tea). The rank-based inverse normal (Blom) transformation was applied to each trait prior to GWAS, and we excluded SNPs with MAF < 0.05 and INFO scores < 0.4. We performed genome-wide linear regressions using PLINK2, assuming an additive genetic model and adjusting for age, sex, smoking status and the top 20 PCs. FUMA was used for displaying, pruning and annotating the UKB summary-level results 49. Genome-wide significant (P < 5 × 10⁻⁸) SNP-trait associations were followed up in NHS and HPFS when possible, using the statistical models described above for the candidate-SNP analysis. A sketch of the trait transformation, fixed-effects meta-analysis and confidence-interval comparison is given below.
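The following Python sketch illustrates three of the steps above: the rank-based inverse normal (Blom) transformation, an inverse-variance-weighted fixed-effects meta-analysis (the scheme METAL implements when run with standard-error weighting), and the conservative confidence-interval-overlap comparison of two betas. It illustrates the underlying formulas and is not the study's actual SAS/PLINK2/METAL code.

```python
import numpy as np
from scipy.stats import norm, rankdata

def blom_inverse_normal(x):
    """Rank-based inverse normal (Blom) transform applied before GWAS:
    z_i = Phi^-1((rank_i - 3/8) / (n + 1/4))."""
    x = np.asarray(x, dtype=float)
    return norm.ppf((rankdata(x) - 3.0 / 8.0) / (len(x) + 0.25))

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis of per-cohort
    estimates (e.g., NHS and HPFS); returns the pooled beta, SE and P."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    beta = np.sum(w * np.asarray(betas, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    p = 2.0 * norm.sf(abs(beta / se))
    return beta, se, p

def betas_differ(beta1, se1, beta2, se2):
    """Conservative criterion used above: declare two betas different only
    when their 95% confidence intervals do not overlap."""
    lo1, hi1 = beta1 - 1.96 * se1, beta1 + 1.96 * se1
    lo2, hi2 = beta2 - 1.96 * se2, beta2 + 1.96 * se2
    return hi1 < lo2 or hi2 < lo1
```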
Results

Participant characteristics
Table 1 presents the characteristics of UKB, NHS and HPFS participants by coffee drinking status; 82, 86, and 85% of the cohort participants, respectively, were coffee drinkers (consuming more than 0 cups/day). Across cohorts, coffee drinkers were 1–2 years older and more likely to be male and to consume more alcohol and beer than non-coffee drinkers. In the UKB and NHS, coffee drinkers were also more likely to be current smokers. HPFS non-coffee drinkers were more likely to be current smokers. Supplementary Table S2 presents the corresponding data by tea drinking status; 87, 82 and 65% of UKB, NHS, and HPFS participants, respectively, were tea drinkers (consuming more than 0 cups/day). Across cohorts, tea drinkers were more likely to be female and non-smokers and to consume less alcohol and beer. In the UKB, tea drinkers were also less likely to be overweight.

Table 1 Characteristics by coffee drinking status.

The consumption of coffee and tea prepared black was more common in the US cohorts than in the UKB, because US participants were less likely to add milk to these beverages than UKB participants (Table 1, Supplementary Table S2). Correlations among coffee, tea and other dietary behavior traits for UKB are presented in Fig. 1 (details in Supplementary Table S3). Corresponding correlations for NHS and HPFS are presented in Supplementary Tables S4 and S5, respectively. In UKB, coffee and tea liking traits were generally moderately (r = 0.5–0.7) correlated with the respective intake trait (e.g., liking coffee with sugar correlated with sweetened coffee intake). Total coffee and total tea intake were each more strongly associated with liking of coffee and tea without sugar (r > 0.3) than with liking of coffee and tea with sugar (|r| < 0.01). In the UKB, liking bitter foods was more strongly correlated with liking coffee without sugar (r = 0.17) and with intakes of black (r = 0.10) or unsweetened (r = 0.11) coffee than with liking coffee with sugar (r = −0.07) and intakes of sweetened (r = −0.10) or milk (r = −0.04) coffee. Liking bitter foods was also positively correlated with liking dark chocolate (r = 0.16) and negatively correlated with sweetened tea intake (r = −0.10).

Figure 1 UKB trait Spearman correlations.

The candidate "Taste" and "Behavior" loci selected for the current analysis are presented in Supplementary Table S1 and, with the exception of rs5751876 (ADORA2A), are previously reported GWAS-confirmed loci for the indicated traits. In the following, we present associations between our novel consumption/liking traits and (i) GWAS-confirmed loci for bitter taste perception and (ii) GWAS-confirmed loci for coffee/tea consumption behavior. We annotate statistically significant (P < 0.002) associations in the tables and note nominally significant (0.002 < P < 0.05) associations below. We then present results from GWAS of our novel consumption/liking traits in UKB, applying the traditional genome-wide significance threshold (P < 5 × 10⁻⁸), along with replication in NHS/HPFS.

GWAS-confirmed loci for bitter taste perception
Associations between genetic variants related to taste perception and the liking and intake of different types of coffee, tea, and other bitter foods in UKB are shown in Table 2 (details in Supplementary Table S6). Related results for the US cohorts, NHS and HPFS, were pooled and are shown in Supplementary Table S7. The quinine-taste-sensitive variant near TAS2R19 (rs10772420 A) was significantly (P < 0.002) inversely associated with liking coffee with sugar (β = −0.04) in UKB. Similarly, the variant was nominally (0.002 < P < 0.05) associated with less liking of tea with sugar (β = −0.03) and greater liking of tea without sugar (β = 0.03).
In UKB, the variant was nominally associated with lower coffee intake (β = −0.02), regardless of the type of coffee. In the US cohorts, the variant was nominally associated with higher intake of black (β = 0.12), unsweetened (β = 0.07), and no-milk (β = 0.10) coffee, but not substantially with total or other types of coffee. Although the TAS2R19 variant was significantly associated with less liking of dark chocolate (β = −0.03) in UKB, it was not associated with intake of tea, beer, or dark chocolate in any of the cohorts.

Table 2 Bitter taste perception loci associations with coffee and other traits in UKB.

The PROP-taste-sensitive TAS2R38 variants were also inversely associated with liking dark chocolate (β = −0.03, P < 0.002), but were not associated with liking coffee, tea, or bitter foods. However, the same variants were significantly associated with lower coffee intake (β = −0.02) and higher tea intake (β = 0.03). The effect estimates were greatest for sweetened tea (β = 0.06) but not significantly different from those for the other tea types. Results in the US cohorts were directionally consistent for coffee and tea intake but were not significant. The caffeine-taste-sensitive TAS2R14 alleles were nominally associated with higher liking for coffee and higher intake of caffeinated coffee, regardless of its preparation type. In the US cohorts this variant was nominally associated with higher intake of dark chocolate (β = 0.01, P < 0.05) but not with intake of coffee or tea.

GWAS-confirmed loci for coffee/tea consumption behavior
Variants near TMEM18, GCKR, POR, ADORA2A (rs2330783), CYP1A2 (rs2472297, rs762551), AHR, CYP2A6, SEC16B, OR5M7P, ENSA, and MLXIPL were significantly associated with total coffee intake (P < 0.002, Table 3; details in Supplementary Table S8), consistent with previous GWAS (Supplementary Table S1). In addition, variants near ABCG2, MC4R and AKAP6 were nominally associated with total coffee intake (0.002 < P < 0.05). Variants near AHR (rs4410790 C), CYP1A2 (rs2472297 C), and OR5M7P (rs597045 A) were more strongly associated with intake of caffeinated than of decaffeinated coffee. Variants near AHR (rs4410790 C), CYP1A2 (rs2472297 C), ABCG2 (rs1481012 A), ADORA2A (rs2330783 G), CYP2A6 (rs56113850 C), MC4R (rs66723169 A), SEC16B (rs574367 T), POR (rs17685 A) and TMEM18 (rs10865548 G) were exclusively or more strongly associated with liking coffee without sugar (P < 0.002) than with liking coffee with sugar, but their association with higher coffee intake did not vary substantially by preparation type. Variants near ALDH2, EFCAB, FIBIN, NRCAM, PDSS2, and BDNF were not associated with coffee consumption in the current UKB sample, regardless of how the coffee was usually prepared.

Table 3 Coffee consumption behavior loci associations with coffee and other traits in UKB.

Variants near AHR (rs4410790 C) and CYP1A2 (rs2472297 C) were also significantly associated with higher total tea intake (P < 0.002). This association was stronger for tea with milk than for tea prepared black, and stronger for caffeinated than for decaffeinated tea. TMEM18 (rs10865548 G) was significantly associated with lower tea intake, and ADORA2A (rs5751876 C) was significantly associated with higher tea intake, regardless of how it was prepared. ABCG2 (rs1481012 A) was significantly associated with liking tea without sugar (β = 0.06) but not tea with sugar, and was associated with higher tea intake regardless of the type.
MC4R (rs66723169 A) and SEC16B (rs574367 T) were significantly associated with less liking of tea with sugar (β = −0.04), but neither was significantly associated with tea intake. Variants near ADORA2A (rs2330783 G), AHR (rs4410790 C), CYP1A2 (rs2472297 C), CYP2A6 (rs56113850 C), and POR (rs17685 A) associated with higher coffee intake in published GWAS were also significantly (P < 0.002) associated with greater liking of dark chocolate, with effect estimates similar to those for liking coffee without sugar. The ADORA2A rs5751876 C variant previously associated with caffeine-induced anxiety and wake promotion was nominally inversely associated with all coffee-liking traits (β ≈ −0.02, 0.002 < P < 0.05) and more strongly and significantly with less liking of dark chocolate (β = −0.11, P < 0.0001). The same pattern of results was observed for the correlated ADORA2A rs5760444 C variant (r² = 0.90, EUR) previously linked to coffee intake in a GWAS of Asians 17. ADORA2A (rs2330783 G) was also significantly associated with higher dark chocolate intake and lower beer intake. No other behavior loci were associated with dark chocolate intake in UKB. In a post-hoc analysis we examined SNP associations with milk chocolate in UKB and observed statistically significant associations of ADORA2A (rs2330783 G), AHR, CYP1A2, and POR with liking milk chocolate, but not with milk chocolate intake. The directions of the associations with liking milk chocolate were opposite to those reported for liking dark chocolate. None of the loci was associated with liking of bitter foods. Few associations of the evaluated genetic variants with coffee intake met statistical significance (P < 0.002) in the US cohorts (Supplementary Table S9). Variants near AHR (rs4410790 C) and CYP1A2 (rs2472297 T) were significantly associated with higher total coffee intake, and effect sizes tended to be larger for black, unsweetened, or no-milk coffee than for coffee with added milk or sugar. CYP1A2 (rs2472297 T) was significantly associated with caffeinated but not decaffeinated coffee. A similar but smaller difference in association between caffeinated and decaffeinated coffee was observed for the AHR variant. The odds ratio (95% CI) of reporting 'avoiding coffee because of its bitterness' for each additional allele was 0.83 (0.66, 0.99) for CYP2A6 (rs56113850 C), 0.64 (0.25, 1.03) for ADORA2A (rs2330783 G), and 1.13 (1.01, 1.25) for ADORA2A (rs5751876 C).

Genome-wide analysis
Table 4 presents novel genome-wide significant (P < 5 × 10⁻⁸) loci based on a GWAS leveraging the new and more refined coffee and tea phenotypes in UKB (Supplementary Fig. S1; details in Supplementary Tables S10 and S11). Low (λ = 1.04) to moderate (λ = 1.15) genomic inflation was observed across traits.

Table 4 Genome-wide associations of coffee and tea traits in UKB.

Only the associations of new variants at 22q11.23 (near ADORA2A, Supplementary Fig. S2) and 12p13 (TAS2R locus, Supplementary Fig. S3) were genome-wide significant in UKB and also nominally (0.01 < P < 0.04) associated with coffee and tea traits in the US cohorts (Supplementary Table S12). ADORA2A (rs3788372 G) was associated with lower unsweetened tea intake in the UKB (P = 2.3 × 10⁻⁸) and US cohorts (P = 0.02). TAS2R (rs2418224 G) was associated with greater liking of tea with sugar (P = 1.6 × 10⁻⁸) and coffee with sugar (P = 4.3 × 10⁻⁶) and with less liking of tea without sugar (P = 0.0003) and coffee without sugar (P = 0.001) in the UKB.
The same variant was nominally associated with liking sweet foods (β = 0.02, P = 0.02) but not with liking bitter foods, dark chocolate or beer (results not shown). While not associated with consumption behavior in UKB, this TAS2R variant was associated with lower intakes of black (β = −0.11, P = 0.02), unsweetened (β = −0.09, P = 0.03), and no-milk (β = −0.10, P = 0.02) coffee in the US cohorts.

Discussion

The current study aimed to gain causal insight into the role that taste plays in coffee drinking behavior. Genetically inferred caffeine and bitter taste perception contributed to coffee drinking behavior but, contrary to our hypothesis, to a weaker extent than genetically inferred caffeine sensitivity. Specifically, a greater preference for caffeine, inferred from genetic differences in the physiological effects of caffeine, leads to a stronger preference for the taste/smell of coffee, as inferred from liking scales and reported intake. Similar findings were observed for tea and also for dark chocolate. We examined genetic variants that have previously been associated with bitter taste perception in relation to coffee- and tea-related traits. In previous GWAS, variants in the TAS2R gene cluster have been associated with caffeine (rs2708377 C) and quinine (rs10772420 A) perception, explaining about 2% and 6% of the phenotypic variance, respectively 23,25. The quinine-taste-sensitive variant (rs10772420 A) has also been associated with caffeine perception, but the effect is weaker and in the opposite direction to that for quinine 25. We previously examined taste-related variants in relation to total coffee and tea intake in UKB. In that analysis, the quinine-taste-sensitive TAS2R variant was associated with lower coffee intake and higher tea intake, whereas the caffeine-taste-sensitive TAS2R variant was associated with higher coffee intake and lower tea intake 19,21. We now extend that genetic research by using data on food liking and intake of different types of coffee and tea available for a subset of UKB participants. We show that the caffeine-taste-sensitive variant (rs2708377 C) is associated with consumption of coffee regardless of how it is prepared, but tends to be more strongly associated with caffeinated than with decaffeinated coffee. This result reiterates that caffeine-learned behavior (i.e., experience with its post-ingestive effects) may explain the preference for this naturally bitter-tasting chemical 21,51,52. Coffee, tea, and chocolate do not contain quinine. We observed significant to nominal associations between several of our novel traits and quinine-taste variation (rs10772420), but in directions that are difficult to interpret. The opposing effects of this variant on quinine and caffeine perception 25, or complex bitter-sweet taste interactions 53, may underlie this unusual pattern of associations, but this merits further study. The well-studied PROP-sensitive TAS2R38 variants were associated with lower coffee and higher tea intake, as we reported previously 19,21. The direct association between these PROP-sensitive variants and tea consumption was especially strong for sweetened tea, suggesting a bitterness threshold at which tea becomes unpleasant tasting for individuals with these variants. The specificity of these associations with coffee/tea intake (vs. coffee/tea liking), which was not observed for rs2708377 and rs10772420, may be due to chance or perhaps to a latent trait distinguishing intake from hedonic traits 54,55.
Our GWAS identified an additional, independent and novel TAS2R variant in the 12p13.2 region (rs2418224 G) that was associated with greater liking of tea with sugar. The nominally significant and opposite directions of effect on liking the sweetened and non-sweetened versions of the beverages, the lack of association with other bitter taste traits, and the association with liking sweet foods together suggest an effect of this SNP on general sweet perception or preference. Our GWAS analysis in UKB also pointed to FTO, a well-established obesity locus. An FTO variant (rs1421085 C), previously associated with higher BMI, was associated with higher unsweetened coffee intake and liking, and with less liking of sweetened coffee and tea. In our previous GWAS of bitter and sweet beverages, the same FTO variant was associated with higher coffee intake in the UKB, but this result was not replicated in the US cohorts 19. However, the same variant was associated with lower sugar-sweetened beverage (SSB) intake in both the UKB and US cohorts. A recent GWAS in UKB also observed associations between FTO variants and total sugar intake 56. Taken together, our current study findings are likely attributable to sweet taste as opposed to coffee or bitter taste, and the inability to replicate the UKB FTO-coffee associations in the US cohorts may be due to population differences in the social or food environment. Variants in MC4R, SEC16B and TMEM18 that were associated with coffee consumption in previous GWAS are also GWAS-confirmed obesity loci. In the current study, the coffee-increasing variants (also the obesity-increasing variants) were inversely associated with liking sweetened coffee and tea. None of these loci was associated with SSB intake or sweet taste perception in GWAS 19,56. Most SNPs identified in previous GWAS of coffee consumption were replicated in the UKB subsample used in the current study. Replicated SNPs were generally more strongly associated with liking coffee without sugar than with liking coffee with sugar, but their association with increased coffee intake did not vary substantially by preparation type. The stronger association with liking coffee without sugar was unexpected for loci with known roles in caffeine metabolism or physiological effects, but not in taste perception, such as AHR, CYP1A2, POR, CYP2A6 and ADORA2A. Again, these findings suggest that taste/smell and caffeine effects are not as distinct as expected. Individuals consuming more coffee because of a genetic predisposition to increased caffeine metabolism or tolerance may learn to associate the bitter taste of coffee/caffeine with the favorable physiological effects of caffeine. Our genetic findings align with the results of a small clinical study by Masi et al. 57. Individuals with a higher caffeine metabolism rate (determined by the change in salivary caffeine concentrations following intake of caffeine) gave lower bitterness ratings for espresso coffee samples and for caffeine, but not quinine, solutions, and added less sugar to coffee 57. SNPs in AHR and CYP1A2 are the strongest and most robust signals in GWAS of coffee and caffeine intake 11,12,13,14,15,16,17,18,19,20. Consistent with the role of the enzymes encoded by these genes in caffeine metabolism 58, we observed stronger associations of these variants with caffeinated than with decaffeinated coffee in all cohorts included in the current study. We also observed that OR5M7P (rs597045 A) was more strongly associated with caffeinated than decaffeinated coffee in UKB.
This variant is not associated with caffeine perception in GWAS 23,25. OR5M7P is a pseudogene upstream of OR5M8, one of many genes encoding specific olfactory receptors, which function in the perception of smell. Smell and taste are highly related, but why this variant affects caffeine- and not taste-related traits is unclear; it may be another case in point of conditioned taste preferences. We investigated three SNPs in ADORA2A, encoding the adenosine 2A receptor, a target of caffeine that mediates the psychostimulant effect of the drug 58. As such, we did not expect SNPs in ADORA2A to be differentially associated with coffee and tea traits defined independent of caffeine content. ADORA2A (rs5751876 C) is thought to increase sensitivity to caffeine, as it has been associated with greater caffeine-induced anxiety and alertness and with lower caffeine intake in several candidate gene studies 45,58,59. In the current study, this variant was associated with less liking of coffee (with or without sugar), and while it was not associated with coffee consumption, it was associated with higher tea intake in the UKB. Taken together, these results suggest that individuals with ADORA2A (rs5751876 C) avoid heavy caffeine intake. Since coffee contains twice as much caffeine as tea, these individuals may prefer tea over coffee. In the US cohorts, rs5751876 C carriers were more likely to avoid coffee because it tastes bitter; a finding that further illustrates how some individuals are unable to separate the physiological effects of caffeine from taste preferences. ADORA2A (rs2330783 G) was associated with liking coffee without sugar and with higher coffee intake, without differences in association by type of coffee. In the US cohorts, rs2330783 G carriers were less likely to avoid coffee because of its bitter taste. Our GWAS of unsweetened tea in UKB identified a variant near ADORA2A (rs3788372), not in LD with those described above, that was also associated with unsweetened tea in the US cohorts. Unlike AHR and CYP1A2, none of the SNPs in ADORA2A was differentially associated with caffeinated and decaffeinated coffee or tea. To our knowledge, adenosine receptors do not play a role in taste perception. Whether our results for ADORA2A rs2330783 and rs3788372 are mediated by taste or caffeine is unclear and warrants further investigation. ADORA2A, CYP1A2, CYP2A6, AHR and, to a weaker extent, POR variants associated with higher coffee intake (US cohorts) and liking (UKB) were also associated with increased dark chocolate intake (US cohorts) and liking (UKB). No association, or associations in the opposite direction, were observed with liking and intake of milk chocolate and other bitter foods, suggesting that caffeine (its psychostimulant effects, its taste, or both) may underlie the observed associations with dark chocolate. Dark chocolate contains more caffeine per unit weight than milk chocolate 60, and while the amount is still less than the content of coffee and tea, it may be detected by individuals sensitive to caffeine. Dark chocolate is also a unique source of theobromine, another methylxanthine with psychostimulant effects 60,61,62,63. Strengths of the current study include the use of novel and comprehensive coffee and tea traits in independent cohorts. Nevertheless, several limitations need to be considered. The liking and dietary intake measures used are subject to bias and measurement error, as discussed in detail previously 64,65,66.
Specifically, the 24-h diet collections in UKB may not reflect usual intake, and the FFQs used in the US cohorts may be prone to reporting errors. In addition, liking traits were only available for UKB and not for the US cohorts. The US cohorts included a non-representative group of elderly participants, whose senses of taste and smell may be reduced and whose beverage choices are more likely to be affected by medical issues. Several genetic variants that were associated with coffee or caffeine intake in previous GWAS were not replicated in the current study. This probably reflects the smaller sample size of the sub-cohorts with more detailed information used in the current analysis. In addition, the association between ALDH2 variation and coffee consumption has only been reported in Japanese populations 14, and given the low frequency of these variants in populations of European ancestry, the lack of replication in the current study was expected. Finally, there are currently no published GWAS SNPs for perception of other bitter compounds in coffee, such as Maillard reaction products, cafestol, chlorogenic acid derivatives or other uncharacterized coffee-bitter compounds 67,68,69, thus limiting our candidate-SNP approach. Genetic markers of coffee and caffeine consumption are increasingly used as instrumental variables to seek causal insight into the relationship between coffee/caffeine and health 70. Whether a genetic instrument captures total coffee/caffeine intake, only certain types of coffee, or not only coffee but a broader characteristic impacts the interpretation and translation of such studies. For example, evidence for a causal relationship between black coffee and type 2 diabetes is very different from evidence for a causal relationship between coffee and type 2 diabetes. A cautionary approach to genetic instrumental variable studies is particularly relevant now that weaker, non-genome-wide-significant variants are included in such studies. In summary, our genetic analysis suggests that the psychostimulant effects of caffeine outweigh the bitterness of caffeine. A greater preference for caffeine, based on genetic differences in the physiological effects of caffeine, leads to a stronger preference for the taste/smell of coffee and dark chocolate. Similarly, greater sensitivity to the adverse physiological effects of caffeine was associated with avoiding the taste of coffee. Taste preferences and physiological caffeine effects thus seem to become entangled in a way that is difficult for individuals to distinguish. These potential examples of conditioned taste preferences or aversions merit further clinical investigation. This apparent disruption of an innate aversion to bitter taste and its genetic correlation with coffee preferences has important relevance for food and beverage development as well as for genetic epidemiological studies of coffee.

Data availability
Data described in the manuscript are available to all researchers and can be accessed upon approval of the UK Biobank, HPFS and NHS boards.

Code availability
Statistical code used in this study is available upon reasonable request from the corresponding author.
People who like to drink their coffee black also prefer dark chocolate, a new Northwestern Medicine study found. The reason is in their genes.

Northwestern scientists have found that coffee drinkers who have a genetic variant reflecting a faster metabolism of caffeine prefer bitter, black coffee. And the same genetic variant is found in people who prefer the more bitter dark chocolate over the more mellow milk chocolate. The reason is not that they love the taste, but rather that they associate the bitter flavor with the boost in mental alertness they expect from caffeine.

"That is interesting because these gene variants are related to faster metabolism of caffeine and are not related to taste," said lead study author Marilyn Cornelis, associate professor of preventive medicine in nutrition. "These individuals metabolize caffeine faster, so the stimulating effects wear off faster as well. So, they need to drink more."

"Our interpretation is these people equate caffeine's natural bitterness with a psycho-stimulation effect," Cornelis said. "They learn to associate bitterness with caffeine and the boost they feel. We are seeing a learned effect. When they think of caffeine, they think of a bitter taste, so they enjoy dark coffee and, likewise, dark chocolate."

The paper was published Dec. 13 in Scientific Reports.

The dark chocolate connection also may be related to the fact that dark chocolate contains a small amount of caffeine but predominantly theobromine, a caffeine-related compound that is also a psychostimulant.

Why does this matter? Coffee and dark chocolate consumption have been shown to lower the risk of certain diseases. Moderate coffee consumption lowers the risk of Parkinson's disease, cardiovascular diseases, type 2 diabetes and several types of cancer. Dark chocolate appears to lower the risk of heart disease. Currently, when scientists study the health benefits of coffee and dark chocolate, they must rely on epidemiological studies, which can only establish an association with health benefits rather than a stronger causal link.

Cornelis's new research shows these genetic variants can be used more precisely to study the relationship between coffee and health benefits. Previously, scientists were using the genetic markers for coffee drinkers in general. The new findings suggest they are stronger markers for particular types of coffee drinkers: black coffee drinkers. This impacts the interpretation of genetic studies of coffee and health.

"Drinking black coffee versus coffee with cream and sugar is very different for your health," Cornelis said. "The person who wants black coffee is different from a person who wants coffee with cream and sugar. Based on our findings, the person who drinks black coffee also prefers other bitter foods like dark chocolate. So, we are drilling down into a more precise way to measure the actual health benefits of this beverage and other food."

The benefits of black coffee are based on moderate consumption of two to three cups a day, Cornelis said.

The current study used genetic, dietary and food preference data available from the UK Biobank and two U.S. cohorts, the Nurses' Health Study and Health Professionals Follow-up Study.

The paper is titled "Genetic determinants of liking and intake of coffee and other bitter foods and beverages."
10.1038/s41598-021-03153-7
Biology
Researchers develop dyes for 'live' extremophile labeling
Ivan Maslov et al, Efficient non-cytotoxic fluorescent staining of halophiles, Scientific Reports (2018). DOI: 10.1038/s41598-018-20839-7 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-20839-7
https://phys.org/news/2018-03-dyes-extremophile.html
Abstract

Research on halophilic microorganisms is important due to its relation to fundamental questions of the survival of living organisms in a hostile environment. Here we introduce a novel method to stain halophiles with MitoTracker fluorescent dyes in their growth medium. The method is based on membrane-potential-sensitive dyes, which were originally used to label mitochondria in eukaryotic cells. We demonstrate that these fluorescent dyes provide high staining efficiency and are beneficial for multi-staining purposes due to the spectral range covered (from orange to deep red). In contrast to other fluorescent dyes used so far, MitoTracker does not affect growth rate and remains in cells after several washing steps and several generations in cell culture. The suggested dyes were tested on three archaeal (Hbt. salinarum, Haloferax sp., Halorubrum sp.) and two bacterial (Salicola sp., Halomonas sp.) strains of halophilic microorganisms. The new staining approach provides new insights into the biology of Hbt. salinarum. We demonstrated the interconversion of rod-shaped cells of Hbt. salinarum to spheroplasts and submicron-sized spheres, as well as the cytoplasmic integrity of giant rod Hbt. salinarum cells. By expanding the variety of tools available for halophile detection, MitoTracker dyes overcome long-standing limitations in fluorescence microscopy studies of halophiles.

Introduction

Halophiles are organisms that can thrive in extreme conditions of high salt concentration. Most of these organisms belong to the archaeal or bacterial domains of life. Exposed to a hostile environment, halophiles evolved unique biomechanisms, which make them very exciting scientific objects. Probably the most important example is bacteriorhodopsin, a photoactivatable retinal membrane protein with unique properties, including stability at high concentrations of salt (up to 5 M), protease resistance, and tolerance to high temperatures (up to 140 °C when dry) 1 and to a broad range of pH (at least 3–10) 2. Bacteriorhodopsin has become the most studied membrane protein, often serving as a "model" for all other membrane proteins (see for instance 3,4,5,6,7,8). In recent decades, halophiles have attracted additional scientific interest due to their isolation from Earth's subsurface halites, in some cases >250 million years old 9,10,11,12,13,14,15,16,17,18,19,20. Buried in brine-filled fluid inclusions, halophiles are able to survive for millennia under conditions of extremely high ionic strength, elevated temperatures and nutrient depletion. Genome studies showed that spore formation is not possible for halophilic archaea 21; thus, their survival mechanisms remain unclear 22. Halites were discovered on the surface of Mars by the Opportunity and Spirit rovers and in meteorite samples 23,24,25,26,27,28,29. It has been shown that they can hypothetically play a protective role against radiation 30 or biochemical degradation 19 and potentially harbour halobacteria 31. Thus, sensitive detection of living halophiles in hypersaline medium with minimal influence on their viability is a question of great interest for the further exploration of microbial life on Earth and elsewhere. An additional interest associated with halophiles arises from their role in the microbial communities of natural and man-made hypersaline environments 32, especially in the case of hydrocarbonoclastic communities, which are potentially applicable for the bioremediation of oil pollution 33.
Fluorescent staining can provide a sensitive method for the detection of halophilic species (i.e., via flow cytometry) in environmental samples, and can also be used for their analysis via fluorescence microscopy. The main challenge in relation to the fluorescent staining of halophiles is the high ionic strength of the medium and, in some cases, the hostile pH 34 of their natural environment, which are inappropriate for conventional antibodies and dyes 35. The low permeability of the halobacterial S-layer to macromolecules can additionally complicate immunostaining. The list of staining approaches for halophiles is currently rather limited. Most of them require fixation (i.e., immunostaining 36,37, DAPI 38,39, and FISH 38,40) or the use of DNA-targeted intercalating dyes (i.e., the LIVE&DEAD kit 35,39,41,42,43,44,45, Acridine Orange 43,46, Hoechst 47). The latter approach is unsafe due to interference with the cellular DNA-processing machinery, which can result in complete loss of cell viability (for review see 48,49). It has been shown that Acridine Orange and its derivatives can additionally stain archaeal phospholipids 50. Several other approaches for staining halophiles that have been suggested recently 44,51 remain limited to particular applications. Genetic modification of archaea with GFP for fluorescent visualization 44,52 is also worth mentioning, although it is not straightforward for detection purposes. Notably, no dyes for long-term staining without cytotoxic side effects have been found; the field is dominated by DNA-staining dyes, which have cytotoxicity as an intrinsic limitation. Here we suggest a new method for staining living halobacterial cells with fluorescent dyes in their growth medium. We show that dyes designed to specifically stain mitochondria in mammalian eukaryotic cells (MitoTracker Orange CMTMRos, MitoTracker Red CMXRos and MitoTracker Deep Red FM) easily permeate the S-layer of Hbt. salinarum and produce stable, bright staining of the archaeon. Even without removal of the residual, non-incorporated MitoTracker dye, the signal-to-noise ratio (SNR) is sufficient for clear cell detection and fluorescence microscopy. To test our staining procedure we investigate the conversion of Hbt. salinarum cells into spheroplasts via exposure to an EDTA-containing solution. In the case of MitoTracker Orange CMTMRos, we also show that it does not affect the cells' growth rate. The staining remains bright during long observations (hours to days) and is inherited in cell division. The new staining method allowed us to detect the development of viable spheres of submicron size (herein called "microspheres" to highlight the difference between them and spheroplasts) from rod-shaped Hbt. salinarum after exposure to EDTA, which was previously assumed to result only in spheroplast formation. In addition, we demonstrate the cytosolic integrity of giant Hbt. salinarum cells via the fluorescence loss in photobleaching (FLIP) approach. The dyes were also successfully applied to four halophilic microorganisms isolated from environmental samples from hypersaline lakes, representing both the bacterial and archaeal domains of life. Therefore, we propose that MitoTracker dyes can be used as effective and non-harmful dyes for halophile staining.

Materials and Methods

Cell culture
The Hbt. salinarum strain S9 was used as a test halophilic microorganism.
The growth medium contained 250 g of NaCl, 20 g of MgSO4·7H2O, 3 g of Na3C6H5O7·2H2O, 2 g of KCl and 10 g of peptone (Helicon, Russia, H-1906-0.5) per litre, at pH 6.5. Cells were routinely cultured in 100 mL flasks at 37 °C with shaking (200 rpm).

Sample preparation and staining
MitoTracker Green FM (M7514), MitoTracker Orange CMTMRos (M7510), MitoTracker Red CMXRos (M7512) and MitoTracker Deep Red FM (M22426) (Thermo Fisher Scientific, USA) were dissolved in DMSO to obtain 1 mM stock solutions. The solutions were stored at −20 °C. To stain Hbt. salinarum (Fig. 1) or the species isolated from environmental samples, MitoTracker solution was added to the growth medium at a 1:1000 dilution (to a final concentration of 1 μM).

Figure 1 Living Hbt. salinarum cells in growth medium stained with MitoTracker Orange CMTMRos (A), MitoTracker Red CMXRos (B) and MitoTracker Deep Red FM (C).

Confocal fluorescence microscopy
Measurements were performed in thin-bottom 8-well chambered cover glasses (Thermo Fisher Scientific, USA, 155409), using an apochromat objective (420792-9800-720, Zeiss) with oil immersion (ISO 8036, Immersol 518F, Zeiss). MitoTracker Orange CMTMRos and Red CMXRos were excited with a 561 nm laser, and Deep Red FM with a 633 nm laser. Emission was recorded in λ-mode with a 34-channel QUASAR detector unit from Zeiss. All images were processed using Zen 2012 (Zeiss, Germany) and Fiji 53 software. Supplementary Video 2 was drift-corrected using the StackReg plugin for Fiji 54.

Growth rate measurements
Hbt. salinarum cells were diluted in growth medium to an optical density of approximately 0.05 at 600 nm (OD600nm), with growth medium used as the blank. 5 mL cell cultures were incubated in 15 mL flasks for a week at 37 °C with shaking (200 rpm). MitoTracker Orange CMTMRos was added twice a day to the first flask (sample), to keep the ratio between the number of bacteria and MitoTracker molecules approximately constant and to maintain conditions similar to those in the fluorescence microscopy experiments during the entire observation period, namely 1 μM of dye per OD600nm unit. No MitoTracker Orange CMTMRos was added to the second flask, so that it could serve as a control. The OD600nm was measured for eight days in each flask. Growth medium with and without MitoTracker was used as the reference for the sample and control, respectively. Aliquots used for the OD600nm determination were returned to the respective flask after each measurement. The growth curves in Fig. 2 represent the average of 3 independent samples.

Figure 2 Growth curves of stained and control cells of Hbt. salinarum. The cells were grown with (sample group; downward-pointing triangles) and without (control group; upward-pointing triangles) MitoTracker Orange CMTMRos.
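As an aside on how a growth rate like the one in Fig. 2 can be quantified, the short Python sketch below estimates a doubling time by fitting log2(OD600) against time. This is a generic illustration under the assumption of clean exponential growth over the fitted interval, not the analysis code used in the paper.

```python
import numpy as np

def doubling_time_days(times_days, od600):
    """Fit log2(OD600) vs time; the inverse slope is the doubling time,
    since log2(OD) gains exactly 1 unit per doubling."""
    slope, _intercept = np.polyfit(np.asarray(times_days, dtype=float),
                                   np.log2(np.asarray(od600, dtype=float)), 1)
    return 1.0 / slope

# Synthetic example: a culture that doubles every ~1.2 days.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
od = 0.05 * 2.0 ** (t / 1.2)
print(round(doubling_time_days(t, od), 2))  # -> 1.2
```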
MitoTracker durability during cultivation
Hbt. salinarum cells were stained with MitoTracker Orange CMTMRos and, after a 24-h incubation in the dye-containing medium (time point "a" in Fig. 3), the cells were sedimented via centrifugation (at 2000 g, 37 °C, for 5 min) and resuspended in MitoTracker-free growth medium to a final OD600nm of 0.2 ("b", Fig. 3). Cells were cultured at growth conditions for two days (to an OD600nm of 0.8; "d", Fig. 3) and then diluted again to an OD600nm of 0.2 (time point "e", Fig. 3). This way, the cells always remained in the exponential growth phase. Each day a sample of the cell culture was taken and a fluorescence image was acquired. The dilution-growth cycle was repeated three times. The plot of OD600nm as a function of time from the start of the experiment, as well as fluorescence microscopy images of the samples, are shown in Fig. 3. Images were analyzed to estimate the SNR. The SNR was calculated for 20 cells at each time point as the difference between the mean signal inside the cells and the mean signal outside the cells, divided by the standard deviation of the signal outside the cells (see Fig. 3B and the sketch below).

Figure 3 Durability of MitoTracker staining during several growth and dilution cycles. (A) Schematic plot of the OD600nm change during the experiment: 2-day exponential growth periods correspond to incubation time, while a 4-fold drop corresponds to the dilution. Particular time points are labelled with lower-case Latin letters (a–j). (B) SNR decrease during the experiment. (C) (a) Microphotograph of the cell culture on day one; (d) two days after staining; (g) four days after staining; and (j) six days after staining. Lower-case Latin letters in B and C correspond to the particular measurement times shown in (A). Halobacteria were stained with MitoTracker Orange CMTMRos. The dye was washed out via centrifugation after staining. Cell cultures were incubated in a dye-free growth medium for 6 days, with dilution every 2 days to maintain the active growth phase. The intensity of the excitation laser was the same for all images. The detector gain was equal for images d, g and j, but was reduced for image a to avoid saturation. The variation of the background in each image is due to different distances from the focal plane to the surface of the cover glass. Contrast stretching was applied to achieve the highest contrast in each image.
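A minimal sketch of this SNR estimate, assuming a 2-D image array and boolean masks marking cell and background pixels (how the masks are derived, e.g. by thresholding or manual ROIs, is not specified here):

```python
import numpy as np

def cell_snr(image, cell_mask, background_mask):
    """SNR as defined above: (mean signal inside the cell - mean background)
    divided by the standard deviation of the background signal."""
    background = image[background_mask]
    return (image[cell_mask].mean() - background.mean()) / background.std()
```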
Conversion to spheroplasts
Spheroplasts were prepared as previously described 46. Following 15 min of incubation in the staining solution, cells in a 2 mL sample of culture (OD600nm 0.8–1.0) were sedimented via centrifugation (at 2000 g, 37 °C, for 5 min). The supernatant was removed by pipetting. Cells were resuspended in 150 μL of spheroplast-forming solution (2 M NaCl, 27 mM KCl, 50 mM Tris hydrochloride pH 8.75, 15% sucrose, 15% glycerol) and transferred to one well of a thin-bottom 8-well chambered cover glass (Thermo Fisher Scientific, USA, 155409). The excitation laser was focused slightly above the bottom of the cover glass. The first measurement (time point 0) was recorded immediately after 15 μL of 0.5 M EDTA was added to the well. The obtained data are shown in Fig. 4 and Supplementary Video 1.

Figure 4 Conversion of Hbt. salinarum cells stained with MitoTracker Orange CMTMRos into spheroplasts by EDTA treatment. The entire process lasts approximately 10 min: in the beginning the cells have a clear rod shape (A; t = 0 s). During the conversion, short cells swell at one end (B, arrows; t = 200 s), while long cells first bend around their midpoint (C, arrows; t = 260 s). Finally, both types of cells form spheroplasts (D; t = 480 s). Cells with intermediate shapes (swollen or bent) are indicated with arrows.

For a more detailed observation of the conversion process, Hbt. salinarum cells were stained with MitoTracker Orange CMTMRos, concentrated by sedimentation (4000 g for 10 min at room temperature) and entrapped in a 2% agar solution (Helicon, Russia, H0102) with growth medium. The mixture was distributed on the bottom of a microscopy dish to form a thin layer. After solidification, the gel was washed 3 times with spheroplast-forming solution and then observed by fluorescence microscopy for up to 10 hours in an excess of spheroplast-forming solution containing 50 mM EDTA (see Fig. 5 and Supplementary Video 2).

Figure 5 Appearance of microspheres during the conversion of Hbt. salinarum cells into spheroplasts by EDTA treatment. Hbt. salinarum cells were stained with MitoTracker Orange CMTMRos, entrapped in 2% agar and exposed to EDTA in a spheroplast-forming solution, as described in Materials and Methods. In addition to spheroplast formation, the generation of microspheres (marked by arrows) was observed. In contrast to rod-shaped cells and spheroplasts, microspheres are mobile in the agar but fluctuate only in the proximity of spheroplasts originating from the same rod-shaped cells. This can be explained by fluctuation inside cavities formed in the agar after the contraction of the rod-shaped cells.

Recovery of rod-shaped Hbt. salinarum from spheroplasts and microspheres
Hbt. salinarum cells were stained with MitoTracker Orange CMTMRos in the growth medium and then converted to spheroplasts as described above. Microspheres were isolated from the mixture by filtration through a 0.45-µm PTFE filter (4555, Pall Corporation, Russia). Sample purity was confirmed by fluorescence microscopy (see Supplementary Figure 2C). In total, 500 µL of sample containing filtered microspheres was mixed with 2 mL of rich growth medium (250 g/L NaCl, 2 g/L MgSO4·7H2O, 3 g/L KCl, 3 g/L Na3C6H5O7, 0.2 g/L CaCl2, 5.0 g/L tryptone (Serva, Germany, 48647.01), 2.0 g/L yeast extract (Organotechnie, France, 19512), 1.25 g/L glycerol, 1 mg/L ZnSO4·7H2O, 50 mg/L FeSO4·7H2O, 0.3 mg/L MnSO4·H2O, pH 7.0) with 15% sucrose and incubated at 37 °C for 10 days in the dark. Spheroplasts were isolated from the microsphere-containing mixture by centrifugation in spheroplast-forming solution at 20000 g for 20 min at room temperature. Under these conditions, spheroplasts were predominantly sedimented, while microspheres remained in the supernatant. Centrifugation was repeated 4 times. Each time, the supernatant was discarded and the pellet was resuspended in fresh spheroplast-forming solution. Examination by confocal fluorescence microscopy (see Supplementary Figure 2B) confirmed that both the supernatant and the resuspended pellet after the last centrifugation were free of microspheres. After the 4th centrifugation, the pellet was resuspended in rich growth medium with 15% sucrose and incubated for 8 days. After the incubation, both samples were examined by DIC microscopy (Supplementary Figures 2D and E).

FLIP of giant rod Hbt. salinarum cells
After 15–20 days of incubation at growth conditions, Hbt. salinarum cells were stained with MitoTracker Orange CMTMRos and then incubated for several (1–4) days without stirring at room temperature. Rod-shaped cells longer than 30 µm were observed by fluorescence microscopy (see Fig. 6 and Supplementary Video 4) and iteratively bleached (several seconds per cycle), at intervals of approximately 30 s, using a high-intensity (100% power) 561-nm laser in a region (typically 10 × 10 µm²) close to one of the rod's ends.

Figure 6 Bleaching of stained giant rod Hbt. salinarum cells. The cytosolic integrity of the cells (approximately 40 µm long in this case) was demonstrated using FLIP of MitoTracker Orange CMTMRos in Hbt. salinarum cells.
The bleaching region is marked with a white square; the intensity of the fluorescence signal is shown as a heatmap (pseudocolour, with intensity increasing from purple to red according to the scale below the image). The loss of fluorescence intensity along the entire length of the cell shows that the majority of the dye can diffuse freely along the cell, demonstrating the existence of a single cellular compartment not separated by septal membranes. Isolates from environmental samples Four samples of extremely halophilic microorganisms were isolated from various hypersaline lakes using standard microbiological techniques 55 and then identified as follows: Halorubrum sp. – a euryarchaeon from the Alikes salt lake (Kos island, Greece); Halomonas sp. and Haloferax sp. – a proteobacterium and a euryarchaeon, respectively, from the Elton salt lake (Volgograd region, Russia); and Salicola sp. – a proteobacterium from the Chott el Djerid salt lake (Tunisia). The cultures were cultivated in a nutrient medium composed of 250 g/L NaCl, 20 g/L MgSO4·7H2O, 3 g/L KCl, 3 g/L Na3C6H5O7, 0.2 g/L CaCl2, 5.0 g/L tryptone (Serva, Germany, 48647.01), 2.0 g/L yeast extract (Organotechnie, France, 19512), 1.25 g/L glycerol, 1 mg/L ZnSO4·7H2O, 50 mg/L FeSO4·7H2O, 0.3 mg/L MnSO4·H2O, pH 7.0. The medium was sterilized at 121 °C for 30 min. Cultivation in liquid medium was carried out at 38.5 °C and 150 rpm on a Unimax 2010 orbital platform shaker (Heidolph, Germany) with illumination (Philips TL-D 18W/33–640 lamps) for 7 days in 100 mL conical flasks, with 5 mL of inoculum added to each flask. The halophilic microorganisms from the environmental samples were identified by comparative phylogenetic analysis of their 16S rRNA gene sequences. For this purpose, DNA was extracted using the Wizard technique combined with the modified Birnboim–Doly method 56. The concentration of DNA was 30–50 µg/mL; RNA was present only in trace quantities (<1%). A universal primer system (Univ11f–Univ1492r for bacteria and 8fa–1492r for archaea; Evrogen, Russia) was used for PCR and subsequent sequencing of the 16S rRNA gene PCR fragments 57, 58. PCR products were purified by electrophoresis in a 1% agarose gel followed by extraction with the Wizard PCR Preps DNA Purification System (Promega, Madison, WI, USA) as described in the manufacturer's protocol. PCR fragments of the 16S rRNA genes were sequenced by Sanger's method using a BigDye Terminator v.3.1 reagent kit (Applied Biosystems, Inc., USA) on an ABI PRISM 3730 system (Applied Biosystems, Inc., USA) according to the manufacturer's standard instructions. Sequence similarities were identified with BLAST 59. The 16S rRNA sequences were deposited in the GenBank database (MF148853 – Haloferax sp., KY781161 – Halorubrum sp., KY781162 – Halomonas sp., KY781163 – Salicola sp.). Results and Discussion Staining of Hbt. salinarum under growth conditions To find an appropriate method to stain Hbt. salinarum, we started from the idea that an important intrinsic property of a halobacterial cell is its negative membrane potential. This led us to try staining halobacteria with MitoTracker cationic probes, which are normally used to stain mitochondria: it is the mitochondrial membrane potential that drives cationic MitoTrackers 60 into the organelles. Halobacteria have membrane potentials of the same sign and can potentially be stained using the same approach. Indeed, our first experiments showed that living Hbt. salinarum cells can be stained in their growth medium using three MitoTracker dyes (MitoTracker Orange CMTMRos, MitoTracker Red CMXRos and MitoTracker Deep Red FM), as described in Materials and Methods, and observed using a confocal fluorescence microscope (see Fig. 1). The fluorescence signal accumulated in the cells is much higher than the background level (SNR >100), and it can be inferred that free dye is drawn into the cells. Apparently, MitoTrackers are driven by the membrane potential in the same way as in mitochondria. We also tried staining halobacteria with MitoTracker Green FM, which has been shown to stain mitochondria regardless of membrane potential 61 (data not shown). In this case, no accumulation of the dye in the cells was observed, which further supports our hypothesis that membrane potential plays a crucial role in the staining of halophiles with MitoTracker dyes. Effects on growth rate and staining durability To determine whether staining has a negative effect on growth rate, we studied cells stained with MitoTracker Orange CMTMRos (see Materials and Methods for details). Figure 2 shows the growth curve of the stained culture compared with that of a non-stained control. MitoTracker staining does not alter the normal cell growth rate (doubling time in the range of 1.2 ± 0.3 days), and cell size and shape remain unchanged for at least three generations, as observed by fluorescence microscopy (data not shown). The absence of an effect of MitoTracker dyes on growth rate is highly desirable for a wide range of biological studies and is not shared by the most popular staining alternatives. LIVE&DEAD kit staining is the only approach that has been systematically evaluated in terms of its applicability to living halophiles in their native environment 35. One of the two examined cell lines (Halobacterium sp. NRC-1) exhibited a two-fold decline in CFU (colony-forming units) in the presence of the staining reagent 35. For the other line (H. dombrowskii H4), LIVE&DEAD staining was shown to have almost no effect on cell viability 35. However, in both cases cells incubated with the dyes for long and for short time periods were compared; a rapid response occurring immediately after staining therefore cannot be excluded and can be a substantial experimental limitation. Another beneficial property of MitoTracker staining is its durability. The staining persists during washing (removal of the supernatant), dilution of stained cells into MitoTracker-free medium and cultivation of the cells (see Materials and Methods, MitoTracker durability during cultivation). As depicted in Fig. 3, the staining quality of Hbt. salinarum cells remains satisfactory, with cell fluorescence well above background, through at least 3 cycles of incubation/dilution. Conversion of Hbt. salinarum to spheroplasts To validate the staining quality, we imaged the well-known removal of the halobacterial cell wall by EDTA, in which rod-shaped halobacteria stained with MitoTracker Orange CMTMRos are converted into spherical particles called spheroplasts. After several minutes of incubation in a buffer containing 10 mM EDTA, the rod-shaped cells were converted to spheroplasts; the conversion process is depicted in Fig. 4 and Supplementary Video 1. The shorter individual cells swell at one end (Fig. 4B) and the remaining rod-like part of the cell merges with this roundish end-structure (Supplementary Video 1).
In this process, the cells undergo the same conversion steps as previously described on the basis of phase-contrast microscopy of Hbt. salinarum 62. The longer cells first bend around their midpoint and then undergo the same process (Fig. 4C). The resulting spheroplasts are clearly visible, with SNR >100. Each rod gives rise to one or two spheroplasts, depending on its length. In previous studies concerned with osmotic conversions 42, 62, 63, the average number of spheres produced (observed by phase-contrast microscopy 62, 63, electron microscopy 42 or fluorescence microscopy based on LIVE&DEAD staining 42) varied from one to four, which provides evidence that different protocols (salinity change, pH change, EDTA, other cell culture conditions before conversion, etc.) can result in different conversion physiology. In addition to MitoTracker Orange CMTMRos, MitoTracker Red CMXRos and MitoTracker Deep Red FM were used to study the conversion process, with similar results. The quality of the final spheroplast staining was equally good for all three MitoTracker dyes and is shown in Supplementary Figure 1. Microspheres formed during spheroplast conversion Recently, it was shown that rod-shaped Halobacterium species can convert into microspheres (approximately 0.4 µm in diameter) upon a decrease in water activity below 0.75 42. The term water activity is widely used to define the availability of water for the hydration of materials: a value of 1.0 indicates pure water, while 0.75 approximately corresponds to a 4 M NaCl buffer. One rod forms three to four microspheres that remain viable for years and proliferate into normal rods when supplied with proper nutrients. Microspheres with similar properties form in fluid inclusions in laboratory-grown halites, as well as in salt deposits isolated from minerals that are millions of years old. Presumably, microspheres represent a dormant form of haloarchaea with higher resilience, which allows them to survive unfavourable environmental conditions 22. Surprisingly, the MitoTracker staining approach revealed similar microspheres coexisting with spheroplasts after EDTA conversion performed according to the standard protocol described in Materials and Methods (Conversion to spheroplasts) (see Supplementary Figure 2A). To identify the origin of the microspheres, we recorded the conversion to spheroplasts of Hbt. salinarum cells trapped in a 2% agarose gel (see Supplementary Video 2 and Fig. 5). Single microspheres are formed from some halobacteria as they lose their S-layer during spheroplast conversion. Restoration of rod-shaped Hbt. salinarum from a spheroplast solution has been well described in previous studies and is widely used for Hbt. salinarum transfection 46, 64. Since microspheres had not previously been observed in spheroplast solutions, the question arises whether the restored rod-shaped Hbt. salinarum cells originate from microspheres, from spheroplasts or from both. To tackle this question, we separated spheroplasts and microspheres as described in Materials and Methods (Recovery of rod-shaped Hbt. salinarum from spheroplasts and microspheres). Interestingly, after cultivation for 10 days in nutrient-rich medium, both samples restored rod-shaped halobacteria (see Supplementary Figure 2D and E). This result implies that the microspheres formed from halobacteria during spheroplast conversion possess all that is necessary to grow into normal rod-shaped halobacteria.
Similar to the case of fluid inclusions, microspheres may serve as a halobacterial back-up for surviving unfavourable conditions. Giant rod Hbt. salinarum cells Most Hbt. salinarum cells observed in our experiments are rod-shaped, with an approximate length of 9 µm (see Supplementary Video 3). Much longer rod-shaped cells, up to 45 µm in length, were occasionally found. Notably, the length of a cell correlated with its motility: longer cells were less motile. The fraction of giant rod cells was higher when cells were incubated for a long time (15–20 days) in stationary phase without dilution. These observations are in good agreement with earlier data showing long and immobile Hbt. salinarum cells in surface-adherent biofilms (see Fig. 1 in ref. 65 and Fig. 5 in ref. 43). Little is known about the morphology of giant rod halobacterial cells. We observed that long rod cells split into several smaller ones during spheroplast formation (Fig. 4 and Supplementary Video 1). Thus, it could have been expected that giant rod cells consist of several smaller individual cells that remain concatenated after division. To test whether the giant rods are single cells whose cytosol is not divided into sub-compartments, we investigated the diffusion of MitoTracker dyes inside single giant rod cells. As described in greater detail in Materials and Methods (FLIP of giant rod Hbt. salinarum cells), we bleached the MitoTracker dye stepwise at one end of a giant rod and recorded the reduction in fluorescence intensity along the whole cell (Fig. 6 and Supplementary Video 4). Notably, the fluorescent dye is not completely bleached by this approach, which shows that a fraction of the dye does not diffuse freely but remains attached to some cell compartments. Binding of the dye's chloromethyl group to thiol groups of immobile intracellular (or membrane-associated) proteins is a likely explanation 61. Isolates from environmental samples To check the general applicability of our MitoTracker staining approach to other halophile species, we tested four strains of microorganisms isolated from environmental samples from hypersaline lakes: Halorubrum sp. – a euryarchaeon from the Alikes salt lake on Kos island, Greece; Halomonas sp. and Haloferax sp. – a proteobacterium and a euryarchaeon, respectively, from the Elton salt lake, Volgograd region, Russia; and Salicola sp. – a proteobacterium from the Chott el Djerid salt lake, Tunisia. These selected halophile species belong to different domains of life: two are bacteria and two are archaea. The samples were stained with the three different MitoTracker dyes in the same way as Hbt. salinarum and examined using a confocal fluorescence microscope. In all cases, the samples showed bright, homogeneous staining of the cells, as expected (see Fig. 7). This provides evidence that the staining mechanism is general and not restricted to Hbt. salinarum or even to a single domain of life. Thus, the suggested staining technique can be applied to the detection and analysis of as yet unknown microorganisms in extremely halophilic environments or in extraterrestrial samples. Figure 7 MitoTracker dye staining of halophilic microorganisms (Halomonas sp., Halorubrum sp., Haloferax sp., Salicola sp.) isolated from hypersaline lakes.
Conclusions MitoTracker dyes of three different colours (MitoTracker Orange CMTMRos, MitoTracker Red CMXRos and MitoTracker Deep Red FM) were shown to stain five halophilic microorganisms (three archaea and two bacteria) under growth conditions with high staining efficiency and a low level of residual free dye in the medium. The staining procedure does not require a washing step. The staining quality of the MitoTrackers is sufficient for studying the morphology of halophiles, and the high SNR makes the approach suitable for morphological studies, cell counting and detection, which is important for future applications. The staining persists through the harsh spheroplast conversion protocol, which implies that MitoTracker dyes can be used even when severe washing procedures are required. MitoTracker staining does not affect the growth rate and is inherited through several cell generations, making it a good choice for long-term studies that require non-cytotoxic staining of live cells. Our observations suggest that MitoTracker dyes are drawn into the cell by the membrane potential and subsequently become anchored inside the cells via binding of the chloromethyl group to thiol groups of intracellular proteins, as previously described for MitoTrackers 66. Overall, MitoTracker dyes are an excellent choice for staining halophiles and have many advantages over previously applied staining techniques. Using this new approach, we demonstrated the formation of viable microspheres during Hbt. salinarum spheroplast conversion and the cytoplasmic continuity of giant rod Hbt. salinarum cells.
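A note on reproducing the SNR estimate: the metric used throughout this paper (mean in-cell signal minus mean background signal, divided by the standard deviation of the background, as defined in Materials and Methods) is straightforward to compute from a single-channel fluorescence image once cell regions are segmented. The following is a minimal sketch, not the authors' actual analysis pipeline; the file name is a hypothetical placeholder, and Otsu thresholding stands in for whatever segmentation the original images required.

import numpy as np
from skimage import filters, io, measure

img = io.imread("stained_halobacteria.tif").astype(float)  # hypothetical image file

# Rough segmentation: an Otsu threshold separates bright cells from background.
cell_mask = img > filters.threshold_otsu(img)
background = img[~cell_mask]
bg_mean, bg_std = background.mean(), background.std()

# Per-cell SNR = (mean in-cell signal - mean background) / std of background.
labels = measure.label(cell_mask)
snr = [(r.mean_intensity - bg_mean) / bg_std
       for r in measure.regionprops(labels, intensity_image=img)]

print(f"median SNR over {len(snr)} cells: {np.median(snr):.1f}")

Averaging the estimate over about 20 cells per time point, as in Fig. 3B, reduces the sensitivity of the metric to segmentation errors in any single cell.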
Researchers from MIPT and their colleagues from Research Center Juelich (Germany) and Dmitry Mendeleev University of Chemical Technology of Russia have described a new method for studying microorganisms that can survive in extreme conditions. The scientists identified fluorescent dyes that enabled them to observe the life cycle of these organisms in real time. Halophiles, whose name is ancient Greek for "salt-loving," are microorganisms that thrive at high salt concentrations. Their ability to survive in hostile environments makes halophiles important subjects for both theoretical and applied studies. This line of enquiry may eventually facilitate the search for extraterrestrial life, shed light on the history of the Earth, and provide data sought by biotech specialists. The authors of the paper, who work at MIPT's Laboratory for Advanced Studies of Membrane Proteins, point out that these organisms can be used for many purposes, including cleaning up oil spills. However, this research faces a number of obstacles, not least of which is that the microbiological experiments are technically quite challenging. To study microorganisms in their natural environment, dyes are required, ideally selective ones. With their help, much more data can be obtained than when an unstained medium is examined. However, well-established fluorescent labels, and the antibodies that such dyes use to bind to a given target, often fail to work in salty environments. Additional difficulties are posed by the halophiles' thick membrane. "Despite all the hard work, scientists have so far been unable to find a substance that would enable them to observe these organisms 'live,' the way they really are. Instead, the bacteria had to undergo harmful preparation," says Ivan Maslov, a fifth-year MIPT student and co-author of the study. Video showing how bacteria change their shape from rods to spheres in a hostile medium. Credit: Ivan Maslov et al./Scientific Reports In the new paper published in Scientific Reports, the international research team described a solution to this issue. Their experiments showed that there is no need to synthesize new types of dyes: substances previously created for labeling mitochondria in eukaryotic cells demonstrated positive results in halophiles as well. There are two major types of cells: prokaryotes and eukaryotes. Prokaryotes, represented by bacteria, lack nuclei and other membrane-bound structures. Eukaryotes (animal, plant, and fungal cells) have nuclei and various organelles. Among them are mitochondria, which generate adenosine triphosphate molecules, a universal energy source consumed in various cellular processes. Interestingly, the modern view on the subject suggests that mitochondria were originally free-living bacteria and only later became symbionts of eukaryotic cells; even now, they still have their own DNA. MitoTracker dyes proved successful in staining a wide range of microorganisms: Halobacterium salinarum, Haloferax sp., Halorubrum sp., Salicola sp., and Halomonas sp. (the abbreviation "sp." indicates an unspecified species of the genus). The experiments demonstrated that it is possible not only to obtain clear photos and keep count of the cells, but also to observe the transformation of Halobacterium salinarum: when exposed to hostile chemical treatment, the cells changed their shape from rods into spheres. The team even made a video recording of that process.
The new method will be effective in labeling microorganisms in their natural environment, be it a saline deposit on Earth or a Martian soil sample retrieved by a rover. It will also help study the behavior of these organisms with minimal distortion of the results. "Halophiles are often found in ancient saline deposits that have been building up for millions of years. Our method helps locate these organisms in mineral formations and study them. This can shed light on the origin of life on Earth. According to one theory, life was brought to our planet from elsewhere in the form of bacteria," comments Valentin Borshchevskiy, lead author of the study and deputy head of the Laboratory for Advanced Studies of Membrane Proteins, MIPT.
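As a side note for readers reproducing the growth-rate comparison in Fig. 2 of the paper: the doubling time reported there (1.2 ± 0.3 days) follows from a log-linear fit of OD600 against time during exponential growth. A minimal sketch, using made-up OD600 readings purely for illustration:

import numpy as np

# Hypothetical OD600 readings during exponential growth (time in days).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
od = np.array([0.10, 0.13, 0.18, 0.24, 0.32])

# OD(t) = OD0 * exp(mu * t), so ln(OD) is linear in t with slope mu.
mu, ln_od0 = np.polyfit(t, np.log(od), 1)
print(f"growth rate mu = {mu:.2f}/day; doubling time = {np.log(2) / mu:.2f} days")

Fitting stained and non-stained cultures separately and comparing the two slopes is the quantitative form of the claim that MitoTracker staining does not alter the growth rate.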
10.1038/s41598-018-20839-7
Medicine
Using donor CAR T cells shows promise in treating myeloma patients in phase I trial
Sham Mailankody et al., Allogeneic BCMA-targeting CAR T cells in relapsed/refractory multiple myeloma: phase 1 UNIVERSAL trial interim results, Nature Medicine (2023). DOI: 10.1038/s41591-022-02182-7 Jennifer N. Brudno et al., Off-the-shelf CAR T cells for multiple myeloma, Nature Medicine (2023). DOI: 10.1038/s41591-022-02195-2 Journal information: Nature Medicine
https://dx.doi.org/10.1038/s41591-022-02182-7
https://medicalxpress.com/news/2023-02-donor-car-cells-myeloma-patients.html
Abstract ALLO-715 is a first-in-class, allogeneic, anti-BCMA CAR T cell therapy engineered to abrogate graft-versus-host disease and minimize CAR T rejection. We evaluated escalating doses of ALLO-715 after lymphodepletion with an anti-CD52 antibody (ALLO-647)-containing regimen in 43 patients with relapsed/refractory multiple myeloma as part A of the ongoing first-in-human phase 1 UNIVERSAL trial. Primary objectives included determination of the safety and tolerability of ALLO-715 and the safety profile of the ALLO-647-containing lymphodepletion regimen. Key secondary endpoints were response rate and duration of response. Grade ≥3 adverse events were reported in 38 patients (88.0%). Cytokine release syndrome was observed in 24 patients (55.8%), with 1 grade ≥3 event (2.3%), and neurotoxicity in 6 patients (14%), with no grade ≥3 events. Infections occurred in 23 patients (53.5%), of whom 10 (23.3%) had events of grade ≥3. Overall, 24 patients (55.8%) had a response. Among patients treated with 320 × 10^6 CAR+ T cells and a fludarabine-, cyclophosphamide- and ALLO-647-based lymphodepletion regimen (n = 24), 17 (70.8%) had a response, including 11 (45.8%) with a very good partial response or better and 6 (25%) with a complete response/stringent complete response. The median duration of response was 8.3 months. These initial results support the feasibility and safety of allogeneic CAR T cell therapy for myeloma. See clinicaltrials.gov registration NCT04093596. Main Multiple myeloma (MM) remains an incurable cancer despite recent advances in treatment. The mainstay of MM treatment includes immunomodulatory drugs, proteasome inhibitors and anti-CD38 monoclonal antibodies. Although these have prolonged survival, patients eventually relapse, with each subsequent line of therapy rendering a patient more refractory to treatment 1. Therapies targeting B cell maturation antigen (BCMA) have emerged as a new class of treatment for myeloma. BCMA is a member of the tumor necrosis factor receptor (TNFR) superfamily (TNFRSF) that is expressed primarily on mature B lymphocytes and plasma cells 2. BCMA maintains the survival and proliferation of these cell types by binding to B cell-activating factor (BAFF) and a proliferation-inducing ligand (APRIL), and is specifically implicated in the survival and proliferation of MM cells 3, 4. Multiple modalities targeting BCMA have demonstrated activity, including antibody–drug conjugates (ADCs) 5, bispecific antibodies 6, 7 and autologous chimeric antigen receptor (CAR) T cell therapy 8, 9. High overall response rates (ORRs) and durations of response (DOR) have been seen, particularly with autologous BCMA-targeted CAR T cell therapies. As of December 2022, the US Food and Drug Administration (FDA) had approved four different BCMA-targeted therapies for the treatment of patients with relapsed/refractory myeloma previously treated with a proteasome inhibitor (PI), an immunomodulator (IMiD) and an anti-CD38 monoclonal antibody: belantamab mafodotin-blmf, idecabtagene vicleucel (ide-cel), ciltacabtagene autoleucel (cilta-cel) and teclistamab 5, 8, 9. Belantamab mafodotin-blmf is a BCMA-directed antibody conjugated to a microtubule inhibitor (accelerated approval; approval rescinded in December 2022); at approved doses it achieved an ORR of 32% and a median DOR (mDOR) of 11 months 5, 10.
Teclistamab is a bispecific antibody that targets both BCMA and CD3 (accelerated approval; full approval pending); results from the phase 1/2 MajesTEC-1 study showed an ORR of 63% and an mDOR of 18.4 months 11, 12. Ide-cel and cilta-cel are autologous CAR T cell therapies targeting BCMA. Ide-cel achieved an ORR of 72% and an mDOR of 11 months 8, 13, whereas cilta-cel has a reported ORR of 98% with the mDOR not yet reached 9, 14. Although the recent FDA approvals of autologous BCMA-targeted CAR T cell therapies mark an important treatment advance for patients with MM, there are several logistical challenges to autologous CAR T therapy that may prevent widespread access. These include the potential scarcity of cells available and suitable for collection, because patients are often lymphopenic 15, 16; manufacturing constraints; and, perhaps most importantly, the lengthy vein-to-vein time, which makes bridging therapy necessary for most patients. These factors have resulted in wait lists for treatment, and some patients die before they can receive it 17. Allogeneic CAR T cell therapy aims to overcome these logistical hurdles by providing an off-the-shelf CAR T product that can be accessed without the need for leukapheresis and the subsequent lengthy manufacturing times 18. Previous studies of donor-derived, allogeneic anti-CD19 CAR T therapy have reported complete responses in patients with heavily pretreated B cell acute lymphoblastic leukemia, and this efficacy has been associated with a manageable safety profile 19, 20. ALLO-715 contains an integrated, self-inactivating, third-generation, recombinant lentiviral vector that expresses a second-generation anti-BCMA CAR containing a single-chain variable fragment (scFv) derived from a human anti-BCMA antibody and the intracellular domains of 4-1BB and CD3ζ (Extended Data Fig. 1). The extracellular region of the BCMA CAR also contains two mimotopes that confer susceptibility to the anti-CD20 monoclonal antibody rituximab and function as an off-switch in the presence of rituximab 21, 22. Two additional changes were made using transcription activator-like effector nuclease (TALEN) technology: (1) knockout of the T cell receptor alpha constant (TRAC) gene and (2) knockout of the cluster of differentiation (CD) 52 gene. Knocking out TRAC reduces the risk of graft-versus-host disease (GvHD) by reducing the expression of the T cell receptor (TCR)-αβ complex at the cell surface, preventing TCR-αβ-mediated recognition of histocompatibility antigens. CD52 is a cell-surface glycoprotein found on a variety of host immune cell types, including lymphocytes, monocytes, macrophages, eosinophils and dendritic cells; these host immune cells can mediate a host-versus-graft reaction, leading to the elimination of ALLO-715. CD52+ cells can be effectively depleted by anti-CD52 antibodies such as ALLO-647, a therapeutic immunoglobulin G1 monoclonal antibody. Inactivation of CD52 in the CAR T cells enables their expansion and persistence in patients who are lymphodepleted with ALLO-647. UNIVERSAL is a first-in-human phase 1 trial of ALLO-715 (NCT04093596) in patients with relapsed and refractory myeloma. UNIVERSAL consists of three parts: A, B and C. Part A evaluates the safety, efficacy, cellular kinetics, immunogenicity and pharmacodynamics of a single dose of ALLO-715 after lymphodepletion (LD) with an ALLO-647-containing regimen.
Part B evaluates ALLO-715 in combination with nirogacestat, and part C evaluates a consolidation regimen of ALLO-715 in which two doses are given approximately 2 weeks apart. In the present study, we report a nonprespecified interim analysis of part A, which is continuing to enroll patients. Results Patients The primary objectives of part A of the phase 1 UNIVERSAL trial were the safety and tolerability of ALLO-715 at increasing dose levels, as well as the safety profile of ALLO-647 used for LD in combination with fludarabine and/or cyclophosphamide before ALLO-715 infusion. Between 10 September 2019 and 14 October 2021, 48 patients with relapsed/refractory MM were enrolled into part A of UNIVERSAL. Eligible patients were aged ≥18 years and must have received at least 3 previous lines of therapy, including a PI, an IMiD and an anti-CD38 monoclonal antibody. Patients were also required to be refractory to their last line of treatment (progression during or within 60 days of their last dose) and to have measurable disease, an Eastern Cooperative Oncology Group (ECOG) performance status of 0 or 1 and adequate organ function. Patients who had received previous non-CAR T BCMA-targeted therapy (bispecific, ADC or other) were permitted; previous exposure to a BCMA-directed CAR T was excluded. All patients were screened for the presence of donor (product)-specific anti-human leukocyte antigen (HLA) antibodies (DSAs), and those with positive DSA tests (n = 6) were excluded. Five patients did not proceed to LD owing to progressive disease, death or acute events (Fig. 1). The median time from enrollment to the start of LD was 5 days (range 0–20 days), and 43 patients received both LD and ALLO-715, which in the present study was derived from 3 healthy donors. The trial followed a standard 3 + 3 dose escalation design with 4 dose levels of ALLO-715. Escalating dose levels were tested sequentially, and all patients who started treatment received all doses as per their dose escalation assignment. Doses of ALLO-647 and ALLO-715 were not escalated simultaneously. In addition, the experimental design allowed for expansion cohorts of up to 12 patients to further characterize the safety and efficacy of specific LD and ALLO-715 dose levels. The 43 patients who received ALLO-715 were treated at 4 target dose levels (DLs): DL1, 40 × 10^6 (n = 3); DL2, 160 × 10^6 (n = 7); DL3, 320 × 10^6 (n = 27); and DL4, 480 × 10^6 (n = 6) CAR+ T cells. Several LD regimens were also evaluated, including fludarabine (90 mg m^-2) and cyclophosphamide (900 mg m^-2) combined with ALLO-647 at a 3-day dose of 39 mg (FCA39; n = 21), 60 mg (FCA60; n = 13) or 90 mg (FCA90; n = 3). Cyclophosphamide and ALLO-647 were also tested without fludarabine (CA39; n = 6). A total of 33 patients discontinued treatment: 24 owing to progressive disease, 7 owing to death and 2 owing to withdrawal of consent. The cutoff date for the clinical analysis was 14 October 2021. Fig. 1: Consort diagram. M, 10^6 CAR+ T cells. LD nomenclature: C, cyclophosphamide 300 mg m^-2 on days −5, −4 and −3; F, fludarabine 30 mg m^-2 on days −5, −4 and −3; A39, ALLO-647 13 mg per day on days −5, −4 and −3; A60, ALLO-647 20 mg per day on days −5, −4 and −3; A90, ALLO-647 30 mg per day on days −5, −4 and −3. Patient characteristics are shown in Table 1. The median age was 64 years (range 46–77 years) and 63% of patients were male.
The median time from the diagnosis of myeloma was 4.9 years (range 0.9–26.4 years), and 37% of patients had a high-risk cytogenetic profile, defined as the presence of del(17p), t(14;16) or t(4;14). In addition, 21% of patients had extramedullary disease. Patients had received a median of 5 (range 3–11) previous treatment regimens; 91% had triple-refractory disease, that is, refractory to a PI, an IMiD and an anti-CD38 monoclonal antibody, whereas 42% had penta-refractory disease, that is, refractory to 2 PIs, 2 IMiDs and an anti-CD38 monoclonal antibody. Three patients had received previous BCMA-targeted treatment. All patients were refractory to their last line of treatment. No patients received bridging therapy, and all patients observed a 2-week wash-out between their last treatment regimen and the start of LD. Table 1 Baseline characteristics of patients who received ALLO-715 Safety and tolerability of ALLO-647 and ALLO-715 During dose escalation, one dose-limiting toxicity (DLT), grade 5 fungal pneumonia (related to LD) on day 8, was observed at a dose of 160 × 10^6 CAR+ T cells with an LD regimen of cyclophosphamide (total dose 900 mg m^-2) and ALLO-647 (total dose 39 mg; CA39). All 43 patients reported at least one adverse event (AE) (Table 2), with 88% of patients having AEs of grade ≥3. Hematological events were expected given the use of lymphodepleting chemotherapy. The most frequent AEs were neutropenia (69.8%), anemia (55.8%) and thrombocytopenia (51.2%). Of the ten deaths that occurred during the study after ALLO-715 administration, seven were related to disease progression and three were due to grade 5 infections: fungal pneumonia, adenoviral hepatitis and sepsis. Table 2 All-grade AEs occurring in ≥20% of patients, grade ≥3 events occurring in two or more patients, and CRS and neurotoxic effects in patients who received ALLO-715 Infections occurred in 53.5% of patients, with 23.3% of patients experiencing a grade ≥3 infection (Extended Data Table 1). The only infection reported in >10% of patients was cytomegalovirus (CMV) infection (n = 14; 32.6%). Grade 1 or 2 CMV reactivation, consistent with CMV viremia, occurred in 12 patients; overall, 12 of the 14 patients received treatment, including 10 with oral valganciclovir and 2 with letermovir (1 as treatment and 1 for viral control). Two patients had grade 3 CMV reactivation and were treated with intravenous antiviral therapy; of these two, one had no sign of end-organ damage, while the second experienced grade 3 CMV disease in the setting of adenovirus hepatitis and human herpesvirus 6 infection, with subsequent death from the adenoviral hepatitis. The only other grade ≥3 infection reported in more than one person was pneumonia (n = 3; 7%). Prolonged cytopenias (defined as cytopenias present at study day 56 and for the preceding ≥21 days) occurred in 8 (19%) subjects, of whom 2 had prolonged neutropenia and thrombocytopenia (bicytopenia), 2 each had prolonged neutropenia or anemia, 1 had prolonged thrombocytopenia and 1 had prolonged pancytopenia. The cytopenias resolved in seven patients at a median of 2.6 months (range 1.9–8.4 months), whereas the patient with prolonged pancytopenia remained cytopenic at the time of death from adenoviral hepatitis. Cytokine release syndrome (CRS) occurred in 24 (55.8%) patients, with more cases of CRS reported at the higher dose levels of ALLO-715. Of the 27 patients who received DL3, 19 (70%) experienced CRS.
All occurrences of CRS were grade 1 or 2, except for one grade 3 event reported in a patient at DL3. The median time to onset was 7 days and the median duration was 4 days. Ten (23.3%) patients received tocilizumab and six (14%) received steroids as treatment for CRS. Events of potential neurotoxicity were identified in 6 (14%) patients, and 2 (5%) had events concurrent with CRS. All cases were grade 1 or 2, and no patients received steroids for events of neurotoxicity. The time to onset relative to infusion ranged from 4 to 56 days, with a median of 8.5 days. No long-term neurotoxicity, movement disorders or events of parkinsonism were reported. No cases of GvHD were reported. Infusion-related reactions to ALLO-647 were seen in 12 (28%) patients, with all events being grade 1 or 2. No anti-ALLO-715 scFv antibodies were detected in any subject. Anti-myeloma activity of ALLO-715 The secondary objectives of this trial included evaluating the anti-myeloma activity of ALLO-715 by assessing the response rate. At a median follow-up of 10.2 months (95% confidence interval (CI): 3.8 to not reached), 24 of 43 patients (55.8%; 95% CI: 39.9, 70.9) had a response, with 15 patients (34.9%) experiencing a very good partial response or better (VGPR+). Responses were observed in 0 of 3 patients receiving DL1, 2 of 7 patients (28.6%) receiving DL2, 19 of 27 patients (70%) receiving DL3 and 3 of 6 patients (50%) receiving DL4. Based on the clinical responses and cellular kinetics, DL3 (320 × 10^6 CAR+ cells) with FCA39, FCA60 or FCA90 LD was expanded to treat additional patients (n = 24; 11 with FCA39 LD, 10 with FCA60 and 3 with FCA90). Among these patients, 17 (70.8%; 95% CI: 48.9, 87.4) achieved a partial response or better, 11 (46%) were VGPR+ and 6 (25%) were in complete remission/stringent complete remission (CR/sCR). The median time to response for this cohort was 16 days (range 15–57 days), and the mDOR was 8.3 months (95% CI: 3.4, 11.3) (Fig. 2). The expression of BCMA in patients after relapse is still being analyzed, and these data are not yet available. Fig. 2: Duration of response in patients who received 320 × 10^6 CAR+ T cells with an FCA(a) LD regimen. (a) FCA indicates conditioning with fludarabine, cyclophosphamide and varying doses of ALLO-647. MR/SD, minimal response/stable disease; NE, not estimable; PD, progressive disease; PR, partial response. Responses were observed in patients treated with ALLO-715 who had high-risk cytogenetic abnormalities, penta-refractory disease, a high tumor burden and extramedullary disease (Fig. 3). Efficacy results are summarized in Table 3. Tumor responses to LD using regimens of cyclophosphamide plus ALLO-647 (CA) at a total dose of 39 mg (13 mg per day) are presented in Extended Data Table 2. As an exploratory objective, minimal residual disease (MRD) was evaluated in 14 of the patients with a best response of VGPR or better (n = 15), and 13 obtained MRD-negative status (93%). Fig. 3: Subgroup analysis of response in patients who received 320 × 10^6 CAR+ T cells with an FCA(a) LD regimen. Overall, n = 24 independent patients. Data are presented as median values ± the interquartile range. (a) FCA indicates conditioning with fludarabine, cyclophosphamide and varying doses of ALLO-647. *Presence of extramedullary disease at screening. Presence of extramedullary disease 'not applicable' is categorized as 'No'.
Table 3 Tumor response according to dose of ALLO-715 and LD regimen Cellular kinetics of ALLO-715 The cellular kinetics of ALLO-715 were characterized as a secondary objective. In the present study, ALLO-715 was derived from three healthy donors, with similar levels of expansion observed for all donors. Flow cytometry with an anti-idiotype antibody was used to distinguish ALLO-715 cells from host lymphocytes (Extended Data Fig. 2). ALLO-715 displayed in vivo expansion across all dose levels, with numerically higher expansion seen at DL3 than at the lower dose levels (DL1 and DL2). No apparent increase in expansion was observed at DL4 compared with DL3 (Extended Data Table 3 and Extended Data Fig. 3). In the DL3 FCA cohorts, expansion occurred in 20 of 24 patients and, in those with expansion, the median peak expansion was seen at day 10. No correlation was observed between the occurrence of CRS and the success or failure of expansion. Persistence was variable, with 16 of 24 patients (67%) having no detectable CAR T cell levels by day 28 (Extended Data Fig. 4a,b). CAR T cell levels, as measured by vector copy number, were numerically higher in those who had a response than in those who did not (Extended Data Fig. 4c). ALLO-647 pharmacokinetics and immunogenicity The pharmacokinetic and immunogenic profiles of ALLO-647 were evaluated as a secondary objective. As expected, the serum concentration of ALLO-647 was dependent on the dose level, with the highest peak seen in the FCA90 group (Extended Data Fig. 5a). Concentrations decreased in all groups over time and were undetectable in all patients by month 3. Post-treatment anti-drug antibodies (ADAs) against ALLO-647 were observed in 14% (9 of 64) of patients, and only in patients administered 39 mg of ALLO-647. In the population pharmacokinetic model of ALLO-647, the presence of ADAs did not affect clearance of ALLO-647. Host immune cell depletion and reconstitution Host immune depletion and reconstitution were also evaluated as a secondary objective. Host immune cell counts trended lower immediately after LD, on days 0 and 7, in responders compared with nonresponders; this was particularly apparent for host T cells on days 0 and 7 in subjects with a VGPR or better compared with nonresponders (Extended Data Fig. 5b). The depth of LD depended on the treatment regimen, with lower levels of total CD45+ lymphocytes seen with FCA90 than with FCA39 and FCA60 (Extended Data Fig. 6a). Similar dose-dependent differences were seen in B cells, T cells and natural killer (NK) cells (Extended Data Fig. 6b–d), and these differences were maintained during recovery of the host immune cells. NK cells rapidly recovered to predepletion levels within 1–3 months; recovery of T cells was slower, with the median T cell count across subjects reaching 200 cells per μl by month 3, and B cells recovered to a median of 10 cells per μl by month 6. Discussion In this first-in-human phase 1 trial in heavily pretreated patients with MM, we demonstrate feasibility, acceptable safety and preliminary evidence of anti-myeloma efficacy for ALLO-715, the first allogeneic BCMA-targeted CAR T therapy. ALLO-715 was successfully administered to 43 patients, with a median time from enrollment to the start of treatment of 5 days. The trial established an encouraging safety profile for ALLO-715, in line with autologous anti-BCMA cell therapies.
In the present study, CRS and neurotoxicity were observed in 56% and 14% of patients, respectively, with only one grade 3 case of CRS and no grade ≥3 neurotoxicity; these rates are somewhat lower than those seen with autologous anti-BCMA CAR T cell therapy 13, 14, 23. Likewise, rates of prolonged cytopenia in part A of this trial (19%) were similar to those seen with autologous CAR T therapies. Notably, no GvHD was reported, suggesting that knockout of TRAC provides sufficient mitigation of this potential adverse event with allogeneic products. The utilization of ALLO-647 as part of an LD regimen was a key component of this trial. ALLO-647 provides prolonged LD with no apparent increase in grade ≥3 infections compared with autologous anti-BCMA cell therapies 8, 13, 24, 25. We observed grade ≥3 infections in 22% of patients, including 3 grade 5 events, whereas grade ≥3 infections were reported in 20% of patients treated with cilta-cel in the phase 1b/2 CARTITUDE-1 trial 14 and in 22% of patients treated with ide-cel in the phase 2 KarMMa trial 13. Among allogeneic anti-CD19 CAR T trials, grade ≥3 infections were reported in 27% of patients treated with ALLO-501 in the ALPHA study 26 and in 33% of patients treated with CB-010 in the ANTLER study (ref. 27). It should be noted that viral reactivations, in particular CMV reactivations, were seen in 33% of patients in UNIVERSAL, highlighting the importance of CMV monitoring and consideration of anti-infective prophylaxis. Reassuringly, only two grade 3 CMV reactivations requiring intravenous antiviral therapy were reported, and no grade 4 or 5 reactivations or infections occurred. Given the small sample size and the lack of a control arm, it is difficult to ascertain what role fewer or shorter courses of prophylaxis may have played at the cohort or even the individual level. Per the National Comprehensive Cancer Network (NCCN) guidelines on infectious prophylaxis, patients treated with anti-CD52 therapeutic antibodies are considered at high risk for CMV reactivation 28, which may differentiate the UNIVERSAL patient population from those in trials of autologous CAR T therapies. However, most trials of autologous CAR T products to date have categorized infections broadly as bacterial, viral or fungal, which makes it difficult to tease out the individual viral types for comparison with the CMV rates in the UNIVERSAL trial. In addition, CMV monitoring was not standard in the autologous CAR T clinical trials, so CMV may be underreported in published datasets for autologous CAR T products. CMV reactivation is nonetheless an emerging risk for which institutional practices are adjusting, implementing both monitoring and prophylaxis 29. Although patients were treated on an inpatient basis in this trial, the safety profile of ALLO-715 after FCA LD raises the possibility of outpatient administration, which could be explored in the future. The trial evaluated multiple doses of ALLO-715 and multiple regimens for LD. ALLO-715 displayed in vivo expansion across all dose levels, including in 20 of 24 patients at DL3. A lack of expansion did not correlate with the occurrence of CRS, and the probable explanation for the lack of CAR T cell expansion in these patients was insufficient host immune cell depletion.
Although responses were observed at doses ranging from 160 × 10^6 to 480 × 10^6 cells, most responses occurred at the higher dose levels, with no apparent benefit of increasing the cell dose from 320 × 10^6 to 480 × 10^6 cells. Responses were also observed with the different ALLO-647-containing lymphodepleting regimens, with most responses observed with the FCA regimens. In the group of patients treated with FCA LD and 320 × 10^6 cells (n = 24), 71% had an objective response, with a median DOR of 8.3 months. In patients treated with FCA60 and 320 × 10^6 cells (n = 10), the ORR of 80% falls within the range observed with other anti-BCMA therapies in the relapsed/refractory MM setting 5, 8, 9, 11, 30, 31, whereas the median DOR for this group has not yet been reached. Moreover, response rates may further improve in future studies of allogeneic CAR T cells, such as those including consolidation dosing 32, the addition of the gamma-secretase inhibitor nirogacestat 33 or the next-generation construct ALLO-605, which includes an intrinsic signal 3 (TURBO domain) designed to recapitulate cytokine signaling selectively in the CAR T cells, with the aim of improving engraftment and persistence 34. Autologous CAR T cells targeting BCMA are evolving rapidly, with the approvals of ide-cel and cilta-cel and multiple additional products under evaluation. Allogeneic CAR T cell therapy has several meaningful advantages over autologous CAR T products, including, with ALLO-715, ease of manufacture and administration without extended delays or the need for leukapheresis or bridging therapy. For example, in KarMMa, the registrational study for idecabtagene vicleucel, the median time from leukapheresis to product availability was 33 days (range 26–49 days), and 87% of subjects received bridging therapy to control disease during the manufacturing process 8. Similarly, in CARTITUDE-1, the registrational study for cilta-cel, the median time from leukapheresis to product availability was 32 days (range 27–66 days) 9. In contrast, the median time from enrollment to the start of LD in this trial was 5 days, and bridging therapy was not needed. In addition, once the small dose expansion cohorts were open, all patients who enrolled received treatment, highlighting a key advantage of off-the-shelf therapy. In comparison, ide-cel and cilta-cel had dropout rates of 9% and 14%, respectively, between leukapheresis and CAR T infusion 13, 14. As targeted anti-BCMA therapies enter earlier lines of therapy, their availability as an off-the-shelf product will become increasingly important, because the potential use case extends beyond areas with ready access to leukapheresis and to a larger patient population. UNIVERSAL is currently continuing to enroll. Ongoing and future studies will also evaluate the importance of the phenotypic characteristics of T cells obtained from healthy donors versus those obtained from relapsed or refractory patients with multiple previous lines of therapy. In addition, enrollment has begun in a study of ALLO-605, a next-generation, BCMA-targeted, allogeneic TurboCAR product (NCT05000450). In summary, ALLO-715 is the first allogeneic CAR T cell therapy for myeloma, and these initial results from the UNIVERSAL trial provide evidence of feasibility, safety and efficacy for this off-the-shelf cellular therapy as a potential treatment for patients with MM.
Methods Study design and participants UNIVERSAL is a phase 1, single-arm, open-label trial (NCT04093596) conducted at 13 sites in the United States of America. For this unplanned, post-hoc analysis, 48 patients were enrolled in the study and 43 received treatment, as shown in Fig. 1, including 16 women (37%), 18 patients (52%) aged <65 years, 18 patients (42%) aged 65–75 years and 3 (7%) aged ≥75 years. Eligible patients were men or women who had relapsed/refractory MM, were aged ≥18 years and had received at least three previous lines of therapy, including a PI, an IMiD and an anti-CD38 monoclonal antibody. Patients were also required to be refractory to their last line of treatment (progression during or within 60 days of their last dose) and to have measurable disease, an ECOG performance status of 0 or 1, adequate hematological, renal and liver function and a left ventricular ejection fraction ≥50%, with no clinically significant pericardial or pleural effusion at screening. All patients were screened for the presence of DSAs, and those with positive DSA tests (n = 6) were excluded. Patients were also required to have normal blood oxygen saturation levels. Patients who had received previous non-CAR T BCMA-targeted therapy (bispecific, ADC or other) were allowed; previous exposure to a BCMA-directed CAR T was excluded. The acute effects of any previous therapy had to be resolved, and any patients who had a grade ≥2 AE or serious AE (SAE) before LD were discussed with the sponsor before inclusion. Finally, eligible patients had to be seronegative for hepatitis B antigen and hepatitis C antibody, and women of childbearing potential had to have a negative serum pregnancy test. Patients with known active or previous central nervous system (CNS) or leptomeningeal involvement of myeloma or plasma cell leukemia were excluded, as were patients with significant CNS dysfunction or any autoimmune disease with CNS involvement. Patients with a current or past thyroid disorder, except hypothyroidism controlled on a stable dose of hormone replacement therapy, were excluded, as were those with active malignancies requiring systemic treatment within the 3 years before enrollment. Patients were excluded if they had major surgery within 3 months before the start of LD, radiation therapy within 2 weeks before the start of LD or autologous stem-cell transplantation within 6 weeks before the start of LD. Eligible patients were not permitted to have had any previous allogeneic hematopoietic stem-cell transplantation, and those who had received systemic anticancer therapy within 2 weeks before the conditioning regimen or rituximab within the past 2 years were also excluded. Patients were excluded if they had participated in other studies of investigational drugs within 28 days before lymphodepletion, had received previous treatment with any gene therapy (CAR T cell therapy was permitted) or had received previous treatment with an anti-CD52 monoclonal antibody (ALLO-647 was permitted). Ongoing treatment with immunosuppressive agents was not permitted, including corticosteroid use within 1 week before the first dose of ALLO-715 (except inhaled steroids for asthma, topical steroid use or other local corticosteroid administration), and infliximab had to be stopped at least 45 days before administration of ALLO-715. Active and clinically significant autoimmune disease within the last 2 years was exclusionary, as was any active uncontrolled bacterial, fungal or viral infection at screening and the presence of positive blood cultures within 7 days before ALLO-715 infusion.
Patients known to be refractory to platelet or red blood cell transfusions were excluded, as were those with any form of primary or acquired immunodeficiency. Patients with any indwelling line or drain were excluded, although dedicated central venous access catheters such as a Port-a-Cath or Hickman catheter were permitted. Patients were excluded if they had any of the following in the previous 6 months: myocardial infarction, congenital long QT syndrome, torsade de pointes, arrhythmias, left anterior hemiblock, unstable angina, coronary/peripheral artery bypass graft, symptomatic congestive heart failure, cerebrovascular accident, transient ischemic attack, pulmonary embolism, deep vein thrombosis or other clinically significant episode of thromboembolic disease. Patients with ongoing cardiac dysrhythmias of NCI CTCAE (National Cancer Institute Common Terminology Criteria for Adverse Events) grade ≥2 or atrial fibrillation of any grade, and those with cardiac amyloidosis, were also ineligible. A history of hypertensive crisis or hypertensive encephalopathy within 6 months before screening was exclusionary, as was known or suspected hypersensitivity to murine or bovine products. Fertile male subjects and female subjects of childbearing potential were excluded if they were unwilling or unable to use a highly effective method of contraception for at least 12 months (6 months for males) after the last dose of cyclophosphamide, or for 6 months (females and males) after the last dose of ALLO-647, whichever was later. Finally, patients with other acute or chronic medical or psychiatric conditions (including recent or active suicidal ideation or behavior) or laboratory abnormalities that could increase the risk associated with study participation or investigational product administration, or could interfere with the interpretation of study results, were excluded, as were those unwilling to participate in an extended safety monitoring period. ALLO-715 was manufactured by Allogene from peripheral blood mononuclear cells obtained by leukapheresis from three healthy volunteer donors and was supplied for infusion as a frozen cell suspension in four different good manufacturing practice lots. Patients received LD followed by ALLO-715 at one of four dose levels (DLs) in a 3 + 3 dose escalation design: 40 (DL1), 160 (DL2), 320 (DL3) and 480 (DL4) × 10^6 CAR+ T cells. The first subject was treated and observed for 28 days before subsequent subjects were treated with ALLO-715; if no DLT occurred in 3 subjects, or if there was 1 DLT among 6 subjects at a given DL, enrollment at the next higher DL commenced in a staggered fashion. Several LD regimens were evaluated: FCA39, FCA60, FCA90 and CA39, with fludarabine (F) 90 mg m^-2, cyclophosphamide (C) 900 mg m^-2 and ALLO-647 (A) 39, 60 or 90 mg divided over 3 days. The starting dose of ALLO-647 (39 mg) was determined on the basis of preclinical pharmacokinetic results, and FCA39 was the first LD regimen tested. ALLO-647 was infused over 3 days at a concentration of 10 mg ml^-1. The study protocol allowed alternative doses of ALLO-647 to be tested for LD in DL2, DL3 or DL4 at the same or a lower dose of ALLO-715, following the same 3 + 3 design used for the ALLO-715 escalation: escalate if no DLT occurred in 3 subjects or if there was 1 DLT among 6 subjects at a given dose level. There was no simultaneous escalation of ALLO-715 and ALLO-647.
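The 3 + 3 escalation rule described above (escalate after 0 DLTs in 3 subjects, expand to 6 after 1 DLT in 3, and stop if more DLTs occur) reduces to a small decision function. A minimal sketch of the generic 3 + 3 logic, not the sponsor's actual trial-conduct software:

def three_plus_three(n_evaluable: int, n_dlt: int) -> str:
    """Standard 3 + 3 decision for one dose level."""
    if n_evaluable == 3:
        if n_dlt == 0:
            return "escalate to the next dose level"
        if n_dlt == 1:
            return "expand this cohort to 6 subjects"
        return "stop: maximum tolerated dose exceeded"
    if n_evaluable == 6:
        return ("escalate to the next dose level" if n_dlt <= 1
                else "stop: maximum tolerated dose exceeded")
    raise ValueError("decisions are made after 3 or 6 evaluable subjects")

print(three_plus_three(3, 0))  # -> escalate to the next dose level
print(three_plus_three(6, 1))  # -> escalate to the next dose level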
In the dose escalation phase, subjects were enrolled sequentially into FCA (at least three subjects) and then into other available cohorts when open, until three to six subjects had been treated in each cohort. Only one LD regimen was tested at a time, and further LD doses were opened only after the DLT period of the previous LD regimen. The FCA60 regimen was chosen for the expansion cohorts based on its favorable safety, efficacy and tolerability profile. During the study, patients received prophylaxis against Pneumocystis pneumonia, herpesvirus infections and fungal infections according to NCCN guidelines for patients at high risk of infection 37 or standard institutional practice. NCCN recommendations included antimicrobial prophylaxis during neutropenia using a fluoroquinolone, fungal prophylaxis during neutropenia, antiviral therapy for 2 months after anti-CD52 therapy and until CD4+ ≥200 cells μl^-1, and prophylaxis for Pneumocystis jirovecii with trimethoprim/sulfamethoxazole (or atovaquone, dapsone or pentamidine if intolerant of trimethoprim/sulfamethoxazole) for at least 2 months after ALLO-647 and until CD4+ ≥200 cells μl^-1. Letermovir was recommended for CMV prophylaxis, especially in CMV-seropositive subjects, and patients were monitored weekly by PCR for a minimum of 2 months after ALLO-647 treatment. On confirmation of CMV viremia, the recommended pre-emptive therapy was oral valganciclovir or intravenous ganciclovir for 2 weeks and until CMV was no longer detectable. CMV PCR was monitored weekly. There was no protocol-defined level at which treatment was recommended; sites and investigators used their clinical judgment. For most subjects, the CMV PCR level at which treatment was started was between 170 IU ml^-1 and 2,200 IU ml^-1, with a median of 800 IU ml^-1. Patients were to be followed for at least 54 months and then asked to participate in a separate long-term follow-up study. Study oversight Allogene sponsored the study, provided ALLO-715 and ALLO-647, and collaborated with academic investigators on study design, data analysis/interpretation and manuscript writing. The trial was conducted in accordance with the International Conference on Harmonization Good Clinical Practice Guidelines, the Declaration of Helsinki and all applicable regulatory requirements. The trial protocol was approved by an independent institutional review board (IRB) at each site before initiation, including the Medical College of Wisconsin IRB, Memorial Sloan Kettering Cancer Center IRB, Vanderbilt IRB, Stanford IRB, Advarra IRB, IntegReview IRB, Dana-Farber Cancer Institute IRB, Mayo Clinic IRB, WCG IRB, Cleveland Clinic IRB, Mt. Sinai IRB and City of Hope IRB. A data safety monitoring board was not used in this part of the study. All patients gave written informed consent. All authors confirm the accuracy of the data and the adherence of the trial to the protocol. Endpoints and study procedures The primary objectives of the present study were to evaluate the safety and tolerability of ALLO-715 and ALLO-647. All AEs were collected for 3 months (90 days) after the dose of ALLO-715 or until the subject began a new anticancer therapy, whichever happened first. After 3 months, only SAEs were collected, until disease progression or initiation of new anticancer therapy, whichever happened first. SAEs assessed by the investigator as related to ALLO-715 or ALLO-647 were collected regardless of the time of occurrence. Severity was graded according to the NCI CTCAE v.5.0.
CRS was defined and graded according to the American Society of Transplantation and Cellular Therapy (ASTCT) grading criteria 35 . Neurological toxicity was evaluated using a broad standardized Medical Dictionary for Regulatory Activities (MedDRA) query (SMQ) of noninfectious encephalopathy/delirium, with adjudication by clinical review using ASTCT grading criteria for events reported as immune effector cell-associated neurotoxicity syndrome and CTCAE v.5.0 criteria for all other neurological toxicities. Infusion-related reactions (IRRs) were identified by review of AEs that occurred within 24 hours of ALLO-647 infusion and were deemed related to ALLO-647. Potential symptoms of IRRs were adjudicated by clinical review. Events were graded according to the NCI CTCAE on the basis of the highest individual symptom grade. Prolonged cytopenias were defined as AEs of neutropenia and/or thrombocytopenia and/or anemia of grade ≥3 present or persisting at study day 56 after ALLO-715 infusion and ongoing for the preceding 21 days. Lymphopenias were defined as events reported by investigators using the MedDRA Preferred Terms lymphopenia, lymphocyte count decreased, T lymphocyte count decreased or B lymphocyte count decreased. Grading was per the individual sites using CTCAE v.5.0, and most of the events were reported as lymphocyte count decreased. Key secondary endpoints were response and duration of response (DOR). Clinical response and disease progression were assessed by the investigator according to the International Myeloma Working Group consensus criteria 36 . In addition, ALLO-715 cellular kinetics, ALLO-647 pharmacokinetics, immunogenicity of ALLO-715 and ALLO-647, and host immune depletion and reconstitution by T and B lymphocyte and NK (TBNK) cell subsets were also evaluated. For TBNK and cellular kinetics evaluations, patient peripheral blood samples were assayed by multi-parameter flow cytometry before and after infusion to detect TBNK cell subsets with validated antibody panels, or BCMA CAR+ T cells using an anti-idiotypic antibody, with a lower limit of detection of 0.01% of leukocytes (CellCarta Precision Medicine). Whole-blood samples were collected into Cyto-Chex (Streck) and shipped on ice overnight to CellCarta. Samples were stained with antibodies to CD45 and CD3 (Becton Dickinson) and an anti-ALLO-715 CAR idiotype antibody (Allogene Therapeutics) and run on an LSR-II cytometer (Becton Dickinson). Data analysis was done using FlowJo software (FlowJo LLC). After gating out dead cells and debris, single cells were defined using side scatter peak height versus area. Lymphocytes were gated using a combination of side scatter and CD45. CAR T cells were separated from host lymphocytes using an anti-idiotype antibody developed at Allogene. A quantitative PCR (qPCR) assay was also used to determine lentiviral vector transgene copy number, enabling quantitative tracking of BCMA CAR+ T cells down to a minimum of 50 copies of transgene per microgram of DNA (Navigate BioPharma). The concentration versus time profile of ALLO-647 was derived from a population pharmacokinetic model, which used post hoc concentration estimates for all subjects enrolled in the UNIVERSAL study. The model-predicted exposure of ALLO-647 increased with administered dose and appeared to be more than dose proportional. Exploratory endpoints included the evaluation of MRD by either next-generation sequencing (clonoSEQ, Adaptive Biotechnologies) or EuroFlow Next Generation Flow (Covance) with a minimum sensitivity of 10⁻⁵ in patients who achieved VGPR or better.
In situations where central MRD could not be obtained, local MRD calculations were considered. Other exploratory endpoints included the measurement of select cytokines and quantification of soluble BCMA at select timepoints. Statistical analysis This was an unplanned post hoc analysis. The dose escalation part of this phase 1 study was governed by a 3 + 3 design and 4 DLs, including 3 predefined DLs (DL1, DL2 and DL3) and 1 additional DL (DL4) or intermediate dose. Up to 6 subjects could be tested in each cohort at each dose level, with 1 cohort at DL1, 2 cohorts at DL2, 3 cohorts at DL3 and up to 3 cohorts at DL4 or an intermediate dose. To further characterize the dose of ALLO-647, higher doses of ALLO-647 were evaluated within the DL3 cohorts. No sample size calculation was used because the size of each dose cohort was defined by the nature of the 3 + 3 design. The backfill option added up to three subjects per cohort within a DL, to a maximum of six per cohort. All patients who enrolled were used for summaries related to subject demographics. All patients who received any amount of study drug (either ALLO-647 or ALLO-715) were included in safety analyses. All patients who received any amount of ALLO-715 were included in a modified intention to treat population for efficacy analyses. Descriptive statistics include medians with minima and maxima for continuous variables, and counts and percentages for categorical variables. Investigator-assessed responses are reported as ORRs with confidence intervals assessed by exact methods (Clopper–Pearson 95% CIs). DOR was estimated using Kaplan–Meier methods. Censoring of data for DOR was based on FDA censoring recommendations. Follow-up time was calculated using the reverse Kaplan–Meier method. Given the exploratory nature of the present study, no adjustments for multiple comparisons were made. Analyses were performed with SAS 8.2. All data presented utilize a data cutoff of 14 October 2021. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability This trial is currently ongoing. Subject to patient privacy and confidentiality obligations, access to patient-level data and supporting clinical documents may be available upon request and subject to review by the study sponsor on completion of the trial. Such requests can be made to Allogene Therapeutics, Inc., 210 East Grand Avenue, South San Francisco, CA 94080, USA or by email at info@allogene.com. A material transfer and/or data access agreement with the sponsor will be required to access the data. Change history 17 March 2023 A Correction to this paper has been published:
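As an aside on the exact method named in the statistical analysis above: a Clopper–Pearson interval for an ORR can be computed directly from the beta distribution. The sketch below is illustrative only; the counts are hypothetical placeholders, not trial results.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion,
    for example an overall response rate of k responders among n patients."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Hypothetical example: 14 responders among 24 evaluable patients.
lo, hi = clopper_pearson(14, 24)
print(f"ORR = {14/24:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```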
A team of medical specialists working at Memorial Sloan Kettering Cancer Center in New York has found that donated white blood cells can be used effectively as part of CAR T cell therapy to treat myeloma patients. In their study, published in the journal Nature Medicine, the group gave patients varying amounts of donated CAR T cells as a treatment option in a phase I clinical trial for myeloma, which occurs in plasma cells in bone marrow. Jennifer Brudno and James Kochenderfer with the U.S. National Institutes of Health published a News & Views article in the same journal issue outlining how chimeric antigen receptor (CAR) T cell therapies work and discussing the work done by the team in New York. CAR T cell therapy involves removing T cells from a cancer patient, modifying them genetically to more specifically target cancer cells, and then injecting them back into the same patient. CAR T cell therapy is highly effective for some patients. The downside is that it is time-consuming to grow the cells after alteration, ruling it out for some patients, and it can be expensive, as well. In this new effort, the researchers took a different approach, using previously altered cells donated by healthy volunteers. As part of the research, the team in New York ran a phase I clinical trial called UNIVERSAL to find out if the approach is feasible. The trial involved giving varying amounts of altered cultivated cells to 43 myeloma patients, each of whom had either not responded well to other treatments or who had relapsed. Also, an additional genetic modification was made to the donated cells to make them less likely to be targeted by the immune system. Each patient was also given antibodies to reduce the chances of the immune system targeting the donated cells. Testing showed that just over half of the volunteer patients (55%) had a positive response to the therapy. The therapy was also found to be both safe and feasible. The researchers suggest that their trial was a success and plan to continue their work with the goal of improving results.
10.1038/s41591-022-02182-7
Space
'Standard candles' illuminate the far side of the Milky Way
Paper: dx.doi.org/10.1038/nature13246 Journal information: Nature
http://dx.doi.org/10.1038/nature13246
https://phys.org/news/2014-05-standard-candles-illuminate-side-milky.html
Abstract Flaring and warping of the disk of the Milky Way have been inferred from observations of atomic hydrogen 1 , 2 but stars associated with flaring have not hitherto been reported. In the area beyond the Galactic centre the stars are largely hidden from view by dust, and the kinematic distances of the gas cannot be estimated. Thirty-two possible Cepheid stars (young pulsating variable stars) in the direction of the Galactic bulge were recently identified 3 . With their well-calibrated period–luminosity relationships, Cepheid stars are useful distance indicators 4 . When observations of these stars are made in two colours, so that their distance and reddening can be determined simultaneously, the problems of dust obscuration are minimized. Here we report that five of the candidates are classical Cepheid stars. These five stars are distributed from approximately one to two kiloparsecs above and below the plane of the Galaxy, at radial distances of about 13 to 22 kiloparsecs from the centre. The presence of these relatively young (less than 130 million years old) stars so far from the Galactic plane is puzzling, unless they are in the flared outer disk. If so, they may be associated with the outer molecular arm 5 . Main We derived the distances for the five Cepheids from near-infrared photometry obtained with the Infrared Survey Facility (IRSF) and we used radial velocities from the Southern African Large Telescope (SALT) to determine the kinematics (see Methods)—both telescopes are at the South African Astronomical Observatory (SAAO), Sutherland, in South Africa. From these data we were able to ascertain the population to which the Cepheids belong. The other 27 Cepheid candidates are either better assigned to a different class (such as anomalous Cepheids) or else their classification as classical Cepheids is uncertain. Table 1 lists the derived distances and various other parameters for the Cepheids. They are at about the distance and position at which a stream associated with the Sagittarius (Sgr) dwarf galaxy crosses the plane 6 , but the low radial velocity (mean heliocentric radial velocity after correction for the effects of stellar pulsation of V_R = 4 ± 8 km s⁻¹, see Table 1) is completely different from that expected for members of the Sgr dwarf stream (about 150 km s⁻¹) 6 , 7 and the Cepheids are clearly Galactic. They cannot be in the Galactic bulge because their distances from the centre put them far beyond the bulge and the velocity dispersion of the five stars, 16 ± 5 km s⁻¹ (much of which is observational error), is much smaller than expected for bulge objects (>60 km s⁻¹) 8 . Furthermore, these short-period Cepheids will be relatively young (about 100 million years (Myr) old), and, although there is a young component, including Cepheids 9 , in the innermost regions of the bulge, the bulk of the population is old (about 10 billion years (Gyr) old) 8 . Figure 1 shows the positions of the five stars in comparison to catalogued Cepheids. The various sources of uncertainty for the distances of the Cepheids are discussed in the Methods, but the reddening law and reddening corrections presented the biggest challenge and are the primary contributors to the error bars shown in the figure. Table 1: Data for individual Cepheids. Figure 1: Schematic of the Galaxy. The positions of the Cepheids (open circles with assumed maximum uncertainties of ±0.2 mag) are compared to the location of the H i gas.
The solid and dashed curves are model fits, S and N1, respectively, from ref. 1 at three times the HWHM above and below the Galactic plane. We note that figures 1 and 2 of ref. 2 show the H i flare in the relevant region extending up to about 2 kpc. The dark grey points are previously known Galactic Cepheids 10 and the approximate regions surveyed by OGLE (2° < |b| < 6°) are shown in light grey on either side of the plane. The positions of the Sun and Galactic centre are indicated by the star symbols. There is almost no information on gas or stars in the Galactic disk immediately behind the Galactic centre (within about 15° of it in Galactic longitude). The atomic hydrogen observations 2 on either side of the centre, but away from the central region itself, suggest that the gaseous disk of the Milky Way at l ≈ 0 is not warped but shows a marked flaring at Galactocentric radii (R, the distance from a star to the centre of the Galaxy) of 15 kiloparsecs (kpc) and more; we note that the details are model dependent. The thickness of the gaseous disk 1 , 2 increases from 60 parsecs (pc) half-width at half-maximum (HWHM) at R = 4 kpc to 2.7 kpc at R = 30 kpc and, especially at positive Galactic longitudes 1 , there is a marked increase from about 0.4 kpc at R = 15 kpc to about 1.0 kpc at R = 20 kpc. Therefore we found the Cepheids at exactly the distance predicted for this increase in disk thickness, as can be seen in Fig. 1. The absence of Cepheids nearer the Sun is consistent with the lower HWHM in these regions, whereas the absence of more distant Cepheids is partly due to the decreasing density at larger distances from the centre and partly the consequence of the Optical Gravitational Lensing Experiment (OGLE) observational cut-off. So the relatively narrow range of distances is consistent with our hypothesis that these stars are in the flared disk. In the Methods we also show that the number of Cepheids observed is consistent with expectations from a flared disk. Cepheids are usually associated with spiral arms and the distances of these five are similar to that expected for the far outer molecular spiral arm of the Galaxy 5 where it passes behind the central region of the Galaxy; the HWHM of this arm may be only about 0.6 kpc, in which case the Cepheids would be on its periphery. However, we note that the distances and thickness computed for this arm depend sensitively on the model adopted and are therefore uncertain 2 , 5 . It is instructive to examine why the outer regions of a galactic disk flare. In the inner parts of a galactic disk the gravitational force k(z) at height z perpendicular to the galactic plane is dominated by the strong concentration of stars there. As we move to greater galactocentric radii, however, the concentration of stars drops dramatically, and k(z) decreases and is increasingly dominated by the effects of dark matter. The flaring of the gas layer in the outer parts of our own and other galaxies has been attributed to this, and observations can in principle be used to study the distribution of dark matter in the halo of galaxies 11 . Studies of the flaring of H i gas in our Galaxy 1 suggest that in addition to an isothermal dark halo of 1.8 × 10¹² M⊙ (where M⊙ is the mass of the Sun), there is a self-gravitating exponential dark-matter disk (1.8–2.4 × 10¹¹ M⊙) as well as a dark-matter ring (13 kpc < R < 18.5 kpc and 2.2–2.8 × 10¹⁰ M⊙), which may represent the remains of a cannibalized dwarf galaxy 1 .
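One small piece of arithmetic recurs in what follows: converting between the HWHM of a vertical gas layer and an exponential scale height. Assuming an exponential profile exp(−|z|/h), HWHM = h ln 2, the relation used in the Methods, where HWHM values of 60 pc and 400 pc correspond to scale heights of about 86 pc and 577 pc. A one-line check:

```python
import math

def hwhm_to_scale_height(hwhm_pc: float) -> float:
    """Exponential vertical profile exp(-|z|/h): HWHM = h * ln 2."""
    return hwhm_pc / math.log(2)

# HWHM 60 pc -> h ~ 87 pc and HWHM 400 pc -> h ~ 577 pc,
# matching the ~86 pc and ~577 pc scale heights quoted in the Methods.
for hwhm in (60.0, 400.0):
    print(f"HWHM {hwhm:.0f} pc -> scale height {hwhm_to_scale_height(hwhm):.0f} pc")
```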
The most serious uncertainty in using gas as a tracer of the gravitational field arises from the need to adopt a model to derive the gas distribution. It is therefore highly desirable that the gravitational field in the outer Galaxy be investigated using young stars for which good distance estimates can be made. Classical Cepheid variables are by far the best stars for this purpose. Studies of diffuse groups of B stars 12 , which are even younger than Cepheids, are also consistent with a Galactic disk extending to 15 kpc and 20 kpc from the centre, at Galactic latitudes b = −4° and −7°, respectively. These stars are in the third Galactic quadrant near the place where the warp forces the Galactic plane to its greatest negative displacement from b = 0°. So although these young stars are displaced from b = 0°, they are in the local Galactic plane, and therefore tell us nothing about a flare. The collection of stars now known as the ‘Monoceros ring’ has been interpreted as evidence for a warped disk 13 , or alternatively as the remnant of a dwarf galaxy cannibalized by the Milky Way 14 . It is perhaps curious that the Cepheids discussed above are at the distance from the Galactic centre that one would expect the Monoceros ring to be, if indeed it were a complete circular ring around the Galaxy. The stellar population that makes up this so-called ring is generally considered to be old (>1 Gyr) and therefore different from the Cepheids (although there have been suggestions of an association with spiral arms 11 ). Models 15 indicate that the ages of the youngest Cepheids discussed here are less than 130 Myr. The disputed origin of the Monoceros ring 16 is beyond the scope of this Letter. Nevertheless, we note that simulations that suggest that the ring is a consequence of the interaction of the Sgr dwarf galaxy with the Milky Way 17 do not predict any significant density of stars in the ring at the distance of the Cepheids under discussion. Clearly, these Cepheids are just the tip of the iceberg. Further work on these stars and other ‘standard candles’ in the outer Galaxy will present new opportunities to probe the gravitational field and therefore the distribution of dark matter in the outer parts of our Galaxy. Methods Summary The Fourier coefficients listed for each light curve of the candidate Cepheids 3 were compared with those of classical Cepheids in the Large Magellanic Cloud (LMC) 18 to show that five of the stars with periods greater than one day fall clearly into the classical Cepheid class; we can therefore derive their distances from their luminosities. The distances and the interstellar absorptions were derived together using pairs of colours (V and I, or J and K_S). The results from the infrared magnitudes were adopted because the uncertainty due to interstellar reddening is significantly higher at shorter wavelengths. The detailed analysis indicates that the reddening law towards the Galactic centre is abnormal, as is well known 19 . Various sources of uncertainty on the distance moduli are discussed in detail in the Methods, but the reddening law and the exact values of the reddening are the primary contributors, which lead to our estimate of the upper limit to the uncertainty of ±0.2 magnitudes (mag). Radial velocities were determined by cross-correlation with a synthetic spectrum, and the zero point of the velocity scale was confirmed by observation of two stars with known velocities.
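To make the classification step just summarised concrete: the Fourier parameters are obtained by fitting a low-order harmonic series to the folded light curve and forming amplitude ratios and phase differences, which are then compared against the LMC loci as a function of period. The sketch below illustrates that computation on hypothetical photometry; it is not the OGLE pipeline, and the fitting conventions (cosine series, fourth order) are assumptions.

```python
import numpy as np

def fourier_parameters(t, mag, period, order=4):
    """Fit mag(t) = A0 + sum_k A_k cos(2*pi*k*t/period + phi_k) by linear
    least squares (t, mag as numpy arrays) and return the ratios and
    phase differences (R21, R31, phi21, phi31) used to separate classical
    Cepheids from anomalous and type II Cepheids."""
    omega = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    A, phi = [], []
    for k in range(1, order + 1):
        a, b = coeffs[2 * k - 1], coeffs[2 * k]
        A.append(np.hypot(a, b))
        phi.append(np.arctan2(-b, a))   # a*cos + b*sin = A*cos(x + phi)
    R21, R31 = A[1] / A[0], A[2] / A[0]
    phi21 = (phi[1] - 2 * phi[0]) % (2 * np.pi)
    phi31 = (phi[2] - 3 * phi[0]) % (2 * np.pi)
    return R21, R31, phi21, phi31
```

The derived (R_21, φ_21, R_31, φ_31) values, plotted against period, can then be compared with the published LMC sequences to assign a class.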
An approximate calculation can be made of the numbers of Cepheids expected by extrapolating from the solar neighbourhood and assuming a scale length of 3 kpc within the plane. If the Cepheids in the flared disk have the same scale height as the gas (577 pc) then we would expect about 18 to exist above a height of 1 kpc from the plane in the direction surveyed by OGLE. Given the uncertainties, this is consistent with the five Cepheids that we do find, particularly as we do not expect our sample to be complete. Online Methods In the following, we describe how the stars were identified as classical Cepheids by comparison with similar stars in the LMC 18 . We then go on to derive distances, taking into account the well known abnormal reddening law towards the Galactic centre 19 . Identifying classical Cepheids A problem when studying Cepheids is that it is not always easy from available photometry to distinguish classical Cepheids (type I) from other objects, for example, anomalous or type II Cepheids (BL Her stars, W Vir stars). This issue has been discussed in the context of distant Cepheids towards the anti-centre 20 (that is, the direction in the Galaxy that is opposite from the centre, viewed from our perspective). In the interior region of the Galaxy, and particularly in the direction of the bulge, this is likely to be a significant problem. Fortunately it is possible to distinguish between some classes of stars using the Fourier coefficients of their light curves, and these are listed for the OGLE 3 Cepheids towards the bulge. The main Fourier parameters for the I-band light curves of the five stars discussed here (Extended Data Table 1) can be compared with plots of the Fourier coefficient ratios R_21, R_31 and phase differences φ_21, φ_31 (where the subscripts denote the order of the cosine curve fit) against period for various classes of variable star in the LMC 18 , and this enables us to classify these five securely as classical Cepheids. Other possible Cepheids in the OGLE bulge catalogue have characteristics that suggest they belong to the anomalous Cepheid class, are possible type II Cepheids or else their classification is doubtful. Photometry The infrared photometry (Extended Data Table 2) was carried out using the 1.4-m IRSF and the SIRIUS camera at Sutherland 21 . Each of the targets was observed once on 2012 May 6 (Universal Time, ut) with an exposure time of 25 s (5 s times five exposures at dithered positions). The photometry was extracted using the Image Reduction and Analysis Facility (IRAF) package DAOPHOT and standardized by comparison with nearby stars from the 2MASS point source catalogue 22 . The uncertainties from the brightest to the faintest of the Cepheids range over 0.02–0.07 mag at J, 0.02–0.03 mag at H and 0.02–0.04 mag at K_S. These are significantly less than the uncertainties on the 2MASS measures, where they exist, for the same sources. We use these single-epoch J, H and K_S measurements to estimate the distance, noting that the near-infrared amplitudes of these short-period stars will be small (<0.1 mag; ref. 23 ). Distances and interstellar absorptions In general there are severe problems in dealing with observations of distant stars in the Galactic plane close to or beyond the centre because of the large and uncertain amounts of interstellar extinction in these directions.
Cepheids offer an important advantage in this regard in that distances can be derived from relations that allow the reddening and the distance to be determined together and unambiguously when observations in two colours are available—for example, V and I or J and K_S—provided the reddening law is known. Recent work 19 has indicated that the law of reddening towards the Galactic bulge differs from that adopted elsewhere 24 , and here we use the bulge reddening law of ref. 19, which fixes the ratios of the total extinctions A_I and A_K to the colour excesses E(V − I) and E(J − K_S), and which was found by the same method 25 . It should be noted that the relation in V and I may be somewhat more complex than the one given 25 . Adopting period–luminosity relations in V and I (as derived 26 by the OGLE group) and J and K_S (derived 27 for Cepheids with 0.4 < log P < 1.0) from the LMC, together with an LMC distance modulus of 18.5 mag and interstellar extinction values 28 of A_V = 0.22 mag, A_I = 0.13 mag, A_J = 0.06 mag and A_K = 0.02 mag for the LMC direction, we then have, for a Cepheid with a pulsation period P, one expression for the distance modulus μ_0 from each band (equations (1) and (2) for V and I; equations (3) and (4) for J and K_S), each of the form: apparent magnitude, corrected for interstellar extinction, minus the absolute magnitude given by the period–luminosity relation. Combining these pairs of equations with the reddening law given above leads to the two estimates of the distance modulus, (μ_0)_VI and (μ_0)_JK—derived, respectively, from equations (1) and (2) and equations (3) and (4)—and the interstellar extinction corrections, A_I and A_K (Extended Data Table 2). Uncertainties in the distances The LMC period–luminosity relations that we used are well defined 4 . Their absolute calibration is based on the LMC distance modulus, which has been determined in a number of ways. The uncertainty in the adopted value (about 0.04 mag or 2%) is negligible for our discussion. The mean OGLE VI magnitudes derived from their extensive observations have negligible error. Because of the small pulsation amplitudes of the Cepheids in the infrared region of the spectrum, the error on our J and K_S magnitudes is ≤0.05 mag. Possible metallicity effects on the period–luminosity relations have been much discussed 4 . Nothing is known about the metallicities of Cepheids behind the Galactic centre, but those of Cepheids in the outer disk of the Galaxy in the general direction of the anti-centre 29 , and at comparable distances from the centre to those discussed here, have a mean logarithmic iron-to-hydrogen ratio [Fe/H] = −0.60 ± 0.12, which is intermediate between those of the LMC and the Small Magellanic Cloud 30 . The difference in distance between the Small Magellanic Cloud and the LMC derived from J and K_S observations of Cepheids 31 agrees with values measured in other ways, without the application of metallicity corrections. Furthermore, Hubble Space Telescope parallaxes of Galactic Cepheids 32 (with [Fe/H] ≈ 0) agree with the LMC modulus adopted, without the application of any metallicity corrections (they give 18.52 ± 0.03 mag from V and I and 18.47 ± 0.03 mag from K_S). These various factors indicate that any residual metallicity effects on the distances derived for these Cepheids will be very small. A potential source of uncertainty is in the width of the period–luminosity relations. This width is due to the fact that, at a given period, a Cepheid brighter than the average is also bluer. This leads to the smaller-than-average V being compensated by a lower-than-average derived apparent absorption.
It is clear that the uncertainty in the modulus due to the spread in colour at a given period is 4 : σ(μ_0) = |β_1 − β_2| σ(V − I) (5), where β_1 is the colour coefficient of a (nearly dispersionless) period–luminosity–colour relation in (V, I) and β_2 is the ratio of total to selective absorption. For the Cardelli 24 law of reddening, which is often used, β_1 ≈ β_2. Thus, any uncertainty due to the width of the period–luminosity relation in our case comes from the change in β_2 for the bulge, which is 0.33. The scatter in V − I at a given period 33 is 0.08 mag, which would result in σ(μ_0) = 0.03 in equation (5). In the infrared, the widths of the period–luminosity relations are smaller and will not introduce significant uncertainty. Interstellar reddening is a source of error and, as pointed out above, the evidence points to an abnormal reddening law in the direction of these stars. The uncertainty in this reddening law in J and K_S is small; this, together with the low extinction in the infrared, leads to only a small uncertainty in the distance modulus (0.003 mag for the most heavily reddened star). Even if, contrary to the evidence, we used the Cardelli 24 law of reddening, the change in distance moduli would not affect our conclusions. In that case, the modulus of our most reddened star would decrease by 0.28 mag (a change of distance from 24.4 kpc to 21.4 kpc) and the moduli of the other stars would decrease by an average of 0.12 mag (1.25 kpc). Owing to the greater absorption in V and I and the greater uncertainty in the reddening law, the uncertainties in the distances derived from those bands are greater. The uncertainty in the reddening coefficient leads to an uncertainty of 0.25 mag in the modulus of the most heavily reddened star and a mean of 0.12 mag in the other cases. If, contrary to expectations, a Cardelli reddening law had been adopted, the modulus of the most heavily absorbed star would have decreased by 0.90 mag and those of the others by a mean of 0.42 mag. Clearly, reddening uncertainties in V and I are much more important than in J and K_S. Summary of adopted distances and their uncertainties In the main paper we adopt the distances derived from the J and K_S magnitudes (see Table 1), because they are the more accurate values. The above discussion indicates that the errors in those distance moduli are: 0.04 mag from the absolute calibration; ≤0.05 mag due to the pulsation amplitude; and negligible amounts from the period–luminosity relation width, metallicity effects and uncertainties in the Nishiyama reddening law. If the Cardelli reddening law were applied to these stars their moduli would be reduced by a mean of 0.15 mag, but a change this large seems to be ruled out by observations. The systematic uncertainties overwhelm the rather small statistical errors, so we do not attempt to assign individual errors to distances. We consider 0.2 mag to be a very conservative estimate of the total error of an individual modulus (random plus systematic) and this is what is illustrated in Fig. 1, but we fully expect the errors to be less than this. In the case of the moduli from V and I, the main uncertainty is from the coefficient of the reddening law, and complications in deriving this have been noted 25 . We simply mention here that with the adopted law the VI moduli are 0.20 mag larger than the JK_S values adopted, whereas with a Cardelli law they would be 0.31 mag smaller, suggesting that a less extreme variation from the Cardelli law applies to these stars.
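To illustrate the two-colour machinery numerically, the sketch below solves simultaneously for the reddening and the true modulus from J and K_S photometry, then converts moduli to distances. The period–luminosity coefficients and the reddening ratio used here are hypothetical placeholders (the paper's actual relations come from refs 19, 26 and 27), but the modulus-to-distance conversion reproduces the shift quoted above, in which a 0.28 mag decrease moves a star from 24.4 kpc to 21.4 kpc.

```python
import numpy as np

# Hypothetical placeholder coefficients of the form M = a*(log10 P - 1) + b.
# The paper's actual PL relations (refs 26, 27) and the Nishiyama
# reddening ratio (ref. 19) should be substituted here.
PL_J = (-3.1, -5.3)
PL_KS = (-3.3, -5.7)
K_RATIO = 0.5          # assumed ratio A_K / E(J - K_S)

def true_modulus(m_J, m_KS, period_days):
    """Solve for colour excess, extinction and true distance modulus
    from two-band photometry, as in the two-colour method above."""
    logp = np.log10(period_days)
    M_J = PL_J[0] * (logp - 1.0) + PL_J[1]
    M_KS = PL_KS[0] * (logp - 1.0) + PL_KS[1]
    e_jks = (m_J - m_KS) - (M_J - M_KS)   # colour excess E(J - K_S)
    a_ks = K_RATIO * e_jks                # extinction A_K
    return (m_KS - M_KS) - a_ks           # true modulus (mu_0)_JK

def modulus_to_kpc(mu0):
    """Distance in kpc from a true distance modulus."""
    return 10.0 ** ((mu0 + 5.0) / 5.0) / 1e3

# Check of the quoted sensitivity: a 0.28 mag smaller modulus moves a
# star from 24.4 kpc to about 21.4 kpc.
mu = 5.0 * np.log10(24.4e3) - 5.0
print(round(modulus_to_kpc(mu - 0.28), 1))   # -> 21.4
```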
Radial velocities Our spectra ( Extended Data Table 3 ) were obtained with the Robert Stobie spectrograph on the SALT. A volume phase holographic grating of 1,300 lines mm −1 was used to cover the wavelength range 7,800–9,600 Å, putting the Ca ii triplet on the middle charge-coupled device of the detector. The resolution is 3.4 Å with a projected slit width of 1.5 arcsec. Radial velocities were obtained by cross-correlation of the spectra with a synthetic spectrum taken from the library assembled for the RAVE experiment 34 . Two stars with known radial velocities 35 were used as a check on the radial velocity zero point. The measured velocities for these two stars are 34.7 km s −1 and −12.0 km s −1 , respectively. Mean radial velocity errors due to photon statistics are 10 km s −1 , so the radial velocity zero point seems to be secure. The measured heliocentric velocities have been corrected for stellar pulsation adopting a standard velocity curve for short-period Cepheids with a semi-amplitude of 20 km s −1 (for example, figure 6 of ref. 36 ). The mean heliocentric velocity after correction is 4 ± 8 km s −1 compared with 13 ± 7 km s −1 before correction. This indicates that uncertainties in the correction will not affect our conclusion regarding the mean radial outward velocity of this group of stars and that the error given in the main text is realistic. Galactic structure In the main paper and in the following we adopt a distance from the Sun to the Galactic centre of 8.5 kpc and a flat rotation curve with a velocity θ = 220 km s −1 , to allow for a direct comparison with models describing the H i gas behaviour in the outer Galactic disk 2 . Plausible changes 37 in these values will not affect our conclusions. The heliocentric distances of the Cepheids ( D values in Table 1 ) are comparable with that of the Sgr dwarf galaxy (about 24 kpc), and a tidal stream from this system crosses the Galactic plane, behind the Galactic centre, close to the Galactic bulge at positive Galactic longitude. RR Lyrae variables belonging to this stream have recently been found in our field 38 at a distance of around 27 kpc. The possibility that the Sgr system contains stars as young as about 100 Myr has also been raised 39 . This is the expected age of short-period classical Cepheids 15 , so we cannot rule out the possibility that our stars belong to the Sgr system on the basis of photometry alone, and kinematic information is essential. Because the heliocentric radial velocities of Sgr dwarf members are about 150 km s −1 (refs 6 and 7 ), it is clear from the velocities in Table 1 that our Cepheids belong instead to the far outer parts of the Galactic disk. The possible association of the Cepheids with the far outer molecular spiral arm 5 was raised in the main paper. This arm lies at positive Galactic longitudes (in the first quadrant). At l = 13°.25, the lowest longitude at which the carbon monoxide (CO) was measured, the estimated distance 5 is D = 23 kpc, corresponding to R = 14.5 kpc; the exact distances are sensitive to the kinematic model. These are somewhat less than the D and R values in Table 1 . Our values (in Table 1 ) of course refer to a region where there is no information from the gas. Adopting an alternative kinematic model 2 with elliptical orbits will lead to larger derived distances of the gas. The five Cepheids are concentrated in a relatively small part of the area covered by the OGLE survey at positive longitudes. 
It is possible that variable interstellar absorption over the field could account for this. However, it seems more likely that it is due to real clumping, such as is common for young objects in spiral arms. The far outer molecular arm has not yet been seen emerging from the Galactic centre region at negative longitudes. We note that our stars are at Galactocentric distances comparable to, but greater than, those of a small number of masers defining an outer arm in the general direction of the anti-centre 40 . The Cepheids whose radial velocities were studied in the general direction of the anti-centre are also at somewhat shorter distances (mean R = 12.9 kpc) 20 . The Sun–Cepheid–Galactic centre angle is small for all of our stars (0° to 2.7°, as measured in the Galactic plane). Thus, the corrected radial velocity ρ primarily measures motion that is radial from the Galactic centre, and the five Cepheids give a mean ρ = 23 ± 9 km s⁻¹, indicating a significant mean outward radial motion. This would not be inconsistent with a value of 9 km s⁻¹ predicted in the model 2 . It should also be noted that in the general solar neighbourhood systematic deviations from circular motion of about 10 km s⁻¹ are known for OB stars and Cepheids in regions of around a kiloparsec radius 41 . Therefore, our result does not necessarily imply any significant general deviation from circular orbits. The uncertainty in this conclusion is related to the small number of objects involved rather than to the uncertainties in estimating mean Cepheid velocities. In the general anti-centre region at somewhat smaller Galactocentric distances no evidence was found 20 for a general outward velocity, though curiously the three Cepheids in that study 20 with Galactic longitudes within |Δl| < 10° of the anti-centre have a mean positive radial velocity of 10 ± 4 km s⁻¹. Despite the small number of our stars, the result would be in conflict with an outward motion of the local standard of rest of about 14 km s⁻¹, which has sometimes been suggested 42 to explain the Galactic asymmetry of the H i velocities. Number of Cepheids observed and expected With such a small number of Cepheids in our sample it is impossible to carry out a detailed study of the space distribution at their distance from the Galactic centre. However, the following rough calculation shows that their presence far from the Galactic plane requires the presence of a flared disk. Consideration of the number of Cepheids in the solar neighbourhood suggests that the expected number of such stars in a column perpendicular to the Galactic plane and with a cross-sectional area of one square kiloparsec is about 60. With a scale height of 86 pc, as for the gas 2 (HWHM 60 pc), and taking into account that the area on the Galactic plane between D = 20 kpc and D = 30 kpc covered by the OGLE survey is slightly less than 44 kpc², the number of Cepheids expected in that survey with z > 1 kpc is approximately 0.008. This is for solar neighbourhood densities. With a disk scale length of 3 kpc, the drop in density from 8.5 kpc to 15 kpc is a factor of 9 and the expected number of Cepheids is about 0.001 (that is, for an unflared disk Cepheids are not expected). However, at the distance of our Cepheids the scale height of the gas is about 577 pc (HWHM 400 pc) and if the Cepheids follow the gas we predict the existence of about 18 in the relevant region.
This calculation is obviously quite uncertain, but it is sufficient to show that our conclusion that these Cepheids are in the outer regions of a flared disk of scale height similar to that of the gas is entirely plausible. We see no other satisfactory explanation for these stars. We cannot rule out the possibility that a few more of the OGLE variables are classical Cepheids. Owing to the small numbers, the likely effects of non-uniform interstellar absorption and the fact that young objects are expected to be found in groups rather than uniformly distributed over the field, it is not feasible to draw any strong conclusion from the fact that these Cepheids are confined to the positive longitude side of the OGLE field or that four of the five stars are at northern latitudes, despite the fewer OGLE fields there. Change history 09 June 2014 Typos in refs 3, 18 and 26 were corrected in the HTML on 9 June 2014.
South African astronomers have discovered the very first known stars in the flared disk of our Milky Way Galaxy. These stars are situated on the far side of our Galaxy, 80 thousand light years from the Earth and beyond the Galactic Centre. The discovery is important because stars like these will allow astronomers to test theoretical ideas about how galaxies, like the Milky Way in which we live, formed. In particular these stars, which are close to the effective edge of the Milky Way, will help astronomers trace the distribution of the very mysterious dark matter. Dark matter is known to be an important component of all galaxies, but its nature and distribution remain elusive. The five stars involved in this discovery are very special ones, known as Cepheid variables, whose brightness changes regularly on a cycle time of a few days. These Cepheid variables have characteristics that allow their distances to be measured accurately. A team of astronomers led by Prof. Michael Feast used observations made with the Southern African Large Telescope (SALT) and the Infrared Survey Facility (IRSF), both at the South African Astronomical Observatory's (SAAO) site at Sutherland in the Northern Cape, to determine the distances of these stars and hence their locations within our Galaxy. The majority of stars in our Galaxy, including our own sun, are distributed in a flat disk (see illustration). Early in the 21st century radio astronomers discovered that hydrogen gas, of which the Galaxy contains a great deal, flared away from the disk at large distances from the Galactic centre, but until now no one knew that stars did the same thing. An infrared image (left) of the Cepheid named OGLE-BLG-CEP-32 and the stars which surround it, together with its SALT spectrum (insert right). These were used to show that the Cepheid was in the flare of the Galactic disk. Credit: Whitelock et al The team who made the discovery are from South Africa and Japan: Prof Michael Feast (University of Cape Town – UCT, SAAO), Dr John Menzies (SAAO), Dr Noriyuki Matsunaga (the University of Tokyo, Japan) and Prof Patricia Whitelock (SAAO, UCT). These results will be published in detail on 15 May, in the international journal Nature.
dx.doi.org/10.1038/nature13246
Medicine
Sit-stand office desks cut daily sitting time and appear to boost job performance
Charlotte L Edwardson et al. Effectiveness of the Stand More AT (SMArT) Work intervention: cluster randomised controlled trial, BMJ (2018). DOI: 10.1136/bmj.k3870 Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj.k3870
https://medicalxpress.com/news/2018-10-sit-stand-office-desks-daily-boost.html
Abstract Objectives To evaluate the impact of a multicomponent intervention (Stand More AT (SMArT) Work) designed to reduce sitting time on short (three months), medium (six months), and longer term (12 months) changes in occupational, daily, and prolonged sitting, standing, and physical activity, and physical, psychological, and work related health. Design Cluster two arm randomised controlled trial. Setting National Health Service trust, England. Participants 37 office clusters (146 participants) of desk based workers: 19 clusters (77 participants) were randomised to the intervention and 18 (69 participants) to control. Interventions The intervention group received a height adjustable workstation, a brief seminar with supporting leaflet, workstation instructions with sitting and standing targets, feedback on sitting and physical activity at three time points, posters, action planning and goal setting booklet, self monitoring and prompt tool, and coaching sessions (month 1 and every three months thereafter). The control group continued with usual practice. Main outcome measures The primary outcome was occupational sitting time (thigh worn accelerometer). Secondary outcomes were objectively measured daily sitting, prolonged sitting (≥30 minutes), and standing time, physical activity, musculoskeletal problems, self reported work related health (job performance, job satisfaction, work engagement, occupational fatigue, sickness presenteeism, and sickness absenteeism), cognitive function, and self reported psychological measures (mood and affective states, quality of life) assessed at 3, 6, and 12 months. Data were analysed using generalised estimating equation models, accounting for clustering. Results A significant difference between groups (in favour of the intervention group) was found in occupational sitting time at 12 months (−83.28 min/workday, 95% confidence interval −116.57 to −49.98, P=0.001). Differences between groups (in favour of the intervention group compared with control) were observed for occupational sitting time at three months (−50.62 min/workday, −78.71 to −22.54, P<0.001) and six months (−64.40 min/workday, −97.31 to −31.50, P<0.001) and daily sitting time at six months (−59.32 min/day, −88.40 to −30.25, P<0.001) and 12 months (−82.39 min/day, −114.54 to −50.26, P=0.001). Group differences (in favour of the intervention group compared with control) were found for prolonged sitting time, standing time, job performance, work engagement, occupational fatigue, sickness presenteeism, daily anxiety, and quality of life. No differences were seen for sickness absenteeism. Conclusions SMArT Work successfully reduced sitting time over the short, medium, and longer term, and positive changes were observed in work related and psychological health. Trial registration Current Controlled Trials ISRCTN10967042. Introduction A wealth of epidemiological evidence shows that sedentary behaviour is associated with an increased risk of chronic disease (type 2 diabetes, cardiovascular disease, some cancers) and mortality, often independently of body mass index (BMI) and physical activity, 1 2 3 4 poor mental health, 5 6 and a lower quality of life. 7 Office workers are one of the most sedentary populations, spending 70-85% of time at work sitting. 8 9 It has also been reported that over a third of their total sitting time at work is accumulated in bouts of prolonged sitting (>30 minutes). 8
Occupational sedentary behaviour specifically has been associated with an increased risk of diabetes and mortality 10 and musculoskeletal problems such as neck and shoulder pain, 11 as well as being detrimental for important work related outcomes such as engagement 12 and presenteeism. 13 Research on outcomes such as work engagement and presenteeism is, however, limited. These links between sedentary behaviour and health and work related outcomes are important because the estimated costs of presenteeism and absenteeism in the United Kingdom are reported to be more than £30bn ($39bn; €34bn), with presenteeism costing over twice as much as absenteeism. 14 More positively, reductions in sitting and breaking up sitting through standing and walking in acute experimental settings have led to improvements in important cardiometabolic markers of health such as glucose and insulin levels and blood pressure, 15 16 17 18 19 20 21 22 and feelings of fatigue and vigour. 23 24 In response to this evidence, interventions to reduce sitting time in the workplace have received increasing attention in recent years. 25 These have focused on numerous strategies, including physical changes to the workplace, such as providing height adjustable desks to enable sitting or standing, pedalling workstations, treadmill desks, policy changes, information provision, counselling, and computer prompts. 25 While positive findings were observed for some strategies in terms of reducing sitting time, particularly the provision of height adjustable desks, the quality of evidence was considered low for most studies owing to small, underpowered studies and studies at high risk of bias. 25 Furthermore, interventions have typically been evaluated over the short term, so knowledge on longer term effectiveness is lacking. Although some studies have examined the impact of sitting reduction interventions on work related outcomes such as job performance and productivity, 25 26 presenteeism, 26 and absenteeism, 26 27 it is difficult to draw conclusions across these studies owing to the limitations in study designs. The Stand Up Victoria study was one recent example that addressed these limitations. This was a multicomponent intervention in Australia, and effectiveness was tested within a cluster randomised controlled trial over 12 months. 28 Components comprised a group based workshop, feedback on sitting behaviour, provision of a height adjustable desk attachment, goal setting, and ongoing support for three months in the form of emails or individual coaching sessions. The intervention was successful in reducing daily sitting and sitting at work 29 and led to small improvements in glucose levels and cardiometabolic risk. 30 However, high quality designs remain scarce and studies in the UK are lacking. The Stand More At Work (SMArT Work) intervention was designed in response to this need and was developed using guidance from the Behaviour Change Wheel 31 (a framework for designing interventions) after formative research with office workers. 32 We undertook a cluster randomised controlled trial to test the impact of the SMArT Work intervention over the short (three months), medium (six months), and longer term (12 months) in a sample of office workers working within the English National Health Service, the largest employer in the UK. The primary objective was to test whether the SMArT Work intervention led to changes in occupational sitting time at 12 months compared with control.
Methods Study design The study is reported according to the CONSORT statement for cluster randomised controlled trials. This study was a cluster randomised controlled trial with follow-up measures at 3, 6, and 12 months. The full trial protocol has been published. 33 Randomisation occurred at the office group level to reduce the risk of contamination. Using computer generated lists, a statistician randomised office groups (clusters) 1:1 to either intervention or control group, stratified by cluster size (≤4 and >4 participants) with a block size of six (an illustrative sketch of such an allocation appears below). Randomisation was performed in batches after participant clusters had completed their baseline measures. Team members who took measurements were blinded to group randomisation. The team leads could not be blinded as they were responsible for study coordination, including delivery of the desks and intervention components. Team leads had no involvement in data processing and analysis. Recruitment took place between October 2015 and June 2016, with baseline data collection between November 2015 and June 2016 and follow-up data collection between March 2016 and June 2017. The study was coordinated from the Leicester Diabetes Centre, University Hospitals of Leicester NHS Trust, and all data were collected on site at the University Hospitals of Leicester NHS Trust. Setting and participants The participants were recruited from the University Hospitals of Leicester NHS Trust. This trust consists of three hospitals across Leicester—Leicester Royal Infirmary, Leicester General Hospital, and Glenfield Hospital. All participants provided informed consent on entering the study. During the grant application process, managers across the trust were approached to gauge interest in their team taking part in the study (this information was used to generate the original sample size calculation). Once the study had started, these managers were approached again, as well as the staff within their teams. Alongside this, we used other methods of recruitment. The study was included in the chief executive's monthly e-newsletter, advertised on the University Hospitals of Leicester NHS Trust staff intranet, and promoted by posters displayed in staff rooms across the hospital sites. To promote the study to staff members and answer any questions they might have, we set up advertisement stands, manned by a member of the research team, in the canteens on each hospital site over lunch times. Any interested teams and individual staff members contacted the study team to obtain a participant information sheet outlining the study requirements and a reply slip used to assess eligibility. Staff members who responded were asked to encourage their colleagues to join the study. We contacted eligible participants to organise a convenient date to consent them into the study and take their baseline measurements. Measurements were carried out in a private room at the participants' place of work. Staff aged 18-70 years were eligible if they were office based (self reported and confirmed at a visit by a researcher), spent most of their workday sitting (self reported ≥75%, excluding mandatory breaks), worked at least 0.6 full time equivalent, worked at the same desk for at least three days a week, and were capable of standing.
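Returning to the permuted-block randomisation described under Study design above, the following is a minimal sketch of such an allocation for one stratum. The statistician's actual lists are not published, so the seed, structure and function names here are illustrative assumptions only.

```python
import random

def permuted_block_allocation(n_clusters: int, block_size: int = 6, seed: int = 1):
    """Illustrative 1:1 permuted-block allocation for one stratum
    (for example, clusters of <=4 or >4 participants), block size six."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_clusters:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # each block contains exactly 3 of each arm
        allocations.extend(block)
    return allocations[:n_clusters]

# Example: allocate 19 clusters within the ">4 participants" stratum.
print(permuted_block_allocation(19))
```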
Participant personal and anthropometric measures Information on age, sex, ethnicity, smoking status, current job role, pay grade, and working hours was collected by questionnaire. Body weight and body fat (Tanita SC-330ST, Tanita, West Drayton, UK), height (Leicester Height Measure, Seca, Birmingham, UK), and waist circumference (midpoint between the lower costal margin and iliac crest) were measured to the nearest 0.1 kg, 0.1%, 0.5 cm, and 0.5 cm, respectively. Arterial blood pressure was measured in the sitting position (Omron Healthcare, Henfield, UK); three measurements were obtained and the average of the last two used. Outcome measures Primary and secondary outcomes were assessed at baseline and at 3, 6, and 12 months. Primary outcome The primary outcome was change in occupational sitting time measured by the activPAL micro (PAL Technologies, Glasgow, UK). The activPAL is a small accelerometer worn on the thigh, which determines body posture—that is, sitting/lying, and upright (with and without stepping). This device is increasingly used in sedentary behaviour research 34 and has been shown to be highly accurate in measuring sitting, standing, and stepping and in detecting reductions in sitting. 35 36 37 We asked participants to wear the device continuously for seven consecutive days on the midline anterior aspect of the right thigh. The device was initialised using the manufacturer's software (activPAL3 Professional Research Edition; PAL Technologies, Glasgow) with default settings. The device was waterproofed with a nitrile sleeve and Hypafix Transparent (BSN medical, Hull, UK) dressing and secured to the thigh with a piece of Hypafix Transparent dressing. We asked participants to complete a log of sleep and wake times while wearing the device, removal times of the device, and the start and end times of each workday. Devices were collected in person, and a validated algorithm in STATA (StataCorp) was used to download and process data. This has been described elsewhere, 38 but in brief the algorithm uses the activPAL eventsXYZ.csv files to isolate waking hours from sleeping (time in bed), prolonged periods of non-wear, and invalid data. The processed data were checked visually (by creating heatmaps of the data, as described elsewhere 34 ) for any occasions where the algorithm incorrectly coded sleep and waking behaviour (eg, where wake and sleep times were different from other days of data—ie, looked very early or very late compared with other days), and on such occasions we referred to the self reported log and, if necessary, corrected the data. This algorithm has previously shown a high level of agreement with diary reported wear times during waking hours (κ>0.8 for 88% of participants; median κ=0.94). 34 To isolate data for work hours, we matched the self reported work times collected in the log with those of the device data (in the processed eventsXYZ file). We included events (ie, bouts of sitting, standing, stepping) that crossed the self reported start and end of work times within the work hours data if 50% or more of the event was within the period of interest. 34 Workplace data were considered valid if the device was worn for 80% or more of self reported work hours 39 and participants provided at least one valid workday. 29 To minimise the possibility of reactivity, we discarded the first day of collected data from the analysis.
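The two inclusion rules just described (an event counts towards work hours if at least 50% of it overlaps the self reported work period, and a workday is valid if the device was worn for at least 80% of work hours) can be sketched as follows. The event and wear-log structures are simplified placeholders, not the study's STATA code; times are minutes from midnight.

```python
def overlap_minutes(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in minutes."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def events_in_work_hours(events, work_start, work_end):
    """Keep (start, end, behaviour) events with >=50% of their
    duration inside the self reported work period."""
    kept = []
    for start, end, behaviour in events:
        if overlap_minutes(start, end, work_start, work_end) >= 0.5 * (end - start):
            kept.append((start, end, behaviour))
    return kept

def valid_workday(worn_intervals, work_start, work_end):
    """A workday is valid if the device was worn for >=80% of work hours."""
    worn = sum(overlap_minutes(s, e, work_start, work_end) for s, e in worn_intervals)
    return worn >= 0.8 * (work_end - work_start)
```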
Secondary outcomes Physical activity and other sedentary behaviour variables —Other variables of interest calculated from the activPAL data included daily sitting time along with prolonged sitting time (≥30 minutes), standing time, and stepping time (light and moderate to vigorous), with outcomes calculated during work hours and daily (ie, across all waking hours). For the daily data we defined a valid day as a day with less than 95% spent in any one behaviour (eg, standing or sitting), more than 500 steps, and 10 hours or more of data from waking hours. To be included in the analysis of daily data we required participants to have at least one valid day. Alongside the activPAL, participants also wore the ActiGraph Link accelerometer (ActiGraph, Pensacola, FL) on their non-dominant wrist continuously for seven days to capture time spent in moderate to vigorous physical activity at work and daily. ActiGraph files were processed with the R package GGIR version 1.5-10. 40 41 We excluded files from all analyses if the post-calibration error was greater than 0.02 g (gravity) 42 or if fewer than 10 hours of wear time were recorded during the 24 hour day of interest. Detection of non-wear has been described in detail previously (see 'Procedure for Nonwear Detection' in the paper's supplementary document). 40 Briefly, non-wear is estimated based on the standard deviation and value range of each axis, calculated for 60 minute windows with 15 minute moving increments. If for at least two of the three axes the standard deviation was less than 13 mg (milligravity) or the value range was less than 50 mg, we classified the time window as non-wear. We used a threshold of 100 mg or more to calculate the time accumulated in moderate to vigorous physical activity at work and daily. 43 Musculoskeletal health —The Standardised Nordic Questionnaire was used to assess musculoskeletal problems in nine body areas (neck, shoulder, upper back, elbow, wrist, lower back, hip, knee, and ankle) over the past week and year. 44 Work related measures —A questionnaire captured several measures. Work engagement was assessed using a nine item questionnaire with a 7-point Likert scale 45 ; work engagement is defined by high levels of personal energy, where a worker wants to put time and effort into their work (vigour and vitality) and sees their work as significant (dedication) and interesting (absorption). 45 Work engagement is an important indicator of both productivity and workforce wellbeing. 46 Job satisfaction 47 and performance 48 were measured using single item questions on a 7-point Likert scale. Occupational fatigue was assessed using the Need for Recovery Scale, an 11 item questionnaire with yes or no options for each question. 49 The need for recovery refers to the extent to which the work task induces a need to recuperate from work induced effort. The scale assesses the severity and duration of symptoms indicating that the respondent is not fully recovered from the effects of sustained effort during the working day and has reduced motivation for activities in the evening with family or friends. 50 Fatigue at work has been associated with stress and burnout, which in turn can lead to reductions in productivity and higher absence due to sickness. 50
Sickness presenteeism, often defined as going to work despite illness, was assessed using two questionnaires: the eight item Work Limitations Questionnaire 51 measured the degree to which health problems interfered with specific aspects of job performance and the productivity impact of these work limitations (presenteeism). It asks employees to rate their level of difficulty (or ability) in performing in eight areas of work in the past two weeks: for example, concentrating on work, speaking with people, handling the workload, and finishing on time. Responses are combined into four work limitation scales: time management, physical demands, mental and interpersonal, and output demands. The Work Productivity and Activity Impairment Questionnaire (WPAI-GH 2.0) 52 measured absenteeism (percentage of work time missed due to health problems in the past seven days), presenteeism (percentage of impairment experienced while at work in the past seven days due to health problems), overall impairment (combination of absenteeism and presenteeism), and activity impairment (percentage of impairment in daily activities as a result of health problems in the past seven days). This latter questionnaire was only used for the cost effectiveness analysis and will not be reported in this article. Data on sickness absence from work were obtained by self report (previous three months) and from organisational records for the 12 months before the start of the study and for the 12 months' duration of the study. Cognitive function —Cognitive function was assessed using computerised and paper based tasks. A touch screen laptop was used for the Digit Symbol Substitution Test, 53 which assesses processing speed, attention, and concentration, and the Stroop Colour-Word Test, which assesses executive function. 54 Paper based tasks included the Hopkins Verbal Learning Test to assess memory recall 55 and verbal fluency. 56 Mood and affective states —Mood and affective states were assessed using the Multiple Affect Adjective Check List-Revised. This check list measures anxiety, depression, hostility, and positive and sensation seeking affects. 57 It measures affect both as a temporary state (today) and, more generally, as a disposition (generally). Quality of life —The World Health Organization Quality of Life-BREF was used to measure quality of life. This questionnaire includes four domains: physical health, psychological health, social relationships, and environment. 58 Intervention group The intervention group received the SMArT Work intervention for the length of the randomised controlled trial (12 months). SMArT Work is grounded in several behaviour change theories (social cognitive theory, 59 organisational development theory, 60 habit theory, 61 self regulation theory, 62 and relapse prevention theory 63 ), and it was implemented through the Behaviour Change Wheel and the associated capability, opportunity, motivation, and behaviour (COM-B) approach. 31 The intervention design and behaviour change strategies take into account the organisational environment and social norms, and the individual and interpersonal factors that influence sitting behaviour at work. Supplementary table 1 provides the timeline of these strategies and supplementary figure 1A includes the logic model of the intervention. Organisational strategies —We sought management buy-in by meeting with the chief executive of the hospital trust.
He showed his support for the study and the intervention through his regular e-newsletter sent to all staff, and through members of the Clinical Management Groups, who were also asked to show support (ie, encourage involvement and allow time for intervention activities) and to filter this message down to the other management team leads. Environmental strategies —After attendance at a seminar (see individual and group strategies for more information), participants were provided with a height adjustable desk or desk platform to enable them to sit or stand to work. They were given a choice between a full sized electric desk (twin leg single step stand desk 1200×800, MACOI, Kimbolton, UK), which allows the desk top to move up and down, or a choice of two sizes of desk platform (Pro Plus 30 or Pro Plus 48, VARIDESK, TX), which sits on the existing desk and allows the computer screen and keyboard to be moved up and down. This choice allowed flexibility for different office set-ups and avoided testing the effectiveness of a specific type of desk rather than the height adjustable desk concept. We provided a brief training session on how to use the desk or platform and on the ergonomic set-up. A leaflet was also provided to reinforce these messages. Individual and group strategies —An initial group based education seminar (around 30 minutes’ duration) was delivered, which covered the health consequences of sitting and the benefits of reducing and regularly breaking up sitting. These messages were also reinforced in a leaflet provided at the end of the seminar. Participants were given their baseline results from the activPAL device at the end of the seminar, which informed them of their sitting (total and prolonged), standing, and stepping time at work, and overall daily levels. They were then provided with an action plan and goal setting booklet and encouraged to set a goal around sitting less at work based on their activPAL feedback and to create an action plan for how this would be achieved. We provided participants with a DARMA cushion (Darma, CA, USA) to enable them to track and self monitor their sitting time (total and prolonged) more regularly and to be prompted (in the form of a vibration) to regularly break up sitting. This cushion, which can be placed on an office chair, is approximately 2.5 cm thick and uses Bluetooth to sync data with a mobile phone app to provide the participant with real-time feedback. The frequency of the vibration prompt is a user defined setting (eg, it can be set to vibrate every 30 or 45 minutes). Every few months the participants received posters with either educational or motivational messages. To provide ongoing support to participants, a trained member of the research team offered brief (about 15 minutes) coaching sessions, either face-to-face or by telephone, at month 1 and every three months thereafter to discuss progress, review goals and action plans, and discuss personal, social, or group barriers and any benefits experienced. After each visit for follow-up measurements, the participants were provided with their results from the activPAL device, and these were compared with the baseline data. This allowed the participants to review their progress and goals. Control group Participants in control office clusters were not given any lifestyle advice, guidance, or results from the activPAL device. However, they received the results of health measures (eg, weight, blood pressure) taken at each time point (the intervention participants also received their own results).
Other than this, these participants continued with usual practice for the 12 month study period. Statistical analysis Sample size After starting recruitment procedures, we amended our sample size calculation because of differences in office cluster sizes from our original plan. The study funder and sponsor agreed this amendment. The office cluster sizes were different because during the grant application process we approached managers within the hospital trust for their interest, and the original sample size was based on the department sizes of the managers who had expressed an interest in taking part. On commencement of the trial and advertising of the study, which was over two years after this initial contact, not all managers and staff within these initially identified potential clusters volunteered, but staff within other departments not originally identified did volunteer. This resulted in different cluster sizes. The published protocol 33 outlines the original sample size of 238 participants from 14 clusters. The average cluster size was smaller than originally planned. After completion of recruitment, 37 office clusters had been recruited, with an average office cluster size of 4 (range 1-16) office workers. This final sample size resulted in more than 90% power to detect a reduction of 60 min/workday (SD 60 min/workday 64 ) in occupational sitting time between the groups, with a 25% rate of drop-out and non-compliance with the primary outcome taken into account. A 60 min/workday difference was chosen after consideration of the published literature at the time of designing the study. 1 64 65 As with the initial sample size calculation, this assumes an intraclass correlation coefficient of 0.05 and a coefficient of variation for cluster size of 0.9. The sample size was robust to changes in the intraclass correlation coefficient—a value of 0.1 would still give over 90% power. Data analysis A statistical analysis plan was written, finalised, and agreed before data were available. We compared cluster and participant level characteristics by group allocation, using either means (standard deviations) or medians (interquartile ranges) for continuous variables, and counts and percentages for nominal variables. The primary outcome, occupational sitting time (average min/workday) at 12 months, was analysed on a complete case basis using a generalised estimating equation model with an exchangeable correlation structure, accounting for clustering. The primary analysis was based on participants providing data for at least one valid workday from the activPAL device. The model included a binary indicator for randomisation group and was adjusted for baseline sitting time, cluster size (≤4 or >4 participants), and average activPAL wear time during work hours across baseline and 12 months. We carried out several sensitivity analyses of the primary outcome and daily sitting time: an intention to treat analysis with missing data imputed using multiple imputation 66 ; the impact of variation in occupational or waking wear time, with time spent in each activity normalised to an eight hour workday and a 16 hour waking day as used in a previous similar study 29 ; and the effect of the number of valid activPAL working and overall days chosen for the primary analysis and how changing this affected the results. For the latter we assessed two scenarios: two or more working and overall days, and three or more working and overall days.
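For illustration, the primary analysis described above could be specified along the following lines. This is a hedged sketch using Python's statsmodels rather than the Stata code used for the trial, and the file and column names are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical complete case dataset: one row per participant with at
# least one valid workday of activPAL data at baseline and 12 months.
df = pd.read_csv("smart_work_12m.csv")

# Occupational sitting (min/workday) at 12 months regressed on group,
# adjusted for baseline sitting, the cluster size stratum (<=4 or >4),
# and mean activPAL work-hours wear time, with an exchangeable working
# correlation to account for clustering by office.
model = smf.gee(
    "sit_12m ~ group + sit_baseline + C(cluster_size_gt4) + wear_time",
    groups="office_cluster",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())  # coefficient on group = adjusted difference
```

For the binary secondary outcomes described below, the same structure applies with a binomial family and logit link in place of the Gaussian family.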
To assess whether the intervention effect differed between subgroups we conducted several subgroup analyses: hospital site (Leicester General Hospital, Leicester Royal Infirmary, Glenfield Hospital), worker status (part time, full time), sex (men, women), age (below or above the median), and body mass index (normal v overweight or obese (≥25 kg/m2)). We included interaction terms in the generalised estimating equation models to assess differences between subgroups. Secondary outcomes were also analysed using generalised estimating equation models with an exchangeable correlation structure (an independent structure was used where models did not converge). For binary outcomes we used a logit link with a binomial distribution for the outcome, and for continuous outcomes we used an identity link with a normal distribution. All primary and secondary analyses for the accelerometer (activPAL and ActiGraph) outcomes were adjusted for baseline value, office size, and average activPAL wear time during work hours (for occupational activPAL outcomes) or average activPAL waking wear hours (for daily activPAL outcomes) across baseline and outcome time. We repeated the analysis at each time point (3, 6, and 12 months). Adjustment for multiple testing of secondary outcomes was not performed. We interpreted outcomes according to the overall pattern of results; individual results should therefore be interpreted with caution. Statistical significance was set at 5%. All analyses were conducted using Stata version 14. Patient and public involvement The public were involved in this study in several ways. Office workers within the target organisation contributed to the intervention strategies and content before they were developed. Lay members from within and outside the target organisation (NHS trust) sat on the trial steering committee. These members advised on practical issues such as logistics, space, and desk mechanics. Participants were invited to a presentation of results (two sessions offered at each hospital site), and an infographic of the results was designed and circulated to participants. Results Figure 1 displays the flow of participants through the study. Between November 2015 and June 2016, 146 participants across 37 office clusters were recruited, with 19 office clusters (77 participants) randomised to the intervention arm (one participant subsequently withdrew before intervention implementation, leaving 76 participants) and 18 clusters to the control arm (69 participants). Of these, 121 (83%), 115 (79%), and 109 (75%) participants and 100%, 100%, and 95% of clusters were seen at the 3, 6, and 12 month follow-up, respectively. More participants in the control group than in the intervention group withdrew from the study (control 33% v intervention 17%). Fig 1 CONSORT flow diagram Baseline characteristics Table 1 presents the overall characteristics of the office clusters and the individual participants within these clusters. Office clusters ranged in size from one to 16 participants, with a mean of four participants in each cluster. The mean age of participants was 41.2 (SD 11.1) years, 78% reported being of white European ethnicity, and the majority were women (80%). Most of the participants (74%) worked full time and were spread across different NHS salary bands.
On average, participants spent 72.6% (5.94 (SD 1.47) h/workday) of their work hours sitting, of which 47.1% (2.80 (1.60) h/workday) was accrued in prolonged bouts; 20.0% (1.56 (1.07) h/workday) of work hours was spent standing and 7.5% (0.61 (0.29) h/workday) stepping. Across daily waking hours, participants spent 63.7% (9.71 (1.55) h/day) of their day sitting, of which 51.4% (4.99 (1.75) h/day) was prolonged sitting, 25.3% (3.84 (1.34) h/day) standing, and 10.8% (1.64 (0.52) h/day) stepping. There were no significant differences between those with available primary outcome data at both baseline and 12 months and those without for the characteristics reported in table 1 , except for salary banding (those on a higher salary were less likely to have available data). Characteristics of intervention and control participants were similar, except for ethnicity and sex. The intervention group consisted of more South Asian (21% v 13%) and more male (27% v 13%) participants than the control group. Of the participants randomised to the intervention, 40% (n=30) chose a full electric desk and 60% (n=46) chose the desk platform. Table 1 Baseline characteristics at both cluster and individual levels according to randomised groups: usual practice (control) and SMArT Work intervention. Values are means (standard deviations) unless stated otherwise Change in occupational sitting time at 12 months (primary outcome) Table 2 reports the mean change in occupational sitting time by randomisation group and the difference in change between groups at 12 month follow-up. In the complete case analysis, a statistically significant difference between groups was found in occupational sitting time (adjusted difference −83.28 min/workday, 95% confidence interval −116.57 to −49.98 min/workday) in favour of the intervention group. Similar results were seen in the intention to treat analysis ( table 2 ). Table 2 Changes in occupational sitting time at 12 month follow-up between participants randomised to usual practice (control) or SMArT Work intervention Sensitivity analyses ( table 2 ) showed similar results to the primary analysis for occupational sitting time, with statistically significant differences between groups at 12 months when the various levels of activPAL data were used (ie, including only those with at least two or at least three valid days). Although a significant difference between groups for occupational sitting was found when standardising the data to an eight hour workday, the difference was smaller (−41.29 min/8 h workday). Figure 2 shows the results of the subgroup analyses. No statistically significant interaction effects were found for change in occupational sitting time. Fig 2 Forest plot of intervention effect at 12 months on occupational and daily sitting time by subgroup. *Adjusted for cluster effect, baseline occupational sitting, baseline overall sitting, stratification category (office size ≤4 or >4 participants), and average activPAL wear time during work hours/average activPAL waking wear time across baseline and 12 months. †Interaction between intervention group and subgroups Secondary outcomes activPAL and ActiGraph outcomes Table 3 presents the secondary outcomes collected by the activPAL and ActiGraph.
Differences between groups were found in occupational sitting time at three months (−50.62 min/workday) and six months (−64.40 min/workday) and in daily sitting time at six months (−59.32 min/day) and 12 months (−82.39 min/day) in favour of the intervention group compared with control, indicating that the intervention group spent significantly less time sitting than the control group. Similar results for daily sitting time at 12 months were seen in the intention to treat analysis, when standardising the daily sitting time data to a 16 hour waking day, and when the various levels of activPAL data were used (ie, including only those with at least two or at least three valid days) (data not shown). Figure 2 displays the results of the subgroup analyses for daily sitting time at 12 months. For most subgroups there were no interaction effects. However, an interaction effect was found for age. At 12 months, the subgroup interaction effect (P=0.02) revealed that the intervention was more effective for participants above the median age (42.5 years), who reduced their daily sitting time by an additional 45.11 min/day compared with those below the median age. Table 3 Changes in secondary outcome sitting and physical activity variables at follow-up between participants randomised to usual practice (control) or to the SMArT Work intervention* Differences were found between groups in prolonged sitting time at six months (occupational: −35.31 min/workday, daily: −25.38 min/day) and 12 months (occupational: −44.93 min/workday, daily: −58.34 min/day) in favour of the intervention group compared with control, but not at three months. The intervention group stood more than the control group at all time points, with group differences in occupational standing of 48.91, 72.62, and 66.00 min/workday at 3, 6, and 12 months, respectively, and in daily standing of 36.95, 55.96, and 62.81 min/day, respectively. No differences were found in occupational or daily stepping time, or in moderate to vigorous physical activity as measured by the ActiGraph, at any time point (P>0.05). Musculoskeletal problems At baseline, a high proportion of participants in both groups reported experiencing musculoskeletal problems in the previous 12 months (see supplementary table 2). No differences were found between groups at the 12 month follow-up in the proportion of participants reporting musculoskeletal problems (neck, lower back, upper extremity, lower extremity, any part) or in the pain experienced from musculoskeletal problems in the previous 12 months (P>0.05). A difference between groups was, however, found in the proportion of participants reporting that lower back problems prevented them from carrying out normal activities, with the odds of this being lower in the intervention group. Differences between groups were also found for musculoskeletal problems reported over the past seven days for the neck and upper extremity areas at the 12 month follow-up and for any body part at six months, with the odds of reporting problems being lower in the intervention group. Work related outcomes Work engagement —Differences (in favour of the intervention group versus control) at six and 12 months were observed for the vigour subscale and for overall work engagement (see supplementary table 3). Differences at 12 months (in favour of the intervention group) were seen for work dedication and work absorption. No differences were found at three months.
Job satisfaction and performance and occupational fatigue —Differences at six and 12 months (in favour of the intervention group) were observed in job performance and in recovery from occupational fatigue, but not in job satisfaction. No differences were found at three months. Sickness presenteeism —Differences were observed between groups, in favour of the intervention group compared with control, in the time management and mental-interpersonal demands scales at 12 months and in overall sickness presenteeism at three months. Sickness absence —No differences between groups were seen for either self reported or organisation reported (see supplementary table 4) sickness absence from work (P>0.05). Cognitive function outcomes Supplementary table 5 displays the results for the cognitive function tests. There were differences between groups in reaction times at 3, 6, and 12 months for the congruent level of the Stroop Colour-Word Test and in the proportion of correct hits at the incongruent level, all in favour of the intervention group compared with control. Mood, mental health, and quality of life For most mood affect variables no differences were observed between groups (see supplementary table 6). However, differences were found for anxiety today at six and 12 months and for dysphoria today at six months, in favour of the intervention compared with control. Between group differences were found for anxiety generally at three months, hostility generally at 12 months, and dysphoria generally at three months, in favour of the control group. Quality of life was assessed in four individual domains and overall (see supplementary table 7). Between group differences were found in two domains of quality of life and in the overall score at six and 12 months, all in favour of the intervention group compared with control. Participants in the intervention group compared with the control group reported an improvement in their psychological, environmental, and overall quality of life. Discussion This cluster randomised controlled trial evaluated the effectiveness of a multicomponent intervention, involving a height adjustable workstation, for reducing occupational sitting time in a sample of office workers based within the University Hospitals of Leicester NHS Trust. The SMArT Work intervention resulted in reductions in occupational and daily sitting time over the short (three months), medium (six months), and longer term (12 months). The reduction in sitting was largely replaced by time spent standing, as stepping time remained unchanged. Although a reduction in daily sitting time was observed, this was of a similar magnitude to the reduction seen during work time, suggesting that the changes in daily sitting time were likely due to changes made at work. Time spent in prolonged sitting was also reduced in the intervention group. Results were also suggestive of improvements in the assessed secondary outcomes, including job performance, work engagement, occupational fatigue, sickness presenteeism, and psychological health, although these tended to emerge at the later follow-up time points. No notable changes were found in job satisfaction, cognitive function, or sickness absence. Comparison with other studies The majority of previous workplace interventions employing height adjustable workstations have been evaluated over the short term (eg, three months) using small samples, and observed sitting reductions of between 30 minutes and two hours daily, 25 which is comparable with the present study.
Other recent larger studies evaluating similar multicomponent interventions have also reported similar behaviour changes. 29 67 However, although these studies observed reductions at their concluding assessment time point, these tended to be smaller than those observed at the shorter term follow-up. In the present study, the reductions in sitting at three months were not only maintained at both subsequent follow-up time points (six and 12 months) but were largest at the final follow-up assessment at 12 months. We included a six month follow-up assessment where participants received feedback on their health and behaviour, and one-to-one coaching was continued throughout the whole study period. This may indicate that the ongoing coaching sessions or feedback on health and behaviour, or both, were able to assist the participants in maintaining their initial behaviour change. The value of regular contact was highlighted as a motivating factor in the process evaluation focus groups. A previous study targeting sitting also highlighted that regular assessments motivated participants. 68 Consistent with previous research, 26 29 67 sitting was replaced with standing rather than ambulation, despite emphasis on both behaviours. Participants may have chosen to reduce their sitting time by performing work tasks standing at their desk rather than reducing and breaking up their sitting through activities such as using a toilet, printer, or water cooler further away, or holding walking meetings, all of which were promoted in the intervention. More qualitative research may be needed to elicit how best to encourage changes in movement while at work, in terms of the ability to perform work tasks more actively and to incorporate more movement during work breaks. Participants in the intervention group on average reduced their sitting time by more than an hour daily (95% confidence interval of 40 to >85 min/day reduction in sitting), and a recent meta-analysis examining the strength and shape of the dose-response relation between sedentary behaviour and health outcomes suggests that this may have meaningful health benefits. 69 For example, the increased risk of all cause mortality was strongest for those sitting for more than 8 h/day (relative risk of 1.04 for each additional hour after eight hours), and the increased risk of cardiovascular mortality for those sitting for more than 6 h/day (relative risk of 1.04 for each additional hour after six hours). The average daily sitting time of the intervention group at baseline was 9.7 h/day. Furthermore, the association between sitting time and type 2 diabetes appeared to be linear, suggesting that any reduction may be beneficial. However, the acute experimental evidence on the metabolic health benefits of replacing sitting time with standing time is equivocal: one study showed that breaking up sitting with standing has acute beneficial effects on postprandial metabolic health in those with impaired glucose regulation, 15 while other studies reported a modest 16 70 or no effect 19 71 in healthy populations. Future studies are therefore needed to assess the benefit of displacing sitting with standing on health outcomes over the longer term. Nevertheless, an increase in standing seemed to have a positive impact on many work related outcomes such as job performance, work engagement, occupational fatigue, sickness presenteeism, and some musculoskeletal problems.
In previous similar shorter term (eg, three months) interventions, self reported or objectively measured work performance was not affected either negatively or positively. 25 26 Our findings suggest that this type of intervention may take time to positively affect work performance, as these differences were observed in the present study at six and 12 months. While levels of sickness absence across the UK have remained relatively stable (6.6 days per person in 2014 and 6.9 days per person in 2015), presenteeism is a growing problem for employers. Consequently, presenteeism is now more costly than sickness absenteeism (£21.2bn v £10.6bn per annum). 72 Work engagement is an important indicator of productivity, turnover, and wellbeing of the workforce, 46 73 and occupational fatigue has been associated with unintentional injuries at work 74 and health problems, 75 76 while both have been linked to sickness absenteeism. 75 77 78 Despite positive changes in work engagement (all subscales at 12 months and overall work engagement) and occupational fatigue, we did not observe any differences in self reported or organisational records of sickness absence. Given that the positive changes observed in other work related outcomes occurred later in the randomised controlled trial (six and 12 months), any impact on sickness absenteeism may emerge in future months. Nevertheless, positive changes in sickness presenteeism were found for the domains of time management and mental-interpersonal demands, when measured using the Work Limitations Questionnaire. Recent data show that half a million employees experienced work related musculoskeletal disorders in Great Britain in 2016-17, which resulted in 8.9 million working days lost. 79 We also observed a high prevalence of musculoskeletal conditions in this office based sample, and although we did not find any differences reported over the whole 12 month randomised controlled trial period, the prevalence of neck and upper extremity problems experienced in the past seven days at 12 months, and the proportion of lower back problems interfering with normal activities, were lower in the intervention group. Results from previous research with similar interventions have been mixed in terms of the benefits for musculoskeletal problems. One study reported a non-significant increase in musculoskeletal conditions, 8 several studies reported no differences, 26 80 81 whereas other studies have reported slight decreases in lower back pain, 82 upper back pain, 65 and neck pain. 65 One recent review concluded that sit-stand workstations may help reduce low back pain in workers. 83 A small body of epidemiological evidence suggests that lower levels of sedentary behaviour are associated with higher quality of life scores. 84 85 Our results corroborate these findings, with increases in quality of life reported by the intervention participants. Taking these findings together, this type of intervention (providing an environmental change combined with additional strategies such as education, self monitoring, and brief coaching) may benefit employers in terms of having more engaged and higher performing staff as well as cost savings from reduced sickness presenteeism, musculoskeletal problems, and potentially sickness absenteeism. A separate paper will formally assess the cost effectiveness of the intervention.
Strengths and limitations of this study The strengths of this study include the robust randomised controlled design, with randomisation at the cluster level, the fully powered sample size, the short, medium, and longer term follow-up assessments, and the device based measurement of the primary outcome. This study therefore tackles many of the limitations of previous evaluations of workplace interventions focused on reducing sitting time. 25 Furthermore, we performed several sensitivity analyses to check the robustness of our results. Although the study had a 27% rate of loss to follow-up/non-compliance with primary outcome assessment by 12 months, our sample size was sufficiently large to account for this drop-out. This drop-out/non-compliance rate is similar to that seen at 12 months in the Stand Up Victoria study. 29 The conduct of the present study in an NHS trust is both a strength and a limitation. The NHS is the fifth largest employer globally, with around 1.3 million staff. Clerical and administrative staff make up about a third of NHS employees, so this intervention has the potential to reach a large number of people. Conversely, as the study was conducted in a single organisation, this may limit the generalisability of the intervention and findings to other types of organisations beyond the NHS, particularly those with large open plan offices, which were rare within the University Hospitals of Leicester NHS Trust. Although we used an objective assessment of sitting time and physical activity and removed the first day of data collection from the activPAL, it is possible that reactivity (change in behaviour from an awareness of being monitored) may have biased the results. Many of our work related outcomes were assessed by self report and may have been subject to reporting bias. As SMArT Work was a complex intervention it had the potential to exert effects at many levels; we therefore included many outcomes. However, this study was not powered to detect differences in all of the measured outcomes, and adjustment for multiple comparisons was not performed. The emphasis should therefore be on the pattern of the secondary outcome results. Conclusions The SMArT Work multicomponent intervention was able to reduce occupational and daily sitting time in the short, medium, and longer term in office workers within the University Hospitals of Leicester NHS Trust. The intervention also appeared to have a positive impact on musculoskeletal conditions and on many work related outcomes such as job performance, work engagement, occupational fatigue, and sickness presenteeism, as well as being beneficial for psychological outcomes such as daily anxiety and quality of life. Areas for future research include the replication of these findings in other organisations, focusing interventions on standing and moving more throughout the whole day (ie, taking a whole day approach to reductions in sitting), eliciting how best to promote movement rather than just standing, and longer term follow-up to assess maintenance of behaviour change and to allow sufficient time to affect those outcomes that take longer to influence, such as absenteeism.
What is already known on this topic High levels of sedentary behaviour (sitting) have been associated with an increased risk of morbidity and mortality and have been shown to be detrimental to work related outcomes such as engagement and presenteeism Office workers are one of the most sedentary populations, spending 70-85% of time at work sitting Interventions to reduce sitting in the workplace have received increasing attention in recent years but studies to evaluate these have been deemed low quality What this study adds The SMArT Work multicomponent intervention, involving a height adjustable workstation, successfully reduced occupational sitting time over the short, medium, and longer term in a sample of office workers Positive changes were observed in work related and psychological health Acknowledgments We thank the participants for taking part, Serena Abel for her assistance with data collection and data entry, Alex Rowlands for his role in processing the ActiGraph data, Stephan Bandelow for his advice on measuring cognitive function, Mike Bonar for his design skills, and the independent members of the Trial Steering Committee for their advice and oversight of the study. Footnotes Contributors: CLE, SJHB, MJD, DWD, DWE, LJG, TY, and FM obtained funding for the research. All authors contributed to the design of the study. GW performed the statistical analysis, supervised by LJG. BJ was involved in data collection at the start of the project and SEO’C was involved in data collection and study coordination throughout. CLE and FM supervised BJ and SEO’C. CLE processed the activPAL data. The first draft of this manuscript was produced by CLE, and all authors have reviewed, edited, and approved the final version. CLE and FM are the guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding: The trial was sponsored by Loughborough University. This project is funded by the Department of Health Policy Research Programme (project No PR-R5-0213-25004). The research was supported by the National Institute for Health Research (NIHR) Leicester Biomedical Research Centre, which is a partnership between University Hospitals of Leicester NHS Trust, Loughborough University, and the University of Leicester, the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care–East Midlands (NIHR CLAHRC–EM), and the Leicester Clinical Trials Unit. The views expressed are those of the authors and not necessarily those of the NHS, NIHR, or Department of Health. DWD is supported by an NHMRC senior research fellowship (NHMRC 1078360) and the Victorian Government’s operational infrastructure support programme. The sponsor had no role in the design, undertaking, and reporting of the study. Competing interests: All authors have completed the ICMJE uniform disclosure form and declare: no support from any organisation for the submitted work (other than the Department of Health noted in the acknowledgments section); no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. DWD reports grants from the National Health and Medical Research Council (Australia) and grants from the Victorian Health Promotion Foundation (VicHealth) during the conduct of the study.
MJD reports personal fees from Novo Nordisk, Sanofi-Aventis, Lilly, Merck Sharp and Dohme, Boehringer Ingelheim, AstraZeneca, Janssen, Servier, Mitsubishi Tanabe Pharma, and Takeda Pharmaceuticals International, and grants from Novo Nordisk, Sanofi-Aventis, Lilly, Boehringer Ingelheim, and Janssen, outside the submitted work. Ethical approval: This study was approved by Loughborough University, and Research and Innovation approval was obtained from the University Hospitals of Leicester NHS Trust (EDGE ID 34571). Data sharing: Requests for access to data from the study should be addressed to the corresponding author at ce95@le.ac.uk. The study protocol has been published. All proposals requesting data access will need to specify how the data will be used, and all proposals will need approval of the trial co-investigator team before data release. Transparency: The guarantors (CLE, FM) affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained. This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt, and build upon this work, for commercial use, provided the original work is properly cited.
Sit-stand workstations that allow employees to stand, as well as sit, while working on a computer reduce daily sitting time and appear to have a positive impact on job performance and psychological health, finds a trial published by The BMJ today. The results show that employees who used the workstations for 12 months, on average, reduced their sitting time by more than an hour a day, with potentially meaningful benefits. High levels of sedentary behaviour (sitting) have been associated with an increased risk of chronic diseases (type 2 diabetes, heart disease, and some cancers) as well as death, and have been shown to be detrimental to work related outcomes such as feelings of engagement and presenteeism (going to work despite illness). Office workers are one of the most sedentary populations, spending 70-85% of time at work sitting, but studies looking at ways to reduce sitting in the workplace have been deemed low quality. So a team of researchers based in the UK, with collaborators in Australia, set out to evaluate the impact of Stand More AT (SMArT) Work, an intervention designed to reduce sitting time at work. The trial involved 146 office workers based at the University Hospitals of Leicester NHS Trust, of whom 77 were randomly assigned to the intervention group and 69 to the control group over a 12 month period. The average age of participants was 41 years, 78% reported being of white European ethnicity, and the majority (80%) were women. The intervention group were given a height adjustable workstation, a brief seminar with a supporting leaflet, and workstation instructions with sitting and standing targets. They also received feedback on sitting and physical activity, an action planning and goal setting booklet, a self monitoring and prompt tool, and coaching sessions. The control group carried on working as usual. Workers' sitting time was measured using a device worn on the thigh at the start of the study (baseline) and at 3, 6, and 12 months. Daily physical activity levels and questions about work (eg, job performance, engagement) and health (eg, mood, quality of life) were also recorded. At the start of the study, overall sitting time was 9.7 hours per day. The results show that sitting time was lower by 50.62 minutes per day at 3 months, 64.40 minutes per day at 6 months, and 82.39 minutes per day at 12 months in the intervention group compared with the control group. Prolonged sitting time was also reduced in the intervention group. The reduction in sitting was largely replaced by time spent standing rather than moving, as stepping time and physical activity remained unchanged. The results also suggest improvements in job performance, work engagement, occupational fatigue, presenteeism, daily anxiety, and quality of life, but no notable changes were found for job satisfaction, cognitive function, and sickness absence. The authors say this was a well-designed trial and their results remained largely unchanged after further analyses. But they acknowledge that their findings may not apply to other organisations, and that self-reporting of work related outcomes may have affected the results. Nevertheless, they say the SMArT Work intervention successfully reduced sitting time over the short, medium, and longer term, and positive changes were observed in work related and psychological health. And they suggest future research should assess the longer term health benefits of displacing sitting with standing and how best to promote movement rather than just standing while at work.
In a linked editorial, Dr. Cindy Gray at the University of Glasgow says this is an important study that demonstrates lasting reductions in sedentary behaviour and other work-related benefits. But she questions the potential health gains of simply replacing sitting with standing, noting that the intervention did not increase potentially more beneficial physical activity. She also questions SMArT Work's transferability and suitability for other types of employees, including shift workers, as well as its cost-effectiveness, which she says should be addressed in future research.
10.1136/bmj.k3870
Biology
Parasites from domestic pets affecting wildlife world wide
Nicholas J. Clark et al. Parasite spread at the domestic animal - wildlife interface: anthropogenic habitat use, phylogeny and body mass drive risk of cat and dog flea (Ctenocephalides spp.) infestation in wild mammals, Parasites & Vectors (2018). DOI: 10.1186/s13071-017-2564-z
http://dx.doi.org/10.1186/s13071-017-2564-z
https://phys.org/news/2018-01-parasites-domestic-pets-affecting-wildlife.html
Abstract Background Spillover of parasites at the domestic animal - wildlife interface is a pervasive threat to animal health. Cat and dog fleas ( Ctenocephalides felis and C. canis ) are among the world’s most invasive and economically important ectoparasites. Although both species are presumed to infest a diversity of host species across the globe, knowledge on their distributions in wildlife is poor. We built a global dataset of wild mammal host associations for cat and dog fleas, and used Bayesian hierarchical models to identify traits that predict wildlife infestation probability. We complemented this by calculating functional-phylogenetic host specificity to assess whether fleas are restricted to hosts with similar evolutionary histories, diet or habitat niches. Results Over 130 wildlife species have been found to harbour cat fleas, representing nearly 20% of all mammal species sampled for fleas. Phylogenetic models indicate cat fleas are capable of infesting a broad diversity of wild mammal species through ecological fitting. Those that use anthropogenic habitats are at highest risk. Dog fleas, by contrast, have been recorded in 31 mammal species that are primarily restricted to certain phylogenetic clades, including canids, felids and murids. Both flea species are commonly reported infesting mammals that are feral (free-roaming cats and dogs) or introduced (red foxes, black rats and brown rats), suggesting the breakdown of barriers between wildlife and invasive reservoir species will increase spillover at the domestic animal - wildlife interface. Conclusions Our empirical evidence shows that cat fleas are incredibly host-generalist, likely exhibiting a host range that is among the broadest of all ectoparasites. Reducing wild species’ contact rates with domestic animals across natural and anthropogenic habitats, together with mitigating impacts of invasive reservoir hosts, will be crucial for reducing invasive flea infestations in wild mammals. Background Animals closely associated with humans can act as reservoir hosts that spread parasites to wildlife [ 1 , 2 , 3 ]. Spillover of parasites (i.e. the transmission of a parasite from one host species to another) between domestic and wild animals is an increasing threat to animal health, and understanding factors that drive this process is crucial [ 4 , 5 , 6 ]. Yet while conversion of natural habitat into production zones, habitat fragmentation and global urbanisation increase contact rates between domestic and wild animals [ 7 , 8 ], patterns of parasite sharing at the domestic animal - wildlife interface are poorly resolved. Cat fleas ( Ctenocephalides felis ) and related dog fleas ( C. canis ) are blood-feeding ectoparasites causing enormous grievances for pets worldwide [ 9 , 10 , 11 , 12 ]. Flea control relies on mass use of preventative drugs, equating to hundreds of dollars spent by owners each year [ 13 ]. In addition to pets, C. felis and C. canis are presumed to infest a diversity of wild species. Control of parasite spread and infestation-related morbidity are therefore multifaceted problems [ 14 , 15 , 16 , 17 ]. The potential for urban-wildlife parasite exchange represents a considerable One Health threat, especially since fleas can transmit harmful bacteria (some of them being zoonotic [ 18 , 19 ]). Despite the pervasive risk for flea spillover between domestic and wild animals, there is a dearth of knowledge on C. felis and C. canis distributions among wildlife [ 10 , 20 , 21 ]. 
Predicting parasite spread requires an understanding of wildlife characteristics that enable host shifting [ 22 ]. The human-induced range expansion of domestic animals and other non-native species that act as viable hosts for fleas (including foxes, rabbits and rats [ 23 , 24 , 25 ]) has led to the encroachment of potential reservoir host species into almost all terrestrial environments [ 26 , 27 , 28 ]. Close proximity between natural and anthropogenic habitats might increase exposure to feral and domestic animals [ 29 , 30 , 31 ] and could be a key predictor of C. felis and C. canis infestation in wildlife. However, other host attributes, such as body mass, diet and phylogenetic ancestry, can be informative for predicting whether hosts share parasite species [ 3 , 32 ]. These attributes may facilitate flea exchange, as factors regulating habitat use are important drivers of ectoparasite infestation [ 33 , 34 , 35 ]. How historical and ecological species traits facilitate or inhibit flea infestation is not known. Moreover, information on infestation rates in wildlife species is scattered throughout the literature. We use a systematic literature search and web scraping tools to build a global database of C. felis and C. canis infestations in wildlife species. Using Bayesian hierarchical models, we incorporate mammalian trait data to ask if extrinsic (habitat use, diet breadth) and intrinsic (phylogenetic ancestry, body mass) host attributes act as drivers of flea infestation risk. We use host specificity analyses and null models to assess whether fleas infest species that are more similar in their phylogenetic ancestry, habitat use or diet than expected by chance. If habitat use is a key driver of infestation risk, we expect that use of anthropogenic habitats will increase species’ infestation probability and that both flea species will infest hosts that exhibit more similar habitats than expected by chance. Results Introduced mammals as reservoir hosts for fleas at the domestic animal - wildlife interface Both flea species infest wildlife on all continents apart from Antarctica (Fig. 1 ). In total, 138 (20%) of 685 sampled wild mammal species harboured cat fleas ( Ctenocephalides felis ) and 31 (4%) harboured dog fleas ( C. canis ). Species most frequently reported to be associated with C. felis were all invasive mammals, including feral cats (26 of 446 total flea-host-location observations, spanning all sampled continents), feral dogs (21 observations), red foxes ( Vulpes vulpes ; 19 observations), black rats ( Rattus rattus species complex; 16 observations), brown rats ( Rattus norvegicus ; 14 observations) and European rabbits ( Oryctolagus cuniculus ; 9 observations; Table 1 ). Likewise, C. canis was commonly reported infesting feral mammals, including red foxes (22 observations), feral dogs (12 observations), feral cats (8 observations), and black and brown rats (5 observations each; Table 1 ). From studies that included prevalence information, mean C. felis prevalence was highest in feral cats (mean 32.3%). Ctenocephalides canis prevalence was highest in red foxes (mean 3.5%; Table 1 ). While these observations may be biased by more intensive sampling of invasive mammals, especially if there is greater incentive to publish on invasive species, they suggest invasive mammals act as suitable reservoir hosts for cat and dog fleas. Fig. 1 Geographical distributions of observed cat flea Ctenocephalides felis ( a ) and dog flea C.
canis ( b ) infestation reports in free-roaming mammals around the globe. Sizes of points represent the number of mammal species sampled in each record. Colours correspond to the total number of feral host species observed to carry fleas at each location (blue = 0, purple = 1, magenta = 2, pink = 3, red = 4) Table 1 Sampling frequencies and prevalences of cat and dog fleas ( Ctenocephalides felis and C. canis ) in selected invasive host species. Note that prevalence information was not available from all studies included in the database Among native species, C. felis was commonly reported in American opossums (Virginia opossum Didelphis virginiana : 7 observations, and common opossum Didelphis marsupialis : 6 observations), North American gray foxes ( Urocyon cinereoargenteus : 5 observations), and Australian brushtail possums ( Trichosurus vulpecula : 3 observations). For C. canis , commonly reported native species included the Iberian lynx ( Lynx pardinus : 4 observations), North American gray foxes (3 observations) and a variety of other wild carnivores (including the coyote Canis latrans , golden jackal Canis aureus , and common genet Genetta genetta ). Host phylogeny, body mass and anthropogenic habitat use drive parasite infestation risk Host phylogeny explained considerable variation in C. felis infestation probability, accounting for 64.9% of variation (CI: 45.4–74.2%). Ctenocephalides felis infestation probability decreased with increasing host body mass (accounting for 41.2% of the remaining explained variation; CI: 1.2–79.7%), with a decrease of 1 kg in mean body mass equating to an increase of 0.6% in infestation probability (CI: 0.2–2.8%). This could either mean that large body size prevents infestation or, more likely, that larger mammals are less likely to overlap with human habitats. As expected, anthropogenic habitat use was a strong positive predictor of C. felis infestation (accounting for 22.3% of remaining explained variation; CI: 1.6–54.9%), with odds of infestation for anthropogenic habitat-using species increasing by 256% compared to species that do not use anthropogenic habitats (CI: 125.9–687.8%). Credible intervals for all other coefficients included zero (Additional file 1 : Figure S1). For C. canis , infestation probability was linked to host phylogeny (19.2% of explained variation; CI: 6.4–33.5%) when accounting for a significant positive effect of total citations associated with the term ‘ectoparasite’ (6.3% of explained variation; CI: 0.4–28.5%). Similarly to C. felis , infestation probability for C. canis increased with decreasing host body mass (accounting for 77.4% of remaining explained variation; CI: 14.2–96.7%). Infestation probability for C. canis is predicted to increase by 2.7% (CI: 0.2–11.3%) with a decrease of 1 kg in host body mass. The use of anthropogenic habitats was weakly positive but non-significant for predicting C. canis infestation (regression coefficient CI: −0.34 to 2.87), suggesting that more data are needed to elucidate this possible pattern. Credible intervals for all other regression coefficients included zero (Additional file 1 : Figure S1). Entering species’ ecological traits (from all sampled mammalian hosts for which we had phylogenetic data; n = 639) into equations from fitted regressions (using coefficient posterior modes and phylogenetic variable intercepts) revealed two key patterns. First, although C.
felis infestation probability shows a phylogenetic signal (related species showing similar infestation risk), this parasite is predicted to infest a wide diversity of mammals covering the majority of clades along the sampled host phylogeny (Fig. 2 ). According to the model, species with particularly high risk of C. felis infestation include many canids, felids and murids, in addition to host species such as possums (Phalangeridae and Didelphidae), skunks (Mephitidae), shrews (Soricidae), weasels (Mustelidae) and old world porcupines (Hystricidae). Secondly, C. canis is predicted to infest a much lower diversity of species, with susceptible hosts primarily including wild canids, felids, murids and mustelids (Fig. 3 ). Fig. 2 Cat flea ( Ctenocephalides felis ) infestation probability in wild mammals, mapped across a phylogeny of 639 sampled mammal species. Colours represent ancestral state mapping of predicted infestation probability, calculated by entering species’ attributes into fitted logistic regression equations (using posterior modes for regression coefficients and variable intercepts according to phylogenetic ancestry). Cooler blues indicate low infestation probability; warmer reds show high infestation probability. Key phylogenetic host groups (i.e. clades in which multiple species show above 0.7 infestation probability) are indicated with outline figures (clockwise from top: porcupines (Hystricidae); mice and rats (Muridae); possums and opossums (Phalangeridae, Didelphidae); shrews (Soricidae); hedgehogs (Erinaceidae); felines (Felidae); foxes (genus Vulpes ; Canidae); dogs (genus Canis ; Canidae); skunks (Mephitidae); and weasels (Mustelidae)). Images were sourced under a Creative Commons License Fig. 3 Dog flea ( Ctenocephalides canis ) infestation probability in wild mammals, mapped across a phylogeny of 639 sampled mammal species. Colours represent ancestral state mapping of the fitted infestation probability, calculated by entering species’ attributes into fitted logistic regression equations (using posterior modes for regression coefficients and variable intercepts according to phylogenetic ancestry). Cooler blues indicate low infestation probability; warmer reds show high infestation probability. Key phylogenetic host groups (i.e. clades in which multiple species show above 0.7 infestation probability) are indicated with outline figures (clockwise from top: rats (Muridae); felines (Felidae); foxes (genus Vulpes ; Canidae); dogs (genus Canis ; Canidae); and weasels (Mustelidae)). Images were sourced under a Creative Commons License Dog fleas infest phylogenetically clustered mammalian host species For C. canis , host specificity intervals became significantly clustered as phylogenetic weight increased ( a values approaching 1; Fig. 4 ), indicating infested hosts were more closely related than expected by chance. As ecological niche weight increased ( a values approaching 0), intervals overlapped zero, suggesting C. canis hosts did not exhibit more similar habitat or diet niches than expected (Fig. 4 ). For C. felis , in contrast, host specificity intervals included zero for all a weighting values, suggesting infested hosts were neither more closely related to each other nor more similar in their habitat or diet niches than expected by chance (Fig. 4 ). Fig. 4 Differentials between observed and expected functional-phylogenetic host specificity (STD*) for dog fleas ( Ctenocephalides canis ; left panel) and cat fleas ( C.
felis ; right panel) at varying a weights. Weighting values approaching 0 give more weight to host ecological distance, while values approaching 1 give higher weight to host phylogenetic distance. Negative differentials indicate infested hosts are more similar than expected by chance; positive values indicate infested hosts are more dissimilar than expected. Differentials were generated from 10,000 iterations, using a mammalian supertree [ 72 ] and either randomly sampled host habitat dendrograms (coloured boxes) or host diet niche dendrograms (grey boxes) in each iteration. Boxplots show differential medians (lines within boxes) and 2.5% and 97.5% quantiles (hinges) for individual parasites. Whiskers show minimum and maximum values. Asterisks (*) indicate significant differences from 0 Discussion Management strategies to mitigate parasite spillover require identifying host attributes that increase infestation risk [ 3 , 36 , 37 ]. We find that the use of anthropogenic habitats is a key driver of cat flea infestation risk. As habitat encroachment accelerates [ 8 , 38 ], increased contact between wild mammals and human-associated reservoir hosts is likely to increase spillover of cat fleas to wildlife. While intrinsic host attributes such as phylogenetic ancestry and large body mass may ameliorate risk for some species, our findings suggest a large diversity of species are susceptible to cat flea infestation. In contrast, dog fleas are less widespread and more restricted to hosts with shared evolutionary histories. Future spillover of dog fleas, in turn, is expected to be more strongly confined to a few phylogenetic groups, reducing their overall spread compared with the more host-generalist cat fleas. Contact patterns between potential host species will depend not only on habitat overlap but also on species-level behavioural and population-level demographic attributes [ 39 ]. Understanding within-population infestation dynamics and mitigating impacts of invasive reservoir hosts will therefore be crucial for reducing flea spillover at the domestic-wildlife interface. Although it is often stated that cat and dog fleas are cosmopolitan parasites infesting a diversity of species [ 9 , 10 , 40 ], this is the first study to uncover the magnitude and geographic spread of their wildlife occurrences. In doing so, we provide tangible evidence that invasive species contribute to the spread of the most common parasites of pets in human households. Numerous feral mammal species were identified as important reservoir hosts for both cat and dog fleas. Already considered some of the most damaging alien animal species for global biodiversity, feral cats, foxes and rats are commonly observed to harbour flea infestations [ 17 , 23 , 24 , 41 , 42 ]. Previous authors have speculated on the role of feral hosts as reservoirs, suggesting that within its climatic limits, the cat flea is capable of using virtually any available feral mammalian host to sustain its population [ 15 ]. Feral species thrive at the human-wildlife interface [ 28 , 31 , 43 ], and we show that anthropogenic habitat use influences C. felis infestation risk. Collectively, our results suggest spatial overlap with feral reservoir hosts plays a crucial role in flea spillover to wildlife and will likely magnify spillover that is already driven by encroachment of flea-bearing domestic pets into natural habitats.
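To make the weighting scheme in the host specificity analysis above concrete, the sketch below blends a phylogenetic and an ecological host distance matrix with a weight a, then compares the mean pairwise distance among infested hosts against random draws of the same number of hosts. It is a simplified stand-in, with hypothetical function names and toy data, rather than the STD* metric and dendrogram resampling actually used; in practice both matrices would be normalised to comparable scales.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_pairwise(dist, idx):
    """Mean pairwise distance among a set of host indices."""
    sub = dist[np.ix_(idx, idx)]
    return sub[np.triu_indices(len(idx), k=1)].mean()

def specificity_differentials(phylo_d, eco_d, infested_idx, a, n_null=10000):
    """Observed minus null mean pairwise host distance at weight a
    (a = 1 is purely phylogenetic, a = 0 purely ecological). Negative
    differentials mean infested hosts are more similar than chance."""
    d = a * phylo_d + (1 - a) * eco_d
    observed = mean_pairwise(d, infested_idx)
    n_hosts, k = d.shape[0], len(infested_idx)
    null = np.array([
        mean_pairwise(d, rng.choice(n_hosts, size=k, replace=False))
        for _ in range(n_null)
    ])
    return observed - null

# Toy demonstration with random symmetric distance matrices
n = 30
P = rng.random((n, n)); P = (P + P.T) / 2; np.fill_diagonal(P, 0)
E = rng.random((n, n)); E = (E + E.T) / 2; np.fill_diagonal(E, 0)
diffs = specificity_differentials(P, E, np.arange(5), a=0.5, n_null=1000)
print(np.percentile(diffs, [2.5, 50, 97.5]))
```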
Our study adds to a growing body of empirical and theoretical evidence implicating invasive species as contributors to parasite spread [ 6 , 44 , 45 , 46 ]. Yet in addition to feral mammals, urban-adapted native species may facilitate flea spillover. In the Americas, opossums (family Didelphidae) and raccoons (Procyonidae) are well-recognised as urban reservoirs for heavy cat flea infestations and flea-transmitted pathogens [ 17 , 47 , 48 , 49 ]. Other urban-dwelling species such as European hedgehogs ( Erinaceus europaeus ; family Erinaceidae) have been found carrying cat fleas in Germany and Hungary [ 50 , 51 ] as well as in New Zealand, where hedgehogs are introduced [ 52 ]. While this indicates that an increasing human footprint facilitates the spread of fleas, the effects of urbanisation on parasite emergence are not well understood. Some recent studies suggest parasites that commonly infect urban-adapted wildlife species exhibit increased prevalence in urban or suburban environments, while others find the opposite pattern [ 53 , 54 ]. Fleas infesting domestic dogs, for example, will likely be more abundant in rural housing conditions where pets sleep on natural soils with high humidity, as opposed to urban housing that may be less suitable for nesting fleas [ 55 , 56 ]. Studies that assess flea prevalence and infestation intensity across gradients of land use and domestic animal encroachment are needed to understand the true impacts of urbanisation on flea spillover. Host switching and dispersal are key mechanisms underlying parasite spillover [ 57 , 58 ]. While the geographic origins of cat and dog fleas are unknown [ 59 ], it is likely they spread to new regions following dispersal of humans and their pets [ 11 ]. This would have exposed fleas to a diversity of potential new host species. In our study, a strong signal of phylogeny for predicting host infestation suggests that conserved traits facilitated host switching following initial contact with reservoir hosts. Flea host range expansions may therefore follow a pattern of 'ecological fitting', which postulates that new host associations arise following contact with species that share traits with previous hosts [ 57 , 60 ]. Ecological fitting has been observed in many host-parasite assemblages [ 60 , 61 ]; however, uncovering the particular suite of conserved traits involved can be challenging. Our findings shed some light on the flea-mammal system, suggesting that body size and perhaps adaptability to urban environments are important for driving infestation risk. Broad similarities in habitat use and diet are either unimportant or too coarse to accurately identify patterns. Considering a wider array of traits that may influence flea exposure, such as nesting behaviour or local population density, would be useful to expand our understanding of spillover. Ours is not the first study to suggest that cat fleas are more widespread, both in terms of geography and host breadth, than dog fleas [ 50 , 62 , 63 ]. Many authors have speculated on why this occurs. Proposed hypotheses include a relatively restricted host range or restricted tolerance to extreme temperatures for the dog flea compared to the cat flea [ 15 , 23 ]. While our data do not prove that infested hosts are maintaining flea populations, our findings highlight key differences in patterns of host use between the two flea species. Cat fleas are found on a much wider phylogenetic diversity of wild mammals than dog fleas.
Host species that have been reported to carry dog fleas are restricted to certain phylogenetic clades, supporting the hypothesis that dog fleas show higher host specificity than do cat fleas [ 10 ]. Experimental infestation studies, including co-infestations of C. felis with C. canis , coupled with additional field infestation data would be useful to rigorously test this hypothesis. This study assumes that any mammal species recorded to harbour a flea species has also been searched for cat and dog fleas, a limitation that hinders our power to make predictions about infestation risk. On the flip side, there are likely many more confirmed associations between wild mammal hosts and fleas that our search methods failed to identify. Searches of Web of Science and PubMed could be extended to encompass mammal-flea associations for a broader range of flea species. This would serve to increase our understanding of flea biogeography while giving better resolution of host traits that influence risk of cat and dog flea infestation. We reinforce earlier calls for more detailed record-keeping to help identify informative processes involved in parasite spillover among wild host species [ 64 , 65 ]. While the scope of our study was to collate data on flea-host associations and make inferences on species-level infestation risk, future studies addressing differences in infestation prevalence and intensity at the population level would be informative for broadening our understanding of flea spillover. Conclusions We find that cat fleas are among the most host-generalist of all ectoparasites, a trait that likely contributes to parasite spread at the human-wildlife interface. We suggest that reducing wild species' contact rates with domestic animals across natural and anthropogenic habitats, together with mitigating impacts of invasive reservoir hosts, will be crucial for reducing invasive flea infestations in wild mammals. Crucial to developing management strategies will be differentiating between incidental hosts and those capable of maintaining and spreading fleas throughout the parasite lifecycle. Methods Compiling a global flea host-parasite database We searched PubMed (National Library of Medicine National Institutes of Health, US) and Web of Science (Clarivate Analytics, US) to identify publications that describe cat and/or dog flea infestations in free-roaming wild and domestic species. These databases apply hierarchical search algorithms to cover a broad range of nested terms; for instance, searching 'ruminant' will also search terms nested within ruminant, such as 'goat' and 'cattle' (see Additional file 2 for details of literature accession methods and specific search terms). From identified papers, we recorded host species, presence/absence of C. felis and C. canis , and, if data on individuals sampled were available, number of hosts sampled and number infested with each flea species. Fleas regarded as Ctenocephalides spp. (i.e. only identified to genus level) were recorded as unidentified Ctenocephalides species. Note that few studies distinguished between C. felis subspecies, and so all C. felis records were grouped as a single category. Further flea host-parasite records were gathered from the Global Mammal Parasite Database v2.0 [ 66 ] and the Natural History Museum Database, London, UK (accessed 06/06/17).
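To make the record-collapsing step concrete, the sketch below shows one way the species-level presence/absence variables described above could be assembled in R (the language used for the analyses). The `records` data frame and its column names are hypothetical stand-ins for the literature records, not the authors' actual data or code.

```r
# Hypothetical literature records; each row is one host-flea observation.
records <- data.frame(
  host_species = c("Vulpes vulpes", "Vulpes vulpes", "Rattus rattus"),
  flea_species = c("C_felis", "C_canis", "C_felis"),
  stringsAsFactors = FALSE
)

# Collapse to one row per host: 1 if any record reports the flea, else 0.
hosts <- unique(records$host_species)
flea_present <- function(flea) {
  as.integer(hosts %in% records$host_species[records$flea_species == flea])
}
assoc <- data.frame(
  host_species = hosts,
  C_felis = flea_present("C_felis"),
  C_canis = flea_present("C_canis")
)
assoc
```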
To make inferences about traits that best predict the probability that wildlife are infested with either flea species, we gathered a list of all wild mammal species known to have been sampled for fleas. We included hosts from published flea host-parasite community datasets ([ 67 ] from Palaearctic regions; [ 68 ] from Serbia) and comprehensive flea-host checklists. We also included mammal species that have been recorded to harbour arthropod ectoparasites in the Global Mammal Parasite Database v2.0 [ 66 ]. For all mammal species included, associations with cat and dog fleas were recorded as binary variables (present or absent). To account for possible sampling bias among species, we queried the number of published references for each binomial species name from the Scopus literature database (accessed 08/06/17) using accompanying search terms 'parasite' and 'ectoparasite'. We are aware that our list of host species is incomplete, but we believe our database is sufficiently representative to explore wildlife traits that may influence the likelihood of cat and dog flea infestation. The final database included 446 unique host-parasite-location observations. Mammalian host phylogeny and ecological trait data For all sampled mammal species, we gathered ecological trait data from the International Union for Conservation of Nature (IUCN; accessed 04/05/17), EltonTraits 1.0 [ 69 ], PanTHERIA [ 70 ] and habitat diversity [ 71 ] databases to include attributes likely to distinguish hosts in terms of availability and suitability for flea infestations. Selected traits included: body mass, linked to longevity and adaptation to environments; diet diversity (a Shannon index based on species' proportional use of 10 diet categories represented in EltonTraits); habitat use (binary indicators of whether a species uses each of 18 IUCN habitat categories); cohabitation diversity (a co-occurrence β diversity metric quantifying the target species' degree of habitat and community specialization, where a 'generalist' occurs in a range of habitats that differ in species composition while a 'specialist' uses habitats that contain a consistent collection of other mammal species; [ 71 ]); IUCN threat status; mid-range latitude; and mid-range longitude. To test for differences among specific habitat types in logistic regressions, IUCN habitat variables were used to create binary indicators that reflect whether species use anthropogenic ('introduced vegetation' or 'artificial terrestrial'), forest ('forest' or 'shrubland') and dry bush habitats ('desert', 'savanna' or 'grassland'). Species' phylogenetic relationships were estimated from a recent mammalian supertree [ 72 ]. Phylogenetic logistic regressions Infestation probability was modelled separately for each flea species, using species-level infestation status as the binary response ('1' if a species has been recorded as infested with the focal flea; '0' otherwise). We tested whether host attributes influence infestation probability using a hierarchical logistic regression with a logit link function. Predictor variables included host body mass, diet diversity, cohabitation diversity, anthropogenic habitat use, forest habitat use, dry bush habitat use, citation counts linked to 'parasite' and citation counts linked to 'ectoparasite'.
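As an illustration of how these predictors might be assembled, the following R sketch computes a Shannon diet diversity index and the grouped binary habitat indicators, then centres and scales a continuous trait. The `traits` data frame and its column names are invented for the example; they are not the authors' code or data.

```r
# Hypothetical species-by-trait table; column names are invented for illustration.
traits <- data.frame(
  species = c("Vulpes vulpes", "Rattus rattus", "Erinaceus europaeus"),
  body_mass_g = c(5500, 200, 800),
  introduced_vegetation = c(TRUE, TRUE, TRUE),
  artificial_terrestrial = c(TRUE, TRUE, FALSE),
  forest_cat = c(TRUE, FALSE, TRUE),
  shrubland = c(TRUE, FALSE, FALSE),
  desert = c(FALSE, FALSE, FALSE),
  savanna = c(TRUE, FALSE, FALSE),
  grassland = c(TRUE, TRUE, TRUE)
)

# Shannon index over proportional use of diet categories (zeros dropped).
shannon <- function(p) { p <- p[p > 0]; -sum(p * log(p)) }
shannon(c(invertebrates = 0.3, vertebrates = 0.5, fruit = 0.2))

# Grouped binary habitat indicators, following the groupings described above.
traits$anthropogenic <- as.integer(traits$introduced_vegetation | traits$artificial_terrestrial)
traits$forest <- as.integer(traits$forest_cat | traits$shrubland)
traits$dry_bush <- as.integer(traits$desert | traits$savanna | traits$grassland)

# Centre and scale a continuous predictor (divide by one SD), as in the regressions.
traits$log_mass_s <- as.numeric(scale(log(traits$body_mass_g)))
```

The model itself was fitted with the R package MCMCglmm, as detailed in the next paragraph. The toy-sized sketch below uses one common MCMCglmm encoding of the prior choices described there (residual variance fixed at 1, a χ²(1)-type prior for the phylogenetic variance, parameter expansion for threat status); the data, tree, and short chain are illustrative stand-ins, not the published settings.

```r
library(ape)
library(MCMCglmm)

set.seed(1)
tree <- rcoal(50)                                 # toy ultrametric phylogeny
dat <- data.frame(
  animal        = tree$tip.label,                 # links rows to tree tips
  infested      = rbinom(50, 1, 0.3),             # toy presence/absence response
  log_mass_s    = rnorm(50),                      # scaled continuous predictors
  cohab_div_s   = rnorm(50),
  anthropogenic = rbinom(50, 1, 0.5),
  threat        = factor(sample(c("LC", "NT", "VU"), 50, replace = TRUE))
)

prior <- list(
  R = list(V = 1, fix = 1),                       # residual variance fixed at 1 (binary outcome)
  G = list(
    G1 = list(V = 1, nu = 1000, alpha.mu = 0, alpha.V = 1),  # chi-squared(1)-type phylogenetic prior
    G2 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000)   # parameter-expanded threat-status prior
  )
)

m <- MCMCglmm(
  infested ~ log_mass_s + cohab_div_s * anthropogenic,       # interaction as in the Methods
  random = ~ animal + threat, family = "categorical",        # logit link for the binary response
  pedigree = tree, data = dat, prior = prior,
  nitt = 13000, burnin = 3000, thin = 10, verbose = FALSE    # short chain for illustration only
)
summary(m)
```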
We included an interaction between cohabitation diversity and anthropogenic habitat use to test if species that rely more on anthropogenic habitats have increased infestation risk (where species that use anthropogenic habitats and have low cohabitation diversity indices are assumed to rely more heavily on man-made habitats than those with higher diversity indices). To account for underlying structure driven by host phylogenetic relationships or recent population trends, host phylogeny and IUCN threat status were included as random grouping terms, allowing inferences for group-specific slopes whilst estimating between-group variation [ 73 ]. The model was fitted in a Bayesian framework with Markov Chain Monte Carlo (MCMC) sampling using the R package MCMCglmm [ 74 ]. We used parameter expansion (redundant multiplicative reparameterisation of the linear model) for the threat status variance component to reduce dependence among parameters and improve chain mixing [ 75 ]. For the phylogenetic variance component, we used a χ 2 distribution with one degree of freedom, which improves sampling properties and heritability estimates for binary outcomes [ 75 , 76 ]. Residual variance was fixed at 1, as this variance is non-identifiable when estimating binary outcomes [ 76 ]. All continuous predictors were centred and scaled (dividing by one SD) prior to regression. We ran two chains for 2,000,000 iterations each, removing 1,000,000 as 'burn-in' and with a thinning value of 1000 (2000 total posterior samples for each parameter). Chain mixing was inspected visually and with the Gelman-Rubin diagnostic (all values < 1.2). Autocorrelations were calculated to ensure independence of consecutive samples (all autocorrelations < 0.1). Because a limited number of records in our database included number of hosts sampled and number infested (e.g. 78 observations from 33 host species for C. felis ), power to detect prevalence patterns was low and we focused only on species-level associations. Parasite functional-phylogenetic host specificity We calculated observed and expected host specificity for each flea species to assess whether fleas use hosts that are more similar based on phylogeny, habitat use or diet than expected by chance. We used the functional-phylogenetic host specificity metric described by Clark & Clegg [ 77 ], which integrates host phylogenetic and ecological distances to quantify their relative influences on parasite host specificity. To describe similarity between host habitat and diet niches, we applied hierarchical clustering to Gower's dissimilarity matrices [ 78 ]; the first matrix incorporated host micro-habitat traits (9 terrestrial habitat use binary indicator variables) and macro-habitat traits (co-occurrence diversity; midrange longitude and midrange latitude; all as continuous variables). The second matrix incorporated two host diet traits (a fuzzy variable to describe the proportional use of 10 diet categories and the Shannon diet diversity continuous variable). All continuous variables were scaled by one SD and weighted by the inverse of their phylogenetic autocorrelations to capture variance in niches not captured by phylogeny. We used Abouheif's C, a metric that efficiently detects phylogenetic autocorrelation regardless of tree topology [ 79 , 80 ]. Distance matrices were built using weighted variables following Pavoine et al. [ 81 ]. Uncertainty in host-parasite analyses is important to incorporate when assessing host specificity and infestation risk [ 3 , 82 ].
Because different hierarchical clustering algorithms lead to different inferences [ 83 ], we generated eight dendrogram topologies from each matrix (habitat and diet) to capture uncertainty in relationships. Phylogenetic and dendrogram branch lengths were scaled (dividing distances by the maximum distance for each tree) so pairwise distances ranged from zero to one. Pairwise phylogenetic and niche distances ( PDist and FDist , respectively) were then used to calculate functional-phylogenetic distance ( FPDist ): $$FPDist = \left(a\,{PDist}^{p} + (1 - a)\,{FDist}^{p}\right)^{1/p}$$ (1) The weighting parameter a varies from zero to one; values approaching one give greater weight to PDist , while values approaching zero give greater weight to FDist . We set p = 2 to calculate squared Euclidean FPDist distances. Host FPDist distances were used to calculate a phylospecificity index ( STD* ) for each parasite, using species-level infestation data and following Clark & Clegg [ 77 ]. Null host distributions were created for each parasite by randomly drawing the observed number of infested species from the sampled host pool and calculating expected STD* . We allowed a to vary across a uniform distribution from zero to one to alter relative weights of host phylogeny and ecological niche in each draw. Expected STD* values were subtracted from observed to yield specificity differentials. These will be negative if hosts are more similar than expected (clustered) and positive if hosts are less similar (overdispersed). This was repeated 10,000 times to generate distributions of STD* differentials for each flea species. Separate analyses were conducted using host habitat and diet niche dendrograms. To allow for comparisons between flea species and account for uncertainty in the large number of C. felis host-parasite records (138 host species, see Results), we randomly selected 31 observed host species for C. felis analyses in each draw (equal to the number of host species observed to carry C. canis ; see Results). For all analyses, we report posterior modes and 95% credible intervals (highest posterior density intervals for logistic regressions; 2.5% and 97.5% quantiles for host specificity indices). Effects were considered 'significant' if credible intervals did not include zero. Of the 685 sampled mammal species (see Results), we were able to collate trait data for 639 species. These 639 species were included in logistic regressions (MCMCglmm cannot impute missing predictors), while the full set of 685 species was used for host specificity analyses (missing trait data were imputed from the full range of observed values for each trait).
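Eq. (1) and the null-model differential translate into a few lines of R. The distance matrices and host sets below are random toy inputs, and the mean-pairwise-distance index is a simplified stand-in for the Clark & Clegg STD* metric, not the published implementation.

```r
# Functional-phylogenetic distance, Eq. (1); PDist and FDist are pairwise
# distance matrices already scaled to [0, 1].
fpdist <- function(PDist, FDist, a, p = 2) {
  (a * PDist^p + (1 - a) * FDist^p)^(1 / p)
}

# Simplified specificity index: mean pairwise distance among a parasite's hosts
# (a stand-in for STD*, which additionally standardizes this quantity).
spec_index <- function(D, hosts) {
  sub <- D[hosts, hosts]
  mean(sub[lower.tri(sub)])
}

set.seed(1)
n <- 100
sp <- paste0("sp", seq_len(n))
PDist <- as.matrix(dist(matrix(rnorm(n * 3), n))); PDist <- PDist / max(PDist)
FDist <- as.matrix(dist(matrix(rnorm(n * 3), n))); FDist <- FDist / max(FDist)
dimnames(PDist) <- dimnames(FDist) <- list(sp, sp)

infested <- sample(sp, 31)                      # toy observed host set
diffs <- replicate(10000, {
  a <- runif(1)                                 # weight drawn uniformly, as in the Methods
  D <- fpdist(PDist, FDist, a)
  null_hosts <- sample(sp, length(infested))
  spec_index(D, infested) - spec_index(D, null_hosts)  # observed minus expected
})
quantile(diffs, c(0.025, 0.975))                # interval excluding zero => clustered/overdispersed
```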
Fleas from domestic pets are infesting native wildlife and feral animals on all continents except Antarctica, a new study reveals. The University of Queensland-led global study found domestic pet fleas feeding on species as diverse as Australian brushtail possums, coyotes, golden jackals and Iberian lynx. UQ School of Veterinary Science researcher Dr. Nicholas Clark said the potential for urban-wildlife parasite exchange represented a considerable threat, especially since fleas could transmit harmful bacteria including those causing bubonic plague and typhus. The study showed that so-called cat fleas – the main flea species found on domestic dogs and cats – were infesting more than 130 wildlife species around the world, representing nearly 20 per cent of all mammal species sampled. "Dog fleas are less widespread and to date they've been reported on 31 mammal species," he said. "Both flea species are commonly reported infesting free-roaming (feral) cats and dogs or introduced mammals such as red foxes, black rats and brown rats." The breakdown of barriers between wildlife and invasive species had increased the transfer of fleas between domestic animals and wildlife. Parasites from domestic pets are affecting wildlife worldwide. Credit: University of Queensland Dr. Clark said this was a threat to One Health – a concept from the US Centers for Disease Control and Prevention that recognises that the health of people is linked to the health of animals and the environment, and requires a global effort to address. University of Sydney researcher Associate Professor Jan Šlapeta said that despite the extensive risk for flea spillover between domestic and wild animals, there was a lack of knowledge on cat and dog flea distributions among wildlife. "This study is the first to uncover the magnitude and geographic spread of the wildlife occurrences of domestic dog and cat fleas," he said. "We have provided tangible evidence that invasive species contribute to the spread of the most common parasites from domestic pets." Dr. Clark said that reducing contact between wild species and domestic animals would be crucial to managing invasive flea infestations in wild animals.
10.1186/s13071-017-2564-z
Chemistry
New protein structures to aid rational drug design
Sachin S. Katti et al., Structural anatomy of Protein Kinase C C1 domain interactions with diacylglycerol and other agonists, Nature Communications (2022). DOI: 10.1038/s41467-022-30389-2 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-30389-2
https://phys.org/news/2022-05-protein-aid-rational-drug.html
Abstract Diacylglycerol (DAG) is a versatile lipid whose 1,2- sn -stereoisomer serves both as second messenger in signal transduction pathways that control vital cellular processes, and as metabolic precursor for downstream signaling lipids such as phosphatidic acid. Effector proteins translocate to available DAG pools in the membranes by using conserved homology 1 (C1) domains as DAG-sensing modules. Yet, how C1 domains recognize and capture DAG in the complex environment of a biological membrane has remained unresolved for the 40 years since the discovery of Protein Kinase C (PKC) as the first member of the DAG effector cohort. Herein, we report the high-resolution crystal structures of a C1 domain (C1B from PKCδ) complexed to DAG and to each of four potent PKC agonists that produce different biological readouts and that command intense therapeutic interest. This structural information details the mechanisms of stereospecific recognition of DAG by the C1 domains, the functional properties of the lipid-binding site, and the identities of the key residues required for the recognition and capture of DAG and exogenous agonists. Moreover, the structures of the five C1 domain complexes provide the high-resolution guides for the design of agents that modulate the activities of DAG effector proteins. Introduction The impressive diversity of DAG signaling output is mediated via its interactions with seven families of effector proteins that execute broad sets of regulatory functions 1 . These include protein phosphorylation (PKCs and PKDs 2 , 3 ); DAG phosphorylation (DGKs 4 ); RacGTPase regulation (Chimaerins 5 ); Ras guanine nucleotide exchange factor activation (RasGRPs 6 ); Cdc42-mediated cytoskeletal reorganization (MRCK 7 ); and assembly of scaffolds that potentiate synaptic vesicle fusion and neurotransmitter release (Munc13s 8 ). PKCs define a central DAG-sensing node in intracellular phosphoinositide signaling pathways that regulate cell growth, differentiation, apoptosis, and motility 9 . Shortly after their discovery, PKCs were identified as cellular receptors for tumor-promoting phorbol esters 10 that bind C1 domains in lieu of DAG. These observations, combined with the central roles executed by PKCs in intracellular signaling established their DAG-sensing function as an attractive target for therapeutic intervention, with considerable promise in the treatment of Alzheimer’s disease 11 , HIV/AIDS 12 , 13 , and cancer 14 , 15 . However, the structural basis of DAG recognition by the C1 domains has remained elusive, and the strategies for therapeutic agent design deployed to date all relied on modeling studies (reviewed in 16 ) based on the single available crystal structure of the C1 domain complexed to a ligand that does not activate PKC 17 . Herein, we have overcome these well-documented challenges 18 , 19 , 20 that have hindered the crystallization of extremely hydrophobic C1-ligand complexes for almost three decades. This advance enabled us to determine high-resolution structures of C1 bound to the endogenous agonist DAG and to each of four exogenous agonists of therapeutic interest. Collectively, our findings: (i) provide a structural rationale for the consensus amino acid sequence of DAG-sensitive C1 domains; (ii) provide insight into the origins of DAG sensitivity; and (iii) reveal how the unique hydrophilic/hydrophobic properties of the ligand-binding groove enable C1 domains to accommodate chemically diverse ligands. 
Results and discussion Structure of the C1Bδ-DAG complex C1 domains form complexes with their ligands in the membrane environment, and dodecylphosphocholine (DPC) is an effective membrane mimic that faithfully reproduces the functional properties of C1 domains with respect to ligand-binding interactions 21 , 22 . For structural studies, we formed the complex between the C1B domain of PKCδ (C1Bδ) and a synthetic DAG analog (di-octanoyl- sn -1,2-glycerol) in the presence of DPC micelles. The structure of the C1Bδ-DAG complex was refined to 1.75 Å with R work = 0.214 and R free = 0.246 (Supplementary Table 1 ). The H3 space group unit cell (89 × 89 × 219 Å, Fig. 1a ) contains a total of 72 protein chains contributed by nine asymmetric units (AUs) with eight C1Bδ molecules per AU (Fig. 1b ). Fig. 1: Arrangement of protein chains, lipid, and detergent molecules in the unit cell and the asymmetric unit of the C1Bδ-DAG complex crystal (PDB ID: 7L92). a The unit cell contains 72 DAG-complexed C1Bδ chains and 18/54 DAG/DPC molecules that peripherally associate with the protein surface. Structural Zn 2+ ions of C1Bδ are shown as black spheres. b The asymmetric unit comprises 8 C1Bδ protein chains with 8 DAG molecules captured within a well-defined groove, and 2/6 peripheral DAG/DPC molecules. c Space-filling representation of the two distinct DAG/DPC micelles. Full size image The organization of the protein crystal lattice is unprecedented as it contains two distinct lipid-detergent micelles located at symmetry elements in the crystal. Micelle 1 is composed of 12 DAG and 12 DPC ordered molecules, and micelle 2 is composed of 18 DAG and 6 DPC ordered molecules (Fig. 1c ), in addition to fully or partially disordered lipids. We speculate that the micelles help nucleate the crystallization, as all of the protein subunits are arranged with their lipid-sensing loops directly binding to DAG within micelles (Supplementary Fig. 1a, b ). Each C1Bδ protein chain has a DAG molecule captured within a groove formed by its membrane-binding loop regions (Fig. 1b and Supplementary Fig. 1c, d ). The well-defined glycerol ester moieties of this tightly bound 'intra-loop' DAG refined with B-factors (18–26 Å 2 ) comparable to those of the surrounding protein residues (17–23 Å 2 ). In addition, each AU contains two less-ordered peripheral DAG and six DPC molecules associated with the amphiphilic protein surface (Supplementary Fig. 1e, f ). There is little variability among DAG-complexed C1Bδ chains within the asymmetric unit, as evidenced by low pairwise backbone r.m.s.d. values of 0.3–0.6 Å (Fig. 2a ). The most variable region is located between helix α1 and the C-terminal cysteine residue that coordinates a structural Zn 2+ ion. According to solution NMR, this region undergoes conformational exchange on the μs-ms timescale 21 , 23 . We crystallized apo C1Bδ under conditions similar to those used for the C1Bδ-DAG complex (but without detergent) for a direct comparison. We found the structure to be identical to the previously reported apo structure (1PTQ 17 ) with a backbone r.m.s.d. of 0.4 Å. The apo structure superimposes well onto the structures of DAG-complexed C1Bδ, with the notable exception of the Trp252 sidechain. In the DAG complex, this sidechain is oriented towards the DAG tethered to the membrane-binding region, whereas in the apo C1Bδ it is oriented away from that region (Fig. 2a ).
This was a satisfying result as Trp252 is associated with the “DAG-toggling” behavior of the C1 domains, wherein a conservative Trp→Tyr substitution in the novel (or Ca 2+ -insensitive) and Tyr→Trp substitution in conventional (or Ca 2+ -activated) PKC isoforms significantly modulates apparent affinity for DAG 21 , 22 , 24 , 25 . Fig. 2: Stereospecificity of DAG binding by C1Bδ. a Backbone superposition of 8 DAG-complexed C1Bδ chains of the AU (cyan, PDB ID: 7L92) onto the structure of apo C1Bδ (sienna, PDB ID: 7KND). The sidechain of Trp252 reorients towards the tips of membrane-binding β12 and β34 loops upon DAG binding. DAG adopts one of the two distinct binding modes: “ sn -1” ( b ) or “ sn -2” ( c ). The formation of the C1Bδ-DAG complex in bicelles is reported by the chemical shift perturbations (CSPs) of the amide 15 NH ( d ) and methyl 13 CH 3 ( e ) groups of C1Bδ. Asterisks denote residues whose resonances are broadened by chemical exchange in the apo-state. The insets show the response of individual residues to DAG binding through the expansions of the 15 N– 1 H and 13 C– 1 H HSQC spectral overlays of apo and DAG-complexed C1Bδ. f 1 H– 1 H N Thr242 and Gly253 strips from the 3D 15 N-edited NOESY-TROSY spectrum of the C1Bδ-DAG-bicelle complex. The protein-to-DAG NOE pattern is consistent with the distances observed in the “ sn- 1” mode (Chain 5, light purple) but not the “ sn -2” mode (Chain 7, green). All distances are in Å and color-coded in the “ sn -1” complex to match the labels in the spectrum; “w” denotes water protons. The medium-range NOE that would be characteristic of the “ sn -2” complex is shown in red. Full size image Stereospecificity of DAG binding by C1Bδ Another significant feature of the C1Bδ-DAG structure is that it reveals the mechanism for the stereospecific binding of sn -1,2-diacylglycerol by C1 domains. DAG binds in the groove formed by the protein loops, β12 and β34 (Fig. 2a ) and can adopt two distinct binding modes: “ sn -1” and “ sn -2” 26 (Fig. 2b, c ). The “ sn -1” mode is predominant in the crystal as it is observed in six of the eight C1Bδ chains (Supplementary Fig. 2a ). We note that the DAG/DPC micelle 1 exclusively supports the “ sn -1” binding mode, while in micelle 2, both “ sn -1” and “ sn -2” interactions are found (Supplementary Fig. 1b ). In both modes, the DAG glycerol and ester moieties are anchored to the C1Bδ binding groove by four hydrogen bonds. Three are contributed by the C3-OH hydroxyl group that serves as the donor for the carbonyl oxygens of Thr242 and Leu251, and as the acceptor for the amide hydrogen of Thr242. The fourth bond involves the amide hydrogen of Gly253 and it is this bond that defines the binding mode. In the “ sn -1” position, the acceptor is the carbonyl oxygen O5 of the sn -1 ester group, whereas in the “ sn -2” position the acceptor is the carbonyl oxygen O3 of the sn -2 ester group (Fig. 2b, c ). A particularly important feature of the “ sn -1” binding mode is the involvement of the alkoxy oxygen O2 of the sn -2 DAG chain in the hydrogen bond with the amide protons of the Gln257 sidechain (Fig. 2b , Supplementary Fig. 2b ). Gln257 is part of the strictly conserved “QG” motif in all DAG-sensitive C1 domains 27 that is essential for agonist binding 19 , 21 , 28 , 29 , and whose µs-ms dynamics in apo C1 domains correlates with DAG-binding affinity 23 . The Gln257 sidechain also “stitches” the β12 and β34 loops together, forming hydrogen bonds with Tyr238 of loop β12 and Gly253 of loop β34 (Supplementary Fig. 
2b ). The simultaneous involvement of Gln257 in both DAG and intra-protein stabilizing interactions explains its essential role in the formation of the C1Bδ-DAG complex. C1Bδ binds DAG in the " sn -1" mode in solution To determine the predominant DAG-binding mode in solution, we conducted solution NMR experiments on C1Bδ-DAG assembled in isotropically tumbling bicelles. Complex formation is evident from the chemical shift perturbations of the C1Bδ backbone NH and the Trp252 NHε groups (Fig. 2d and Supplementary Fig. 3a ), and of the methyl groups of hydrophobic residues residing in the loop regions (Fig. 2e and Supplementary Fig. 3b ). Also evident is rigidification of the loops upon DAG binding as manifested by the appearance of backbone NH cross-peaks broadened in the apo-state due to their intermediate-timescale loop dynamics (Supplementary Fig. 3a ). 3D 15 N-edited [ 1 H, 1 H] NOESY experiments were performed in which C1Bδ was extensively deuterated at the non-exchangeable sites to suppress intra-protein NOEs. The two DAG-binding modes observed in the crystal structure are associated with drastically different protein-to-ligand NOE patterns. The signatures of the " sn -1" mode are predicted to be short-to-medium range NOEs between 1 H N (Gly253) and 1 H CH2 (C1), and a long-range NOE between 1 H N (Gly253) and 1 H CH2 (C3). This is precisely the pattern observed experimentally in the Gly253 strip (Fig. 2f ). The medium-range NOE between 1 H N (Gly253) and 1 H CH (C2) that would signify the " sn -2" mode (shown in red in the " sn -2" complex, Fig. 2f ) is not detected. Further confirmation of the " sn -1" DAG-binding mode is provided by the Thr242 strip whose 1 H N shows a characteristic medium-range NOE to 1 H CH (C2) and a short-range NOE to 1 H CH2 (C3). Thus, both the C1Bδ-DAG structure and solution NMR experiments report " sn -1" as the dominant mode of DAG binding to C1Bδ. Moreover, also consistent with the C1Bδ-DAG structure, several C1Bδ loop residues (including Gly253) show NOEs to the methylenes of acyl chains of either DAG or bicelle lipids (Supplementary Fig. 3c, d ). Roles of C1Bδ loops in lipid binding Inspection of C1Bδ loop regions in the C1Bδ-DAG structure reveals how exquisitely they are tuned to the chemical properties of diacylglycerol and surrounding lipids. The eleven C1Bδ residues involved in DAG interactions (Supplementary Fig. 4a, b ) can be grouped into three tiers that progressively increase the hydrophobicity of the ligand environment – as illustrated using the deconstructed binding groove of the representative " sn -1" complex (Fig. 3a ). The atoms from four "tier 1" residues, namely the backbone O and NH groups of Tyr238, Thr242, and Leu251, along with the sidechain of Gln257, define the polar surface of the groove floor that accommodates the C3-OH hydroxyl group of DAG. The hydrophobic sidechains of tier 1 residues point towards the core of the protein and away from the groove. The five "tier 2" residues accommodate the glycerol backbone and ester groups of DAG by creating an amphiphilic environment. Met239 and Ser240 backbone O and N atoms provide a polar environment for the O3 oxygen of DAG, while their sidechains face "outward" to potentially engage in lipid interactions. Pro241, a strictly conserved residue in DAG-sensitive C1 domains, makes non-polar contacts with the C2 carbon of DAG, and its Cδ/Cα are positioned sufficiently close to the DAG oxygens to form C–H…O interactions (Supplementary Fig. 4c ).
On the opposite side of the groove, the polar N–H group of Gly253 hydrogen bonds to the O5 oxygen, while the hydrophobic Leu250 sidechain engages in non-polar contacts with the C1 carbon of DAG. Fig. 3: Roles of C1Bδ loops in lipid binding. a Polar backbone atoms and hydrophobic sidechains of DAG-interacting C1Bδ residues create a binding site whose properties are tailored to capture the amphiphilic DAG molecule. This is illustrated through the deconstruction of the “ sn-1 ” binding mode into three tiers that accommodate the glycerol backbone (tier 1), the sn -1/2 ester groups (tier 2), and the acyl chain methylenes (tier 3). b Residue-specific lipid-to-protein PRE values of the amide protons, 1 H Γ 2 , indicate that loop β34 is inserted deeper into the membrane than β12. The PRE value for Trp252 is that for the indole NHε group. Cross-peaks broadened beyond detection in paramagnetic bicelles are assigned an arbitrary value of 120 s −1 . His270 and Lys271 cross-peaks are exchange-broadened and therefore unsuitable for quantitative analysis (open circles). The PRE values were derived from 1 H N transverse relaxation rate constants collected on a single sample in the absence (diamagnetic) and presence (paramagnetic) of 14-doxyl PC. The error was estimated using the r.m.s.d. of the base plane noise. The inset shows the top view of the “ sn-1 ” mode C1Bδ-DAG complex color-coded according to the hydrophobicity. Full size image While tier 1 and 2 residues are contributed by both loops, tier 3 residues all reside on loop β34 (Fig. 3a ). Trp252 and Leu254 sidechains make non-polar contacts with the methylenes of the DAG sn -1 and sn -2 acyl chains (Supplementary Fig. 4a ) that protrude through the depression formed by Gly253 (Figs. 2 b, 3a ). These interactions orient the DAG acyl chains in a position to complete the hydrophobic rim of the C1Bδ domain formed by loop β34 residues Leu250, Trp252, Leu254, Val255, and loop β12 residues Met239 and Pro241. This is illustrated in the top views of the “ sn -1” and “ sn -2” C1Bδ-DAG complexes (right panels of Fig. 2b, c ). Positioning the acyl chains in close proximity to Trp252 and Leu254 sidechains creates a continuous hydrophobic surface tailored for C1Bδ interactions with surrounding lipids. In this context, the significance of Trp252 sidechain reorientation upon DAG binding (Fig. 2a ) becomes clear, as this conformation ensures the continuity of the hydrophobic surface. A direct manifestation of the lipophilicity of the DAG-bound C1Bδ surface is the peripheral association of DAG and detergents observed in all crystal structures of the complexes (Supplementary Figs. 5 , and 6 ; Supplementary Note 2 ). To directly identify the regions of the C1Bδ-DAG that insert into the bilayer and to independently validate the essential role of the β34 loop in membrane partitioning, paramagnetic relaxation enhancement (PRE) experiments were performed using a paramagnetic lipid (14-doxyl PC) incorporated into host bicelles. PREs arise due to the spatial proximity of the unpaired electron of the lipid probe to protein nuclear spins and manifest themselves as extensive line broadening in the NMR spectra. PRE data for the C1Bδ-DAG amide hydrogens report that, while both loops undergo bilayer insertion, loop β34 penetrates deeper into the bilayer than does loop β12 (Fig. 3b ). The PRE data are entirely consistent with the hydrophobicity patterns observed in the crystal structure (Fig. 
3b , inset), and project that C1Bδ assumes a tilted position relative to the membrane normal upon DAG binding. Of note, the deconstructed binding groove of the " sn -2" complex shows a similar three-tiered arrangement of hydrophilic and hydrophobic features equally suited to accommodate DAG in the " sn -2" orientation (Supplementary Fig. 4b, d ). However, because of the differences in the hydrogen-bonding patterns (left panels of Fig. 2b, c ) the DAG position is shallower than in the " sn -1" mode, where the DAG C1 carbon resides deeper in the pocket by ~1.5 Å. While our data support " sn -1" as the dominant DAG interaction mode, the presence of the " sn -2" mode in the crystal structure (Supplementary Figs. 1b , 2a ) suggests that it too is sampled transiently during the DAG capture step. C1Bδ complexes with exogenous PKC agonists The DAG-sensing function has been an active target for pharmaceutical modulation of PKC activity. To determine the structural basis of how C1 domains mediate the response of DAG effector proteins to potent exogenous agonists, we selected four such agonists that evoke distinct cellular responses. Phorbol 12,13-dibutyrate (PDBu) is one of the potent tumor-promoting phorbol esters widely used to generate carcinogenesis models through PKC dysregulation 30 . Prostratin, a non-tumorigenic phorbol ester, is a preclinical candidate for inducing latency reversal in HIV-1 infection 31 , 32 . Both PDBu and Prostratin share a tetracyclic tigliane skeleton of 5-7-6-3 membered rings. Ingenol-3-angelate (I3A) is a clinically approved agent for the topical treatment of actinic keratosis (Picato®) with a phorbol-related 5-7-7-3 fused ring structure 33 . AJH-836 is a high-affinity synthetic DAG lactone with considerable promise as an isoenzyme-specific PKC agonist (Supplementary Note 3 ) 34 . In the membrane-mimicking lipid bicelle environment, all four ligands bind to the loop region of C1Bδ—as evidenced by the chemical shift perturbations and rigidification of the corresponding residues (Supplementary Fig. 7 ). Crystals of each C1Bδ-ligand complex formed in the presence of 1,2-diheptanoyl- sn -glycero-3-phosphocholine (DHPC) and all yielded high-resolution structures (1.1–1.8 Å, Supplementary Tables 1 , 2 , Fig. 4 ) with well-defined electron densities of ligands (Supplementary Fig. 8a ). DHPC molecules peripherally associate with the protein surface outside loop β12 (I3A complex), loop β34 (Prostratin and AJH-836 model 7LF3), or both loops (PDBu and ligand-free chain of the AJH-836 model 7LEO) (Fig. 4a–e ). The DHPC-protein interactions involve hydrogen bonds, both direct and water-mediated, as well as hydrophobic contacts. The structure of the AJH-836 complex, in which one protein chain has a bound ligand and the other, ligand-free, chain interacts peripherally with DHPC, highlights the versatility of interactions that Trp252 at the "DAG-toggling" position can form with lipids (Fig. 4e ). Fig. 4: Peripheral DHPC molecules in the C1Bδ-ligand complexes. DHPC molecules peripherally associate with the membrane-binding loop regions of C1Bδ complexed to a PDBu; b prostratin; c ingenol-3-angelate; d AJH-836 (one molecule per AU), and e AJH-836 (two molecules per AU). In e , one protein chain is ligand-free (color-coded green) and has three DHPC molecules, labeled 1 through 3, that cap the membrane-binding loop region. f The versatility of potential Trp252-lipid interactions, exemplified by the Trp252 sidechain from the ligand-free C1Bδ monomer ( e green).
In addition to non-polar contacts with the hydrophobic lipid moieties, the Trp sidechains can engage in H-bonding, cation–π, and CH–π interactions. Full size image The structures reveal that all four ligands (Fig. 5a ) bind to the same C1Bδ site as DAG (Fig. 5b ). However, none of them form a hydrogen bond with the Gln257 sidechain that is observed in the DAG complex structure (Fig. 2b ). The fused ring structures of PDBu, Prostratin, and I3A intercalate between loops β12 and β34. The methyl groups attached to rings A, C, and D, and the apical regions of rings A and C (Prostratin and I3A only) protrude outwards from the groove (Supplementary Fig. 8b ). These groups collectively form a hydrophobic ridge that traverses the groove diagonally from Met239 of loop β12 to Trp252 of loop β34 (Fig. 5b ). The AJH-836 lactone ring also intercalates between the loops and is fully sequestered. Fig. 5: Structures of the C1Bδ-ligand complexes reveal the interaction modes of PKC agonists. a Chemical structures and polar groups involved in hydrogen-bonding interactions with C1Bδ of PKC agonists. The numbering of oxygen atoms follows the ALATIS system. b 3D structures of the complexes (PDB IDs from left to right: 7KNJ, 7LCB, 7KO6, and 7LF3) showing the ligand placement in the binding groove. The shape of the ligands’ hydrophobic cap, viewed from the top of the loop region, is outlined in maroon. The hydrophobic ridge that traverses the groove is marked with the maroon dashed line. c Ligand interactions with Thr242, Leu251, and Gly253 (underlined) that recapitulate the DAG hydrogen-bonding pattern are shown with red dashed lines. Blue dashed lines show ligand-specific hydrogen bonds, including the intra-ligand ones in PDBu and Prostratin. The depression created in loop β34 by Gly253 in the PDBu and Prostratin complexes accommodates DHPC molecules in the crystal. Full size image The arrangement and identity of the R groups provide the unique shape of the hydrophobic “cap” over the loop region (Fig. 5b , surface representation). In the PDBu complex, the butyryl R1 and R2 groups are arranged in a T-shape relative to the ridge. Prostratin, having only a single acetyl R2 group, forms a significantly smaller hydrophobic cap. Neither ligand covers the depression in the β34 loop formed by Gly253—thereby creating an opportunistic interaction site for DHPC molecules in the crystal (Figs. 4 a, b, and 5c ). In contrast, bound I3A and AJH-836 form a contiguous hydrophobic surface with loop β34, while leaving loop β12 exposed and available for potential interactions with lipids (Figs. 4 c and 5b ). The angelyl and pivaloyl R1 groups of I3A and AJH-836, respectively, occupy the depression created by Gly253 and engage in hydrophobic contact with the sidechains of the bracketing residues Leu254 and Trp252 (Supplementary Fig. 8c ). This arrangement presents as an overall L-shape of the I3A hydrophobic cap. The hydrophobic surface of AJH-836 comprises highly exposed methyl groups of the pivaloyl R1 and the branched alkylidene R2 groups (Supplementary Fig. 8b ) that create a ridge at an ~20° angle with the long axis of the groove. Thus, each ligand modulates the shape and the hydrophobicity of the C1Bδ membrane-binding region in its own unique way by engaging its R groups in hydrophobic interactions with the same set of protein residues (Met239, Leu250, Trp252, Leu254, and V255). 
The hydrophilic groups of the ligands orient towards the polar regions of the groove and recapitulate the DAG hydrogen-bonding pattern: the carbonyl oxygen (O3 in Prostratin and AJH-836; O4 in PDBu and I3A) and the hydroxyl group O1-H form hydrogen bonds with Gly253, Thr242, and Leu251 (Fig. 5c , red dashed lines). In addition, Prostratin and PDBu engage a second hydroxyl group (O4-H and O5-H, respectively) in hydrogen bonding with the carbonyl oxygen of Gly253. In I3A, there are two additional hydroxyls (O2-H and O5-H) that hydrogen bond to the carbonyl oxygens of Leu251 and Gly253, respectively (Fig. 5c , blue dashed lines). AJH-836 is the only ligand of the four with no additional ligand-protein hydrogen bonds compared to DAG. The PDBu and Prostratin structures explain the findings of the previous structure-activity studies of phorbol ester derivatives 20 , 35 , 36 that identified the essential role of the hydrophobic substituent, R1 at position C-12, in the PKC membrane insertion and activation. The increased potencies 35 of 12,13-di- and 12-mono-esters with hydrophobic substituents (e.g., PDBu and phorbol 12-myristate 13-acetate (PMA)) relative to 12-deoxyphorbol esters (e.g., Prostratin) can be ascribed to two factors. First, the C-12 R1 group complements the hydrophobic rim of loop β34 which is the primary driver of C1 membrane binding. Second, both R1 and R2 groups are involved in direct interactions with lipids, and thereby contribute to the stabilization of the membrane complex. Molecular dynamics simulations of the C1-ligand complexes suggest that both the chemical identity of the R1 group 37 , 20 and of the ligand itself (PMA vs. DAG) 37 influence the depth of C1 membrane insertion. The I3A complex provides a structural rationale for the relative potencies of ingenol derivatives reported in the HIV-1 latency reversal studies 13 . The most potent ingenols exhibit conformationally restricted R1 substituents that can be accommodated in the depression formed by Gly253 and bracketed by Trp252 and Leu254. In the lactone complex, a combination of the E isomer and its “ sn -1” binding mode (Supplementary Note 3 ) affords a favorable arrangement of bulky R1 and R2 groups within the hydrophobic rim of C1Bδ. This configuration likely constitutes the structural basis for why AJH-836 displays its marked selectivity for novel versus conventional PKC isoforms 34 . Comparative analyses of the C1Bδ-agonist structures Comparative analyses of our DAG- and ligand-bound structures (Fig. 6 ) demonstrate why DAG-sensing C1 domains are capable of binding chemically diverse ligands with high affinity—a property that is driving the design of pharmacological agents. The amphiphilic binding groove, with progressively increasing hydrophobicity towards the rim of the membrane-binding region, is “tuned” to accommodate ligands with matching properties. The placement of three oxygen-containing groups, highlighted in Fig. 6b–e , ensures that the ligand is anchored to the polar groove regions. The ring structure invites intercalation between the C1 membrane-binding loops, while the hydrophobic substituents that protrude outward from the groove, akin to the DAG acyl chains, contribute to the membrane anchoring of the complex. Fig. 6: Structural analysis of C1Bδ-agonist complexes identifies three key oxygen-containing groups and the roles of conserved hydrophobic residues. a Loop region of the backbone-superimposed C1Bδ complexes (pairwise r.m.s.d. <0.6 Å relative to chain 5 of the “ sn -1” DAG complex). 
Oxygen-containing functional groups involved in the interactions with C1Bδ are highlighted by squares. b – e Pairwise comparison of the binding poses of b PDBu; c prostratin; d ingenol-3-angelate; and e AJH-836 relative to that of DAG in the binding groove. Hydrophobic sidechains that envelope the ligands and form the rim of the membrane-binding region are also shown. f Amino acid sequence of C1Bδ and the consensus sequence of DAG-sensitive C1 domains. g – i A subset of conserved hydrophobic residues that form a “cage”-like arrangement around the ligands, with a potential to form CH–π interactions in addition to the apolar contacts. The rotameric flip of Trp252, illustrated using DAG ( g ) and PDBu ( h ) complexes, is essential for creating a contiguous hydrophobic surface. i The Trp252 sidechain remains in its apo-state rotameric conformation in the C1Bδ-P13A complex that was crystallized in the absence of lipids/detergents. j Top view of the C1Bδ-P13A loop region showing the contribution of the C12-OH group to the hydrophilic character of the P13A “cap”. Full size image The comparative analysis also enables the assignment of specific functional roles to the residues of the C1 domain consensus sequence (Fig. 6f and Supplementary Note 1 ) 27 . Two groups of residues are of particular significance. The first group consists of the four strictly conserved non-Zn 2+ coordinating residues: Pro241, Gly253, Gln257 that directly interact with DAG (Figs. 2 b, c, 3a , Supplementary Fig. 4a, b, d ); and Gly258 which ensures conformational flexibility of the β34 loop (Supplementary Note 1 ). The second group comprises three hydrophobic residues: Leu250, Trp252, and Leu254 of loop β34 that show the deepest membrane insertion (Fig. 3b ). Together with strictly conserved Pro241 and the consensus aromatic residue Phe243, these three residues form the outside hydrophobic “cage” that surrounds the various bound ligands (Fig. 6g, h ). The spatial arrangement of the cage residues not only shields the hydrophilic ligand moiety from the hydrophobic membrane environment (Figs. 2 b, c, and 4 ) but also enables the loop region of C1Bδ to effectively interface with peripheral lipids. The latter function is aptly exemplified by Trp252, whose highly lipophilic indole sidechain reorients towards the loop region upon the formation of C1-ligand complexes in the membrane-mimicking environment. Given that all C1 complexes reported in this work are with potent PKC agonists, we posit that the reorientation of the Trp252 in C1Bδ (Fig. 2a ) is an essential aspect of the overall mechanism of membrane recruitment and agonist capture. Indeed, our previous work suggests that the Trp252 sidechain reorients towards the membrane-binding loops upon initial partitioning of C1Bδ into the hydrophobic environment prior to the agonist binding 22 . Once this “pre-DAG” C1 complex 38 is formed, the Trp sidechain plays an important role in the formation of the ligand-binding site through the completion of the hydrophobic “cage” (Fig. 6g, h ). Therefore, the structural change associated with the Trp252 flip underlies two processes that take place in the hydrophobic environment—membrane partitioning and binding of the membrane-embedded ligand. Consistent with this notion, no rotameric flip was observed in the previously determined structure of the C1Bδ-phorbol-13-acetate (P13A) complex that was obtained under crystallization conditions lacking membrane mimics (Fig. 6i ) 17 . 
P13A is an extremely weak agonist of PKC 39 , 40 that differs from Prostratin with respect to a single hydroxyl group at the C-12 position. This OH group is ~10 Å away from Trp252 and is unlikely to influence the sidechain conformation directly (Fig. 6i ). Rather, it imparts a significant hydrophilic character onto the "cap" formed by the ligand over the loop regions of C1Bδ (Fig. 6j ). Given the relative hydrophilicity of the ligand (logP of P13A is 0.2, as compared with Prostratin's 0.8), and of the complex itself, there is no thermodynamic incentive for the former to partition into the membranes in a process that involves the Trp252 sidechain reorientation. In addition to its dual role in membrane recruitment and ligand capture, the Trp252 conformation and interaction patterns are directly relevant to the question of DAG sensitivity of PKC isoforms—i.e., the parameter that defines the intrinsic thresholds of DAG-mediated activation (Supplementary Note 2 ). Our structural data (supported by previous NMR work 21 , 22 , 23 ) suggest that the higher hydrophobicity and lipophilicity of Trp confer thermodynamic advantages and hence higher DAG sensitivity to the C1B domains of novel PKC isoforms relative to conventional PKC isoforms that carry a Tyr at the equivalent position. The atomistic details of our high-resolution structures of five C1Bδ-ligand complexes, particularly with regard to the arrangement of the ligand hydrophobic substituents within the binding groove and the assignment of specific functional roles to the key C1 residues, provide unprecedented insight into the structural basis of DAG sensing. These also provide key information for guiding the design of therapeutic agonists that selectively target proteins within the DAG effector family. The critical advance that resolved the Gordian knot of C1-agonist crystallization was the inclusion of membrane-mimicking agents that provide the hydrophobic environment required to support high-affinity interactions. This general strategy now paves the way for the structural characterization of other C1-agonist complexes. Methods Expression, purification, and isotope enrichment of C1Bδ The cDNA segment encoding the C1B domain of protein kinase C (PKC) δ isoenzyme from Rattus norvegicus (amino acids 229-281) was sub-cloned into the pET SUMO expression vector (Invitrogen). The His 6 -SUMO-C1Bδ fusion protein was expressed in Escherichia coli BL21(DE3) Rosetta2 cells (Millipore Sigma). For the natural abundance preparations, the cells were grown in LB broth until OD 600 = 0.6, followed by the induction of protein expression with 0.5 mM IPTG at 18 °C for 16 h. For the isotopically enriched C1Bδ preparations, we used the resuspension method 41 in the M9 minimal medium supplemented with 15 NH 4 Cl and D- 13 C 6 -glucose as nitrogen and carbon sources, respectively. To obtain [~80% 2 H, U- 15 N, 13 C]-enriched C1Bδ, M9 was prepared in 100% D 2 O and additionally supplemented with 1 g of 15 N, 13 C, 2 H ISOGRO ® (Sigma). Cell harvesting, lysis, and C1Bδ purification were carried out as previously described 21 , 22 . The purified protein was stored at 4 °C in the "storage buffer" comprising 50 mM MES at pH 6.5, 150 mM KCl, and 1 mM TCEP, until further use. For NMR experiments, C1Bδ was exchanged into an "NMR buffer" composed of 20 mM d 4 -Imidazole at pH 6.5, 50 mM KCl, 0.1 mM TCEP, 0.02% NaN 3 , and 8% D 2 O.
Preparation of isotropically tumbling bicelles Chloroform solutions of long-chain 1,2-dimyristoyl- sn -glycero-3-phosphocholine (DMPC) and short-chain 1,2-dihexanoyl- sn -glycero-3-phosphocholine (DHPC) (Avanti Polar Lipids), or their deuterated versions d 54 -DMPC (Avanti Polar Lipids) and d 40 -DHPC (Cambridge Isotope Laboratories), were aliquoted and dried extensively under vacuum. The bicelles of q = 0.5 (defined by the DMPC to DHPC molar ratio of 1:2) were prepared by suspending the dried lipid films in the NMR buffer, as previously described 42 . Additional lipid components, 1,2-dimyristoyl- sn -glycero-3-phospho-L-serine (DMPS) and di-octanoyl- sn -1,2-glycerol (DAG), were incorporated for all DAG-binding experiments, to produce the final molar ratios of DMPC:DMPS:DAG = 75:15:10. For the paramagnetic relaxation enhancement (PRE) measurements, 1-palmitoyl-2-stearoyl-(14-doxyl)- sn -glycero-3-phosphocholine (14-doxyl PC) was incorporated into bicelles to give on average ~1 molecule per leaflet. DMPS, DAG, and 14-doxyl PC were obtained from Avanti Polar Lipids. The lipid concentrations of final bicelle preparations were measured using the phosphate determination assay 43 . NMR spectroscopy All NMR experiments were carried out at 25 °C (calibrated with d 4 -methanol) on the Avance III HD NMR spectrometer (Bruker Biospin), operating at a 1 H Larmor frequency of 800 MHz (18.8 T) and equipped with a cryogenically cooled probe. The data were processed with NMRPipe 44 and analyzed with NMRFAM-Sparky 45 . The backbone amide ( 15 NH) and methyl ( 13 CH 3 ) resonance assignments of C1Bδ were obtained from our previous work 22 and the BMRB entry 17112 46 . NMR detection of C1Bδ-agonist complex formation in bicelles The ternary C1Bδ-agonist-bicelle complexes were assembled in the "NMR buffer" by combining solutions of the isotopically enriched protein, bicelles, and agonists. DAG was incorporated at the bicelle preparation stage (vide supra). The 30–40 mM ligand stock solutions were prepared in d 6 -DMSO from the crystalline solids (Phorbol-12,13-dibutyrate, Prostratin, and Ingenol-3-angelate, all from Sigma-Aldrich ® ; and AJH-836, custom synthesized in Prof. Jeewoo Lee's laboratory). The samples for the [ 15 N, 1 H] HSQC ([ 13 C, 1 H] HSQC) experiments contained 0.4 mM [U- 15 N, 13 C]-enriched C1Bδ and 100 mM bicelles (0.3 mM [~80% 2 H, U- 15 N, 13 C]-enriched C1Bδ and 80 mM deuterated bicelles). At these concentrations, the bicelle particles are approximately equimolar to protein. The protein-to-ligand molar ratios were 1:8 (DAG), 1:1.2 (PDBu/Prostratin/Ingenol-3-angelate), and 1:6 (AJH-836). The residue-specific chemical shift perturbations (CSPs, Δ) between the apo and agonist-bound C1Bδ were calculated using the following equation: $$\Delta = \sqrt{(\Delta\delta_{H})^{2} + (\alpha\,\Delta\delta_{X})^{2}}$$ (1) where Δ δ H and Δ δ X are the chemical shift changes of 1 H and X ( 15 N or 13 C), respectively; and α = 0.152 ( 15 N) or 0.18 ( 13 C). PRE and NOESY experiments The residue-specific PRE values of 1 H N resonances, Γ 2 , were determined using relaxation experiments with [ 15 N, 1 H] TROSY-HSQC detection. All data were collected in the interleaved manner, with a two-point (0 and 10 ms) relaxation delay scheme 47 . The diamagnetic 1 H N transverse relaxation rate constants, R 2,dia , were obtained on the NMR sample comprising 0.4 mM [~80% 2 H, U- 15 N, 13 C]-enriched C1Bδ, 100 mM (total lipid) bicelles, and 3.2 mM DAG.
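For concreteness, Eq. (1) reduces to a few lines of R; the α factors are those quoted in the text, while the shift values in the example call are invented.

```r
# Residue-specific CSP (Eq. 1) between apo and agonist-bound spectra.
csp <- function(dH, dX, nucleus = c("15N", "13C")) {
  alpha <- if (match.arg(nucleus) == "15N") 0.152 else 0.18  # scaling factors from the text
  sqrt(dH^2 + (alpha * dX)^2)
}

# Example with invented amide shifts (ppm) for one residue, apo vs. bound.
csp(dH = 8.32 - 8.25, dX = 118.4 - 117.9, nucleus = "15N")
```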
The paramagnetic sample was prepared by a 1-hr room-temperature incubation of the diamagnetic sample with a dry film of 14-doxyl PC, and subsequently used to obtain the 1 H N R 2,para values. The error was estimated using the r.m.s.d. of the base plane noise. The Γ 2 values were calculated using the following equation: $$\Gamma_2 = R_{2,\mathrm{para}} - R_{2,\mathrm{dia}}$$ (2) A 3D 15 N-edited NOESY-TROSY experiment was carried out with a mixing time of 120 ms on a sample containing 0.4 mM [~80% 2 H, U- 15 N, 13 C]-enriched C1Bδ and 100 mM (total lipid) deuterated bicelles that contained 3.2 mM DAG. Inter-molecular C1Bδ-DAG 1 H– 1 H NOEs were identified based on the available assignments of the protein amide resonances and the characteristic 1 H chemical shifts of the sn -1,2 stereoisomer of DAG. The DAG chemical shifts were obtained using the 13 C- 1 H HSQC spectrum of the 0.4 mM [ 13 C,C1-C3]-racemic DAG (custom synthesized by Avanti Polar Lipids) in 10 mM d 38 -dodecylphosphocholine (DPC, Sigma) micelles, and matched the literature values 48 . Crystallization of apo C1Bδ and its complexes with agonists Apo C1Bδ and its complexes with agonists were crystallized at 4 °C using a hanging-drop vapor-diffusion method. The protein was concentrated at 2–2.3 mM (13–15 mg/ml) in the “storage buffer”. 200 mM stock solutions of DPC (Sigma) and DHPC micelles were prepared using the procedure described previously 21 . To form the C1Bδ-DAG complex, the reagents were combined at a molar ratio of C1Bδ:DAG:DPC = 1:1.2:10. The C1Bδ-DAG crystals were obtained in 2 days from the precipitant comprising 0.2 M ammonium acetate, 0.1 M sodium phosphate at pH 6.8, and 15% isopropanol. These crystals were used as seeds for the hanging drop containing C1Bδ:DAG:DPC = 1:1.2:5 and allowed to grow for 2 more days. To form the C1Bδ-ligand complexes, the appropriate reagents were combined at a molar ratio of C1Bδ:Ligand:DHPC = 1:1.2:10, where the ligand was PDBu, Prostratin, Ingenol-3-angelate, or AJH-836. To achieve full saturation of C1Bδ with AJH-836, we used an additional condition with C1Bδ:AJH-836:DHPC = 1:1.5:10. For all C1Bδ-ligand complexes, the precipitant was 0.2 M ammonium acetate, 0.1 M sodium phosphate at pH 6.8, and 30% isopropanol. The same precipitant composition was used to crystallize apo C1Bδ. Crystals of the apo C1Bδ and its ligand complexes appeared either overnight (PDBu, Prostratin, and Ingenol-3-angelate complexes) or after ~1–2 days (AJH-836 complex). Prior to data collection, the crystals were flash-frozen in the following cryo-protectant solutions: (i) 15% sucrose and 25% PEG 4000 in 0.1 M sodium phosphate at pH 7.2 (apo C1Bδ and its complexes with PDBu, Prostratin, and Ingenol-3-angelate); and (ii) 20% MPD (Hampton Research) in mother liquor (C1Bδ complexes with DAG and AJH-836). Data collection, processing, and model building of C1Bδ-agonist complexes For the apo structure and all the ligands except AJH-836, the data were collected on a home-source Cu Kα X-ray generator. The data were indexed, scaled, and integrated with the PROTEUM software 49 . For the C1Bδ-DAG complex crystal, the data were collected at the Argonne National Lab APS synchrotron, beamline 23ID, and for the C1Bδ-AJH-836 complex crystal at the ALS synchrotron at Berkeley, beamline BL502. The data were indexed, integrated, and scaled by the beamline auto-processing pipeline (XDS 50 , POINTLESS 51 , and AIMLESS 52 software packages).
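For concreteness, the two NMR observables defined in equations (1) and (2) above reduce to a few lines of R. This is a minimal sketch, not the authors' code: only the α weights and the two-point (0 and 10 ms) delay scheme come from the text, while all function and vector names are illustrative placeholders.

```r
# Equation (1): per-residue chemical shift perturbation between apo and bound C1B-delta.
# d_h, d_x: 1H and X (15N or 13C) chemical shift changes in ppm (hypothetical vectors).
csp <- function(d_h, d_x, nucleus = c("15N", "13C")) {
  alpha <- switch(match.arg(nucleus), "15N" = 0.152, "13C" = 0.18)
  sqrt(d_h^2 + (alpha * d_x)^2)
}

# Equation (2): PRE from the two-point relaxation scheme. R2 is estimated per
# residue as ln(I_0 / I_10ms) / delay, separately for each sample.
two_point_r2 <- function(i_0, i_10, delay = 0.010) log(i_0 / i_10) / delay
# gamma2 <- two_point_r2(i0_para, i10_para) - two_point_r2(i0_dia, i10_dia)
```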
Structures were solved by molecular replacement, using the PDB entry 1PTQ as a search model 17 . This was followed by iterative cycles of refinement with PHENIX.REFINE and manual building in COOT 53 , 54 . Polder omit maps were generated using PHENIX 55 . Ligands were created using ELBOW.BUILDER and JLIGAND 56 , 57 . Structural analyses were carried out using UCSF Chimera 58 , CCG Molecular Operating Environment (MOE) 59 , and LigPlot + 60 . Although the diffraction data of the C1Bδ-DAG complex could be indexed and scaled in the F23 space group, the structure was solved in the lower-symmetry H3 space group because of the non-uniform lipid molecules in the solvent channels. We have built only the lipids for which well-ordered electron density of the head groups was present. The peripheral lipids and detergents are less ordered, with average B factors of 71 and 52 Å 2 for DAG and DPC, respectively (the average protein B factor is 36 Å 2 ). However, there are likely more lipids and/or detergents in the solvent channels, as evidenced by the multitude of positive difference electron density peaks that are larger than water, some reaching across the symmetry axis. The coordinates of all structures were deposited in the Protein Data Bank. The accession numbers and statistics are given in Supplementary Tables 1 and 2 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support this study are available from the corresponding author upon reasonable request. The atomic coordinates and structure factors are deposited in the PDB under the accession codes 7KND (apo), 7L92 (DAG), 7LEO (AJH-836), 7LF3 (AJH-836), 7KNJ (PDBu), 7LCB (Prostratin), and 7KO6 (I3A).
In a major advance for rational drug design, a Texas A&M AgriLife team has described several protein structures of a crucial player in cellular processes. The advance could bring new ideas for treatments of diseases such as Alzheimer's, AIDS, cancer and others. Specifically, the work describes the C1 domain of protein kinase C, PKC, which helps regulate the protein's activity in organisms. In the structures, the C1 domain wraps around different molecules of intense therapeutic interest, providing the first reliable, atomic-resolution guide for designing drug candidates. Published May 16 in Nature Communications, the research was directed by Tatyana Igumenova, Ph.D., associate professor in the Department of Biochemistry and Biophysics in the Texas A&M College of Agriculture and Life Sciences. The project's primary author is Sachin Katti, Ph.D., a postdoctoral fellow working with Igumenova. The study involved a collaboration with Inna Krieger, Ph.D., research assistant professor, and James Sacchettini, Ph.D., professor, both in the Department of Biochemistry and Biophysics. One of the most sought-after protein structures A healthy cell responds to chemical signals in precise, intricate ways. Receiving chemical inputs from the cell's environment and forwarding them to the central control systems within the cell nucleus is the task of specialized proteins such as PKC. Improper PKC activity shows up in many human diseases. As a result, there is much interest in finding ways to fine-tune PKC activity with drugs. The design of such drugs will offer new approaches for treating Alzheimer's disease, AIDS, cancer and more. "Protein kinase C is one of the most intensely studied proteins in cell biology and pharmacology," Igumenova said. "A major hurdle has been the lack of precise structural information to guide drug design efforts." One complication for drug design is that the PKC family has 11 members. Different PKC family members can have opposite physiological effects, so a successful drug candidate must be selective about which PKC it targets. To do that, drug candidates must fit a target PKC like a key to a lock. But determining the 3D structure of a PKC "on-switch"—the C1 domain—bound to PKC activators has not been easy. Protein structures are typically solved using X-ray crystallography. The technique involves using X-rays to determine the position of atoms in a crystal. For this method, researchers need to create conditions where the protein of interest crystallizes. Yet intense efforts in many research labs over the past three decades failed to yield crystals of C1 domains bound to relevant ligands. Because of this lack of progress, multiple researchers pronounced the task impossible, Igumenova said. Crystals of a domain of protein kinase C spontaneously formed in Katti’s NMR sample tube. Credit: Sachin Katti. Solving a 30-year problem Accepting the problem as challenging, Katti and Igumenova decided instead to study the molecules in solution using nuclear magnetic resonance, NMR, spectroscopy. This involved finding the right components to mimic cell membranes, where the C1 domain would encounter ligands. "Then, one fine day, Sachin discovered crystals forming in an old NMR tube," Igumenova said. "I give all the credit to Sachin, who basically said, 'I'm going to go and test them and see if they are actually the protein.' And he was right. It gave us confidence that crystallization is possible." In turn, Katti credits the insights obtained from NMR, and a bit of luck.
"I think that's the beauty of doing research where you have to use multiple approaches," he said. "You never know when one approach is going to be useful for doing something with other approaches." Insights from NMR and X-ray crystallography The new protein structures, along with the team's NMR results, have already yielded interesting information. One long-standing mystery has been how C1 domains can accommodate ligands that have very different chemical structures, Igumenova said. "Our previous NMR work indicated that the loops of the C1 domain that bind ligands are very dynamic," Igumenova said. "This C1 domain is like a PAC-man. It binds the membrane, and then it searches for a ligand. Once it finds the ligand, it latches on." In addition, the structure shows that the ligand-binding groove has a "water-loving," or hydrophilic, surface at the bottom, and "water-repelling," or hydrophobic, surface at the top. "If you think about a lipid molecule, the head group is hydrophilic and the tail is hydrophobic," Igumenova said. "So, when C1 domains bind lipid ligands, the patterns match." The team's results include the structure of a C1 domain bound to its natural ligand, diacylglycerol. In addition, the team describes several other structures of C1 that include different compounds of pharmacological interest. The work also provides a method for testing different drug candidates, Katti said. "If you want to study fish, you want to study them in water," Katti said. "Now we know how to create a membrane-like environment where these very hydrophobic compounds can be tested for C1 binding." Next, Katti and Igumenova plan to explore C1 domains from other PKC family members. "It's important for us to focus on C1 domains because they have inherent differences that can be exploited to achieve selectivity," Igumenova said. "What we are finding now is that not all C1 domains are created equal."
10.1038/s41467-022-30389-2
Medicine
How to turn damaged heart tissue back into healthy heart muscle—new details emerge
Single-cell transcriptomics reconstructs fate conversion from fibroblast to cardiomyocyte, Nature (2017). nature.com/articles/doi:10.1038/nature24454 Journal information: Nature
http://nature.com/articles/doi:10.1038/nature24454
https://medicalxpress.com/news/2017-10-heart-tissue-healthy-musclenew-emerge.html
Abstract Direct lineage conversion offers a new strategy for tissue regeneration and disease modelling. Despite recent success in directly reprogramming fibroblasts into various cell types, the precise changes that occur as fibroblasts progressively convert to the target cell fates remain unclear. The inherent heterogeneity and asynchronous nature of the reprogramming process render it difficult to study this process using bulk genomic techniques. Here we used single-cell RNA sequencing to overcome this limitation and analysed global transcriptome changes at early stages during the reprogramming of mouse fibroblasts into induced cardiomyocytes (iCMs) 1 , 2 , 3 , 4 . Using unsupervised dimensionality reduction and clustering algorithms, we identified molecularly distinct subpopulations of cells during reprogramming. We also constructed routes of iCM formation, and delineated the relationship between cell proliferation and iCM induction. Further analysis of global gene expression changes during reprogramming revealed unexpected downregulation of factors involved in mRNA processing and splicing. Detailed functional analysis of the top candidate splicing factor, Ptbp1, revealed that it is a critical barrier for the acquisition of cardiomyocyte-specific splicing patterns in fibroblasts. Concomitantly, Ptbp1 depletion promoted cardiac transcriptome acquisition and increased iCM reprogramming efficiency. Additional quantitative analysis of our dataset revealed a strong correlation between the expression of each reprogramming factor and the progress of individual cells through the reprogramming process, and led to the discovery of new surface markers for the enrichment of iCMs. In summary, our single-cell transcriptomics approaches enabled us to reconstruct the reprogramming trajectory and to uncover intermediate cell populations, gene pathways and regulators involved in iCM induction. Main Direct cardiac reprogramming that converts scar-forming fibroblasts into iCMs shows promise as an approach to replenish lost cardiomyocytes in diseased hearts 1 , 2 , 3 , 4 . Considerable efforts have been made to improve the efficiency and unravel the underlying mechanism 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 . However, it remains unknown how the conversion of fibroblasts into cardiomyocytes is achieved without following the conventional cardiomyocyte specification and differentiation process. This is partly because the starting fibroblasts exhibit molecular heterogeneity that is mostly uncharacterized, and the reprogramming population contains fully converted, partially converted and unconverted cells. Traditional population-based genome-wide approaches are incapable of resolving this unsynchronized cell-fate-switching process. Therefore, we leveraged the power of single-cell transcriptomics to better investigate the reprogramming of iCMs that is mediated by Mef2c, Gata4 and Tbx5. Previous studies have indicated that a snapshot of an unsynchronized biological process can capture cells at different stages of the process 16 . Because the emergence of iCMs occurs as early as day 3 (refs 1 , 11 , 12 , 13 , 14 , 15 ), we reasoned that day-3 reprogramming fibroblasts contain a wide spectrum of cells that are transitioning from fibroblast to iCM. We therefore performed single-cell RNA sequencing (RNA-seq) on day-3 cardiac fibroblasts infected with separate Mef2c, Gata4 and Tbx5 viral constructs (hereafter M + G + T) from seven independent experiments (for experimental design, see Extended Data Fig.
1 ), followed by a series of quality control steps ( Extended Data Fig. 1 , Methods and Supplementary Tables 1, 2 ). Extensive data normalization was performed to correct for technical variations and batch effects ( Extended Data Figs 1 , 2 and Methods). After comparing the entire set of single-cell RNA-seq data to bulk RNA-seq data of endogenous cardiac fibroblasts and cardiomyocytes that were obtained from parallel experiments, we detected a group of resident or circulating immune or immune-like cells ( Extended Data Fig. 3 ) that were not included in the subsequent analyses. Unsupervised hierarchical clustering and principal component analysis (PCA) on the remaining 454 non-immune cells revealed three gene clusters that account for most of the variability in the data: cardiomyocyte-, fibroblast- and cell-cycle-related genes ( Fig. 1a, b and Extended Data Fig. 4a–c ). On the basis of the expression of cell-cycle-related genes, the cells were grouped into cell-cycle-active (CCA) and cell-cycle-inactive (CCI) populations ( Fig. 1a ); this was confirmed by the molecular signature of the cells in their proliferation states ( Extended Data Fig. 4d–g , proliferating or non-proliferating). Within the CCA and CCI populations, hierarchical clustering further identified four subpopulations based on differential expression of fibroblast versus cardiomyocyte genes: fibroblasts, intermediate fibroblasts, pre-iCMs and iCMs ( Fig. 1a ). When plotted using PCA or t -distributed stochastic neighbour embedding analysis, a stepwise transcriptome shift from fibroblast to intermediate fibroblast to pre-iCM to iCM was evident ( Fig. 1c and Extended Data Fig. 4h, i ). We also analysed the reprogramming process as a continuous transition using SLICER (selective locally linear inference of cellular expression relationships) 17 , an algorithm for inferring nonlinear cellular trajectories ( Fig. 1d, e ). The trajectory built by SLICER suggested that fibroblasts, intermediate fibroblasts, pre-iCMs and iCMs form a continuum on the bottom CCI path, representing an iCM-reprogramming route. We further calculated the pseudotime for each cell on the trajectory by defining a starting fibroblast and measuring the distance of each single cell to the starting cell along the reprogramming route ( Fig. 1e ). We then examined the distribution of cells along the pseudotime line by plotting the ‘free energy’ (max(density) − density) of the trajectory and discovered a peak (lowest density) in the pre-iCM state ( Fig. 1f ). These data suggest that the pre-iCM stage is an unstable cell state seeking to settle into a more stable state, such as the iCM state; this is consistent with the PCA and hierarchical clustering analyses, which show that pre-iCMs express both cardiomyocyte and fibroblast markers as an intermediate cell type, and with our other experimental evidence ( Fig. 1a–c and Extended Data Fig. 4j–o ). To experimentally test the iCM route, we performed population-based gene expression profiling at reprogramming days 0, 3, 5, 7, 10 and 14 ( Fig. 1g, h and Extended Data Fig. 4p–v ). PCA generated a pattern showing an oriented path during reprogramming ( Fig. 1g and Extended Data Fig. 4p–s ). Expression of the three main gene clusters selected from single-cell data showed consistent changes in the population data ( Fig. 1h and Extended Data Fig. 4t–v ), supporting the SLICER trajectory. Figure 1: Single-cell RNA-seq reconstructs iCM reprogramming and identifies intermediate cell populations.
a , Hierarchical clustering results of 454 single cardiac fibroblasts that were infected with M + G + T or that were mock- or DsRed-infected for 3 days, with representative gene ontology terms of the three identified gene clusters underneath. ECM, extracellular matrix; Fib, fibroblast; H, high; iFib, intermediate fibroblast; L, low; M, medium; Pos. reg. of SMC prolif., positive regulation of smooth muscle cell proliferation. For P values, see Extended Data Fig. 4 . b , c , PCA showing representative genes ( b ) or cell groups ( c ). PC, principal component. In b , cell cycle genes are shown in orange, cardiomyocyte markers in red and fibroblast markers in blue. d , e , Three-dimensional trajectory constructed by SLICER showing hierarchical clustering/PCA cell groups ( d ) or pseudotime ( e ). LLE, local linear embedding; NP, non-proliferating; Pro, proliferating. f , Free energy of the reprogramming process. g , h , Microarray of MGT- or LacZ-transduced cardiac fibroblasts from day 0 to 14 plotted as a PCA plot ( g ) or heat map ( h ) showing the mean expression of representative genes from a , b . CM, cardiomyocyte markers; Fib, fibroblast markers. i , Comparison of the CCA:CCI ratio in intermediate fibroblasts, pre-iCMs and iCMs. j – p , Cell-cycle synchronization ( j – l ) or immortalization ( m – p ) of cardiac fibroblasts for iCM induction (see Methods). CF, cardiac fibroblast; flow, flow cytometry; ICC, immunocytochemistry; noco, nocodazole; puro, puromycin; zeo, zeocin. j , Schematic of the cell-cycle synchronization experiment. k , l , Quantification of flow cytometry analysis. Fold change in the number of positive cells after nocodazole treatment compared to DMSO ( k ) or low serum treatment compared to normal serum levels ( l ). n = 4 samples. n – p , Representative 40× images of α-actinin and cardiac troponin T (cTnT) with Hoechst are shown in n and the quantification is shown in o , p . o , Fold change in the percentage of positive CF-T cells compared to cardiac fibroblasts (CF). p , Fold change in the number of positive CF-T cells per field compared to cardiac fibroblasts (CF). n = 30 images. Scale bars, 100 μm. Data are mean ± s.e.m., two-sided Student’s t -test: * P < 0.05, ** P < 0.01, *** P < 0.001. By analysing the CCA and CCI populations, we found that even though proliferative iCMs (CCA iCMs) were observed ( Fig. 1a–d ), iCMs and pre-iCMs were predominantly CCI ( Fig. 1i ). We therefore designed four sets of experiments to address the relationship between cell proliferation and iCM reprogramming by: (1) manipulating the expression of cell-cycle-related genes in fibroblasts that were lentivirally transduced with M + G + T ( Extended Data Fig. 5a–p ) or infected with a single doxycycline-inducible MGT construct ( Extended Data Fig. 5q–s ); (2) synchronizing the cell cycle of starting cardiac fibroblasts ( Fig. 1j–l ); (3) transiently overexpressing large T antigen to accelerate cardiac fibroblast proliferation ( Extended Data Fig. 5t–z ); (4) establishing an immortalized cardiac fibroblast (CF) line CF-T (see Methods) before the initiation of iCM reprogramming ( Fig. 1m–p ). All four sets of experiments yielded consistent results showing that decreased proliferation or cell-cycle synchronization enhanced iCM reprogramming, whereas increased proliferation suppressed iCM generation. We next examined the cellular composition of our isolated starting cardiac fibroblasts (see Methods) and identified five subpopulations ( Fig.
2a, b , Extended Data Fig. 6a–i and Supplementary Discussion 1 ). To delineate how these subpopulations were reprogrammed, we applied hierarchical clustering calculated from the starting cardiac fibroblasts to those that had been transduced with M + G + T and determined the correlation of the expression of non-cardiomyocyte lineage markers to the status of reprogramming ( Fig. 2c ). Expression of both endothelial and epicardial genes was significantly decreased in all cells that were transduced with M + G + T, irrespective of the reprogramming status. However, fibroblast and myofibroblast and/or smooth muscle genes were suppressed in iCMs, but not in intermediate fibroblasts or pre-iCMs ( Fig. 2c and Extended Data Fig. 6j, k ); this finding was supported by experimental data tracking protein expression of representative markers along the reprogramming trajectory ( Fig. 2d–f and Extended Data Fig. 6l, m ). Therefore, we conclude that endothelial and epicardial genes can be readily suppressed, whereas fibroblast and myofibroblast and/or smooth muscle genes are gradually suppressed over the course of reprogramming. This differential suppression is consistent with the difference in the layer of origin among different cardiac cell lineages during development and suggests that recent (epigenetic) memories might be easier to erase than those gained earlier in development. The progressive suppression of fibroblast markers also indicates that there is a difference between iCM and induced pluripotent stem cell (iPS cell) reprogramming, because early downregulation of fibroblast markers, such as Thy1, is one of the hallmarks and prerequisites for iPS cell reprogramming to proceed 18 . Figure 2: Heterogeneity of cardiac fibroblasts and stepwise suppression of non-cardiomyocyte lineages during iCM induction. a , b , Hierarchical clustering ( a ) and PCA ( b ) of control cardiac fibroblasts with representative gene expression and gene ontology analysis of the five identified gene clusters. c , Hierarchical clustering calculated with control cardiac fibroblasts ( a ) applied to M + G + T-transduced cells with representative gene expression. Epi, epicardial; Endo, endothelial. d – f , Representative 40× immunocytochemistry images ( d , e ) and quantification ( f ) of Thy1 or SM22α (the protein product of the gene Tagln ) and α-MHC–GFP during reprogramming. d, day. n = 20 images. Scale bars, 100 μm. Data are mean ± s.e.m. In violin plots, box plots were included inside the plot. The centre dot represents median gene expression and the central rectangle spans the first quartile to the third quartile of the data distribution. The whiskers above or below the box show the locations of 1.5× interquartile range above the third quartile or below the first quartile. To understand the molecular cascades that underlie iCM induction, we performed nonparametric regression and k -medoid clustering (see Methods), and identified three major clusters of genes that are significantly related to and show similar trends during reprogramming ( Extended Data Fig. 7a–d ). Further analysis identified six smaller gene clusters with narrower variation across the trend, and gene ontology analyses were performed for each cluster ( Fig. 3a–g , Supplementary Table 3 and Supplementary Discussion 2 ).
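The regression-and-clustering step referenced here can be sketched in R roughly as follows (full details are in the Methods). This is a hedged illustration: `expr`, `pseudotime` and `fitted_trends` are placeholder objects, and the exact VGAM options used in the study may differ.

```r
library(VGAM)     # generalized additive models with a Tobit likelihood
library(cluster)  # pam(): k-medoid clustering

# Fit a smooth (3-df spline) trend of one gene over pseudotime; the Tobit
# likelihood treats dropout (zero inflation) as censoring. Significance is
# assessed against a constant null model with a likelihood-ratio test.
fit  <- vgam(expr ~ s(pseudotime, df = 3), family = tobit(Lower = 0))
null <- vgam(expr ~ 1, family = tobit(Lower = 0))
lrtest(fit, null)

# Group the scaled fitted trends of all significant genes into k clusters.
clusters <- pam(t(scale(t(fitted_trends))), k = 6)$clustering
```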
The largest cluster (cluster 1), which shows a trend of immediate and continuous downregulation of gene expression, is enriched in gene ontology terms related to protein translation/biosynthesis, modification and transportation ( Fig. 3b ). Such changes probably serve to balance the increased energy requirements during the cell-fate switch and/or to support the transition from a protein-production and secretion ‘factory’ (a fibroblast) to an energy-consuming ‘power station’ (a cardiomyocyte). The downregulated genes in cluster 2 are enriched in gene ontology terms that suggest a late suppression of fibroblast genes and growth factors, whereas the upregulated genes in clusters 4 and 5 are enriched in gene ontology terms that indicate engagement in a metabolic shift and structural changes towards a cardiomyocyte fate ( Fig. 3e, f ). Figure 3: Identification of Ptbp1 as a barrier to iCM splicing repatterning. a – g , Six gene clusters were identified during reprogramming ( a ) with gene ontology analysis ( b – g , false discovery rate (FDR) <0.05). The number of genes is shown in parentheses. h , i , Representative 20× immunocytochemistry images of cTnT and α-MHC–GFP ( h ) and quantification ( i ) of MGT-infected cardiac fibroblasts treated with shRNA against Ptbp1 (sh Ptbp1 ) or shRNA non-targeting control (shNT). n = 20 images. Scale bar, 200 μm. Data are mean ± s.e.m., two-sided Student’s t -test: *** P < 0.001. j – q , Splicing analyses of day-3 MGT-infected cardiac fibroblasts treated with sh Ptbp1 or shNT. j , k , Correlation between ΔPSI of cardiomyocytes versus cardiac fibroblasts, and ΔPSI of MGT versus LacZ ( j ) or MGT and sh Ptbp1 versus MGT and shNT ( k ). The trend line generated by linear regression and P values from a one-sided binomial test are shown. l , Number of detected alternative splicing events among the five alternative splicing types. AS, alternative splice; A3SS/A5SS, alternative 3′/5′ splicing site; IR, intron retention; ES, exon-skipping event; MXE, mutually exclusive spliced exon. m , Positional distribution of a Ptbp1-binding motif (sequence shown across the top). Motif enrichment scores (top) and P values (bottom) were plotted against genomic positions. The dashed black line indicates P = 0.05. Red/blue arrows indicate peaks of enrichment for exons that were included/skipped more often upon Ptbp1 knockdown, respectively. n , o , Gene ontology analysis of alternatively spliced genes between MGT and sh Ptbp1 and MGT and shNT ( n ) with a representative Sashimi plot ( o ). CM, cardiomyocyte; Mitochondrion inner mem., mitochondrion inner membrane; Inclevel, inclusion level. p , q , Expression of overlapping genes between differentially expressed genes (MGT and sh Ptbp1 versus MGT and shNT) and differentially expressed genes (MGT versus LacZ) ( p ) and sh Ptbp1 -only differentially expressed genes ( q ). KD, knockdown; WT, wild type. Unexpectedly, we found that cluster 1 is also enriched in the gene ontology terms ‘mRNA splicing’, ‘mRNA processing’ and ‘RNA recognition motif’. This finding prompted us to interrogate the role of splicing factor(s) in iCM induction. We therefore used an inducible iCM cell line derived from mouse embryonic fibroblasts (icMEFs) 19 to screen a short hairpin RNA (shRNA) library that targeted 26 splicing factors representing the most common splicing factor families 20 and identified Ptbp1 as the top candidate that also showed differential expression in cardiac fibroblasts versus cardiomyocytes ( Extended Data Fig.
7e–h ). Notably, knockdown of Ptbp1 in various primary fibroblasts consistently resulted in a significant increase in reprogramming efficiency ( Fig. 3h, i and Extended Data Fig. 8a–p ), demonstrating that Ptbp1 is a general barrier to iCM induction. However, overexpression of Ptbp1 had minimal effects ( Extended Data Fig. 8q–u ). To understand how Ptbp1 silencing led to improved iCM reprogramming, we performed high-depth RNA-seq to analyse alternative splicing events of day-3 reprogramming cells with or without Ptbp1 expression. A total of 1,494 alternative splicing events were detected upon Ptbp1 knockdown, 97% of which were not induced by MGT alone ( Extended Data Fig. 9a and Supplementary Tables 4, 5 ). Notably, calculation of the difference in the percentage spliced-in (ΔPSI) suggested that alternative splicing events between reprogramming versus control fibroblasts and endogenous cardiomyocytes versus cardiac fibroblasts were in opposite directions (negative association, P = 0.008). Knockdown of Ptbp1 in reprogramming fibroblasts, however, induced a strong positive association ( P = 2.2 × 10 −16 ), suggesting that Ptbp1 silencing together with MGT, but not MGT alone, shifted the splicing pattern from cardiac fibroblast towards cardiomyocyte ( Fig. 3j, k ). Furthermore, a higher percentage of exon-skipping events (63%) among the five known alternative splicing types was observed in MGT-infected cells upon Ptbp1 silencing ( Fig. 3l ). Motif analysis using the RNA map analysis and plotting server (rMAPS) 21 showed that a CT-rich Ptbp1-binding motif was significantly enriched in exon-skipping exons compared to background exons ( Fig. 3m ). Notably, in exons that were included more often upon Ptbp1 knockdown, the motif was strongly enriched within 100 bp of the upstream intron ( P < 1 × 10 −30 ), whereas, in exons that were skipped more often upon Ptbp1 knockdown, the motif was less strongly enriched, but showed a broad peak at 50–200 bp in the downstream intron ( P < 0.05). These data are consistent with the higher percentage of inclusion (69%) than skipping (31%) among exon-skipping events that were observed in Ptbp1 knockdown samples ( Extended Data Fig. 9b ), suggesting that Ptbp1 is a repressor of exon inclusion when bound to an upstream intron, and probably is a weaker repressor of exon skipping when bound to a downstream intron. Next we assessed the gene ontology terms of genes that were alternatively spliced upon Ptbp1 silencing ( Fig. 3n, o and Extended Data Fig. 9c–i ). In addition to altering the splicing patterns of genes related to cardiomyocyte lineage and function ( Fig. 3n ), Ptbp1 silencing resulted in changes in the splicing pattern of 21 other splicing factors, suggesting that Ptbp1 knockdown might trigger a second wave of splicing changes by regulating isoform switching of other splicing factors. Furthermore, we explored the potential downstream effects of Ptbp1-mediated re-patterning of splicing events ( Supplementary Table 6 and Supplementary Discussion 3 ). DESeq2 (ref. 22 ) analyses of differentially expressed genes revealed that Ptbp1 knockdown enhanced the MGT-induced cardiac fibroblast to cardiomyocyte transcriptome shift by augmenting MGT-mediated changes ( Fig. 3p and Extended Data Fig. 9j–n ) and altering the expression of an additional set of cardiac and fibroblast lineage genes ( Fig. 3q and Extended Data Fig. 9o ).
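The sign-concordance logic behind Fig. 3j, k amounts to a few lines of R; this is a sketch in which the merged data frame `dpsi` and its column names are hypothetical placeholders.

```r
# Do splicing shifts upon Ptbp1 knockdown (kd_vs_nt) point in the same direction
# as the endogenous cardiomyocyte-versus-fibroblast shifts (cm_vs_cf)?
concordant <- sign(dpsi$kd_vs_nt) == sign(dpsi$cm_vs_cf)
binom.test(sum(concordant), length(concordant), p = 0.5, alternative = "greater")
lm(cm_vs_cf ~ kd_vs_nt, data = dpsi)  # trend line as plotted in Fig. 3k
```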
To determine whether cardiac reprogramming is a rare and random event or a Mef2c-, Gata4- and/or Tbx5-determined process, we plotted the expression of Mef2c, Gata4, Tbx5 and M + G + T in each cell against the reprogramming pseudotime of that cell calculated by SLICER ( Fig. 4a and Extended Data Fig. 9p ). We found that the expression levels of Mef2c , Gata4 , Tbx5 and M + G + T are highly correlated with the reprogramming progress, despite the fact that their expression was not used in the generation of the trajectory. We also determined the mean expression levels of Mef2c , Gata4 and Tbx5 and the mean ratio of expression ( Mef2c / Gata4 , Mef2c / Tbx5 and Gata4 / Tbx5 ) in the fibroblast, intermediate fibroblast, pre-iCM and iCM populations along the reprogramming trajectory ( Extended Data Fig. 9q–s ). Consistent with our previous studies 6 , 14 , 23 , we observed higher levels of Mef2c than Gata4 and Tbx5 in iCMs, further underscoring the importance of high Mef2c expression in iCM induction. Figure 4: iCM reprogramming determined by Mef2c, Gata4 and Tbx5 and identification of novel surface markers. a , Correlation between expression of Mef2c , Gata4 and Tbx5 and SLICER pseudotime. b , Left, correlation between Tbx5 expression and its targets with gene ontology analysis. Right, intercorrelation of genes on the left. Three sets of co-expressed genes (A, B, C) are shown ( P < 2.6 × 10 −6 ). Pos. reg. of nt metabolism, positive regulation of nucleotide metabolism; pos. reg. of transcription, positive regulation of transcription. c , Top 20 potential negative selection markers for iCM. d , Correlation of the expression of the four surface markers (labelled in red in c ) and reprogramming progress (left) and the expression of these markers in different cell groups (right violin plots). In violin plots, box plots were included inside the plot. The centre dot represents median gene expression and the central rectangle spans the first quartile to the third quartile of the data distribution. The whiskers above or below the box show the locations of 1.5× interquartile range above the third quartile or below the first quartile. e , f , Representative 40× immunocytochemistry images ( e ) and quantification ( f ) of Cd200 and α-MHC–GFP during reprogramming. n = 20 images. Scale bar, 100 μm. Data are mean ± s.e.m. Linear regression reports P < 1 × 10 −41 ( a ) and P < 1 × 10 −39 ( d ), α = 0.05, two-sided analysis. To unravel the gene networks regulated by reprogramming factors, we examined the relationship between the expression of a reprogramming factor and its downstream targets in each single cell. Using Tbx5 as an example, we calculated the Spearman correlation between Tbx5 expression and the expression of its downstream targets 24 , 25 within each reprogramming cell ( Fig. 4b , left). We then generated a correlation matrix for selected Tbx5 targets to determine their co-expression patterns ( Fig. 4b , right). The correlation patterns suggest that Tbx5 acts by promoting cardiac function-related genes and by suppressing protein biosynthesis and non-cardiomyocyte lineages ( Fig. 4b and Extended Data Fig. 9t, u ). Finally, we aimed to discover novel markers for targeting or enriching cell populations during iCM induction.
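Before turning to the marker screen, note that the factor-target correlation analysis above reduces to a short R sketch; `expr` (a log-scale genes × cells matrix restricted to M + G + T-transduced CCI cells) and `tbx5_targets` are placeholder objects.

```r
# Spearman correlation of Tbx5 expression with each of its downstream targets,
# then the target-target correlation matrix for the co-varying genes.
rho <- apply(expr[tbx5_targets, ], 1, cor, y = expr["Tbx5", ], method = "spearman")
selected <- names(rho)[abs(rho) > 0.3]
comat <- cor(t(expr[selected, ]), method = "spearman")
heatmap(comat)  # rows/columns ordered by hierarchical clustering
```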
To identify specific markers for each cell population along the reprogramming trajectory, we selected genes that were expressed significantly higher (for positive selection markers) or lower (for negative selection markers) in the cell population of interest than in the other three populations (Tukey-adjusted P value <0.05 in pairwise comparisons after ANOVA; Extended Data Fig. 10a–f and Supplementary Table 7 ). Negative selection markers for iCMs appeared the most attractive as a supplement to cardiac positive selection markers. Among the top 20 negative markers for iCMs, we focused on four surface markers, Cd200 , Clca1 , Tm4sf1 and Vcam1 ( Fig. 4c ). Linear regression analysis suggests that the expression of these markers was highly anti-correlated with the reprogramming process and was barely detectable in iCMs ( Fig. 4d ). Further experimental validation confirmed that Cd200 was a negative selection marker ( Fig. 4e, f ), and knockdown of Cd200 did not affect reprogramming efficiency ( Extended Data Fig. 10g–n ). We have used single-cell transcriptomics analysis to gain insights into the heterogeneity of cells within an unsynchronized cardiac reprogramming system. The findings show promise for improving the efficiency and detection of iCM formation. We also anticipate that the experimental and analytical methods presented here, when applied in additional cell programming or reprogramming contexts, will yield crucial insights into cell fate determination and the nature of cell type identity. Methods Mouse strains and plasmids Transgenic CD1 mice that expressed α-MHC-promoter-driven GFP were described previously 1 . All animal experiments conformed to the NIH guidelines (Guide for the Care and Use of Laboratory Animals) and UNC Qian Laboratory animal protocol 15.277.0. This protocol was approved by the University of North Carolina at Chapel Hill Institutional Animal Care and Use Committee (IACUC) that oversees the university’s animal care and use (NIH/PHS Animal Welfare Assurance Number: A3410-0; USDA Animal Research Facility Registration Number: 55-R-0004; AAALAC Institutional Number: 329). pMXs retroviral vectors containing mouse Gata4 , Mef2c or Tbx5 were described previously 1 . The empty pMXs and pMXs-puro retroviral vectors were purchased from Cell Biolabs; they contain a partial LacZ stuffer sequence and are therefore referred to as LacZ in this manuscript. pMXs-DsRed and the polycistronic pMXs-puro-MGT were described previously 14 . Lentiviruses containing Mef2c , Gata4 or Tbx5 were cloned by replacing the GFP insert in pLenti-GFP-puro (Addgene 17448) with Mef2c , Gata4 or Tbx5 using BamHI and SalI. pTripZ-rTtA was cloned by removing the tet-on promoter and RFP in the pTripZ vector 26 using XbaI and MluI followed by blunt-end ligation. pTripZ-iMGT was constructed in four steps. First, an intermediate plasmid pTripZ-iRFP was cloned to remove the Ubc promoter, rTtA and Puro sequences from the original pTripZ vector, which was achieved by replacing the sequences between MluI and Acc65I with PCR-amplified WPRE. Second, to introduce an AgeI site before Mef2c, the first ~600 bp of the Mef2c sequence before the BsrGI restriction site were PCR-amplified and cloned into pGEMT-easy (Promega), resulting in pGEMT-AgeI-Mef2c-BsrGI. Third, MGT was excised from pGEMT-MGT 14 and inserted into pGEMT-AgeI-Mef2c-BsrGI with BsrGI and SalI, resulting in pGEMT-AgeI-MGT-MluI (there is a MluI site located in the pGEMT-easy vector after SalI).
Fourth, pTripZ-iMGT was cloned by replacing RFP in pTripZ-iRFP with polycistronic MGT excised from the pGEMT vector using AgeI and MluI. For gene overexpression, Ptbp1 , Cd200 and cell cycle-related genes ( p15 (also known as Cdkn2b ), p16 (also known as Cdkn2a ), Ccnd1 , Ccnd2 and Ccne1 ) were PCR amplified from cDNA of neonatal mouse cardiac fibroblasts and cloned into the pLenti vector using BamHI and SalI (or XbaI and SalI for Ptbp1 and Ccnd2 and BamHI and XhoI for Ccnd1 ). The control pLenti-LacZ vector was cloned by replacing the GFP insert in pLenti-GFP with the partial LacZ sequence from pMXs-puro using BamHI and SalI. Cloning primers are listed in Supplementary Table 8 . pBabe-Zeo-LargeT was purchased from Addgene (1779). The non-targeting shNT pLKO.1-scramble plasmid was described previously 26 and all other shRNAs (pLKO.1-vector based, MISSION shRNA glycerol stock) were purchased from Sigma; their TRC numbers are listed in Supplementary Table 8 . Isolation of neonatal cardiac fibroblasts and cardiomyocytes, and generation of iCMs We chose to reprogram mouse neonatal cardiac fibroblasts, which were used in the first 1 and many of the subsequent cardiac reprogramming studies 2 , 3 , 4 , 5 , 6 , 8 , 12 , 14 , 15 , 23 , 27 . Cardiac fibroblasts were isolated using standard protocols described previously 23 , 27 . Specifically, neonatal (postnatal day (P)1.5) hearts were isolated from α-MHC–GFP + pups and rinsed thoroughly with chilled phosphate-buffered saline (PBS). The hearts were then minced with a razor blade, transferred to 8 ml warm 0.05% Trypsin-EDTA (Gibco), and incubated at 37 °C for 10 min. After five rounds of collagenase digestion (5 ml of warm 0.2% collagenase type II in HBSS for 3 min at 37 °C followed by vortexing for 1 min), a single-cell suspension was obtained by passing through 40-μm cell strainers. The cells were then suspended in 1 ml of red-blood-cell lysis buffer (150 mM NH 4 Cl, 10 mM KHCO 3 and 0.1 mM EDTA) for 1 min on ice and resuspended in magnetic-activated cell sorting buffer (MACS buffer: DPBS, 0.5% BSA, 2 mM EDTA). To sort Thy1 + cells, approximately 1 × 10 7 cells were suspended in 90 μl MACS buffer with 10 μl Thy1.2 micro-beads (Miltenyi Biotec) at 4 °C for 30 min. The cells were then washed, suspended in MACS buffer and applied to an equilibrated LS column (Miltenyi Biotec). Cells bound to beads were flushed out after two washes and seeded onto 0.1% gelatin-coated plates at 2.5 × 10 4 per cm 2 in fibroblast medium (IMDM, 20% FBS, 1× penicillin–streptomycin). After overnight culturing, the medium was replaced to remove unattached cells. We refer to the MACS-isolated Thy1 + adherent non-cardiomyocytes as neonatal cardiac fibroblasts. For bulk RNA-seq experiments, neonatal cardiac fibroblasts were similarly isolated, except that MACS-isolated Thy1 + cells were directly lysed in TRIzol (Life Technology) without culturing. Neonatal cardiomyocytes were isolated using the neonatal cardiomyocyte isolation system (Worthington Biochemical Corporation) except that all enzymes were used at a quarter of the recommended concentration to increase cell viability. After 1.5 h of pre-plating on an uncoated surface to remove attached non-cardiomyocytes, the unattached cardiomyocytes were collected in TRIzol (>80% viability by Trypan blue staining). For iCM generation, pMXs retroviruses were packaged by transfecting platE cells (Cell Biolabs) with Lipofectamine 2000 (Life Technology) as previously described 14 .
Viruses collected from one 10-cm dish were resuspended in 100 μl iCM medium (10% FBS in DMEM:M199 (4:1)) and added to cells at 5 μl of each virus (when co-transducing) per cm 2 of surface area. All transductions were performed in iCM medium containing 4 μg ml −1 of polybrene. For single-cell RNA-seq, cardiac fibroblasts were untransduced, transduced with a 1:2 ratio of pMXs-DsRed:pMXs (LacZ) or transduced with equal amounts of Mef2c, Gata4 and Tbx5 viruses. For microarray experiments, cardiac fibroblasts were transduced with the control pMXs-puro (LacZ) or the pMXs-puro-MGT viruses and collected in TRIzol. Day 5, 7, 10 and 14 samples were selected with 2 μg ml −1 puromycin from day 3 and maintained in 1 μg ml −1 puromycin from day 6. Day-0 samples were overnight-cultured cardiac fibroblasts that were collected immediately before viral transduction. For bulk RNA-seq, cardiac fibroblasts were transduced with pMXs-puro (LacZ), pMXs-puro-MGT, pMXs-puro-MGT + shNT or pMXs-puro-MGT + sh Ptbp1 -271 for three days and then collected in TRIzol. All microarray and bulk RNA-seq samples were prepared in duplicate. Capture of single cells, RNA spike-ins and preparation of cDNA Single cells were captured using the Fluidigm C1 system (up to 96 single cells per plate). A total of seven individual experiments (E1–E7) were performed, each spanning mouse breeding, cardiac fibroblast isolation, iCM reprogramming, single-cell capture and cDNA preparation (see Extended Data Fig. 1 for experimental design and workflow). Three of the seven experiments (E1, E2 and E4) contained only M + G + T-transduced cells. Four of the seven experiments (E3 and E5–E7) contained cells treated with two different conditions in order to estimate the relative abundance of mouse mRNA between treatments. Specifically, for experiments E1, E2 and E4, cardiac fibroblasts transduced with M + G + T for 3 days were collected by trypsinization, stained with 7AAD or NearIR Live/Dead dye (Thermo Fisher Scientific), and FACS-sorted for live cells (negative for the Live/Dead dye). Pilot experiments established an average cardiac fibroblast diameter of 12.6 μm and a cells:buoyancy-buffer loading ratio of 7.5:2.5. Therefore, the sorted single-cell suspension (around 2,000 cells per μl) was loaded on a medium-sized (10–17 μm) microfluidic RNA-seq chip (C1 Single-Cell mRNA Seq IFC, 100-6041; initially designed chips were used in E1–E3 and redesigned chips in E4–E7) and single cells were captured with the C1 system. Bright field images were taken of each capture site. For experiments E3 and E5–E7, day-3 M + G + T-transduced (E5 and E6) or untransduced (E3 and E7) cardiac fibroblasts were stained with the NearIR Live/Dead dye and 0.25–1 μM carboxyfluorescein succinimidyl ester (CFSE, Thermo Fisher Scientific), whereas the DsRed-transduced cardiac fibroblasts were stained with the NearIR Live/Dead dye only. Then, 12,000 CFSE- and NearIR-stained green fluorescent cells and 12,000 DsRed- and NearIR-stained red fluorescent cells were sequentially FACS-sorted into the same tube and mixed. For experiment E3, cell sorting was slightly different: 700 of each of the CFSE single-positive cells (untransduced), DsRed single-positive cells (DsRed-transduced) and double-negative cells (from the DsRed-transduced wells but with no DsRed protein expression) were sorted into a single-cell suspension. After cell capture, fluorescent images of GFP and RFP channels as well as bright field pictures were taken.
Next, control RNA spike-ins were added into lysis mix A (see Fluidigm’s protocol), which was then loaded onto the IFC plate before cell lysis. Experiments E1 and E2 used the Ambion Array Control spike-ins (AM1780) that were included in the SMARTer kit. E1 used only spikes 1, 4 and 7 according to Fluidigm’s protocol but at a concentration 100-fold higher than suggested, based on recommendations from the UNC Advanced Analytics Core (AAC) that provided the Fluidigm service. E2 used all 8 spike-ins contained in the kit at the following working concentrations (before addition to lysis mix A): 10 pg μl −1 of spike 1, 1 pg μl −1 of spike 2, and so on, with a 10-fold reduction for each subsequent spike. For E3, we used the Ambion spike-ins at half the concentration of those used in E2 and another spike-in, the External RNA Controls Consortium (ERCC) RNA spike-in Mix 1 (Ambion, Life Technologies), after an 80,000-fold dilution. For E4–E7, only the ERCC spike-in was used, after a 40,000-fold dilution, and 1 μl of the diluted working spike-in was mixed with 19 μl of other components to make lysis mix A. Then cell lysis, reverse transcription and cDNA pre-amplification were performed on the chip according to Fluidigm’s standard protocol, and the control RNA spike-ins were processed in parallel with cellular RNA. Differences in the spike-ins added to each experiment reflect how the technology evolved over the course of this project. To address this spike-in variability, among other issues, we developed a pipeline, described in the ‘Processing and normalization of single-cell RNA-seq data’ section, to normalize and analyse all acquired useful data. Illumina library preparation and sequencing After in situ cDNA library preparation, the bright field and fluorescent images of each capture site (nest) on the chip were carefully examined. Forty-six empty nests, 30 nests with two or more cells, and 22 nests containing morphologically unhealthy cells out of 672 capture sites on seven chips were excluded from further analysis, resulting in 574 single-cell cDNA libraries. The size distribution and quality of the cDNA libraries from each single cell were ensured by bioanalyzer. For E3 only, cDNA library concentrations were measured with picogreen (Thermo Fisher Scientific) and four single-cell cDNA libraries below 1 ng μl −1 were excluded from further analysis. E1–E4 each contained a negative control from an empty nest that was processed in parallel with other healthy single cells. Therefore a total of 574 high-quality cDNA libraries were submitted to the UNC High Throughput Sequencing Facility (HTSF). Illumina libraries were prepared using the Nextera XT DNA Sample Preparation kit according to Fluidigm’s standard protocol, except that 13 cycles of amplification were carried out. The barcoded single-cell Illumina libraries of each experiment were pooled and sequenced for 50-bp single-end reads on an Illumina HiSeq 2500. Illumina library preparation and sequencing were carried out in three batches: E1 by itself on two lanes, E2 and E3 processed together on one lane each, and E4–E7 processed together on one lane each. Previous studies showed that 0.5–1 million reads per cell were sufficient to detect most genes expressed by single cells 28 , 29 . In this study, we sequenced the cells at about 1–5 × 10 6 reads per cell. Raw reads were re-assigned to each single cell by their unique Nextera barcode, and sequencing reads with barcodes removed were received from the HTSF in .fastq format.
For microarray and bulk RNA-seq samples, cellular RNA was extracted with TRIzol (microarray samples were further purified with the RNeasy kit from Qiagen), and only samples with an RNA integrity number (RIN) above 8, as determined using a bioanalyzer, were further processed. Microarray samples were submitted to the HTSF for one-colour Cy-dye labelling and long oligo (60-mer) Agilent high-density microarrays. Bulk RNA-seq samples were prepared with the TruSeq Stranded mRNA Library Prep Kit (Illumina). The barcoded Illumina libraries were pooled and submitted to the HTSF for sequencing. About 6 × 10 7 100-bp paired-end reads per sample were obtained, and sequencing reads with the Illumina indexes removed were received from the HTSF in .fastq format. Processing and normalization of single-cell RNA-seq data The quality of the sequencing results was first checked by FASTQC. Reads were of high quality and no trimming was required. The raw reads were then mapped to the merged genome of mm10, ERCC, and E. coli K12 with TopHat2 using default settings. Information about the number of total reads and the percentages of reads mapped to the spike-in or mouse genome for each single cell is detailed in Supplementary Table 1 . Outliers showing high ratios of percentage reads mapped to spike-in to percentage reads mapped to mouse genome were removed ( Extended Data Fig. 1d ). This step removed 61 outliers from the 574 sequenced single cells, resulting in 513 high-quality single cells for analysis ( Supplementary Table 2 ). Gene expression was counted with htseq-count using the union mode 30 . The limit of detection of our single-cell RNA-seq was determined as previously described 28 . In brief, the concentration of each ERCC spike-in in the lysis chamber was first calculated. For experiment E3, seven of the spike-ins were present at 1.24 molecules per chamber and were as follows: ERCC-00014, ERCC-00028, ERCC-00039, ERCC-00067, ERCC-00077, ERCC-00143 and ERCC-00150. For experiments E4–E7, five of the spike-ins were present at 1.24 molecules per chamber and were as follows: ERCC-00031, ERCC-00033, ERCC-00058, ERCC-00069 and ERCC-00134. The number of non-zero measurements of each spike-in was then counted. This number was divided by the total number of high-quality cells from that plate to give the probability of detection for each spike-in at this concentration. The mean probability of detection across all 12 spike-ins was 0.30, consistent with previous findings 28 and suggesting single-molecule sensitivity of our experiments. We developed a three-step normalization strategy in order to extract biologically meaningful information from all the single-cell RNA-seq data ( Extended Data Fig. 1c ). Firstly, we normalized mouse gene raw counts to each cell’s technical and biological size factors within each experiment using a previously described method 31 . These two size factors account for technical variations within each experiment, such as amplification efficiency and differences in the amount of biological starting material in each cell. On the basis of the normalized DsRed counts, cells in experiments that involved two treatments were classified as DsRed-transduced (E3R, E5R, E6R and E7R, expressing high levels of DsRed), M + G + T-transduced (E5M, E6M) or untransduced cells (E3U, E7U; Extended Data Fig. 1g ).
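The limit-of-detection computation above is a simple proportion; a minimal R sketch follows, in which the spike-in count matrix and the vector of ~1.24-molecule spike-in IDs are hypothetical placeholders.

```r
# Fraction of high-quality cells on a plate in which each ~1.24-molecule ERCC
# spike-in gave a non-zero count; the mean approximates the detection probability.
p_detect <- rowMeans(spike_counts[one_molecule_spikes, ] > 0)
mean(p_detect)  # ~0.30 in this study
```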
Secondly, we corrected for ‘batch effects’ that account for technical contributions to experiment-to-experiment variations due to different cell-capture efficiencies, types/amounts of spike-ins and Fluidigm chips ( Extended Data Fig. 1b ), while preserving biological information, such as total mRNA abundance. By comparing biological replicate experiments, we found different mean total mRNA counts per cell ( Extended Data Fig. 1h ) that probably resulted from varying cell-capture efficiency per plate (68 sequenced cells in E5, 33% more than the 51 cells in E6; Supplementary Table 2 ), the various amounts of spike-ins used (100-fold more concentrated spike-ins in E1 than in E2) and the different types of spike-ins and Fluidigm chips used (Fluidigm spike-in and previous chip in E2 and ERCC spike-in and redesigned chip in E4), suggesting the existence of batch effects. To determine whether different treatments affected mouse mRNA abundance in the cell, we also examined mean total mRNA reads from different treatments in the same experiment ( Extended Data Fig. 1h ). We found no difference in mean total mRNA counts between uninfected and DsRed-transduced cells (E3U versus E3R and E7U versus E7R) but 40% fewer counts in cells undergoing reprogramming (M + G + T-transduced, E5M, E6M) than in DsRed-transduced cells (E5R, E6R), suggesting biological variations caused by treatment. Therefore, to retain mRNA abundance information while correcting for batch effects, we normalized each treatment in each experiment to an experiment size factor so that the median mRNA count equals 1,000,000 for uninfected and DsRed-transduced cells and 616,136 (deduced from the ratio of median mRNA counts from M + G + T transduction to DsRed transduction (M:R) of E5 and E6) for M + G + T-transduced cells ( Extended Data Fig. 1h ). This normalization successfully removed the batch effects discussed above. An example is shown in Extended Data Fig. 1i comparing cells from E5 and E6 on a PCA plot. Lastly, we focused on non-immune cells (462 cells in total, see Extended Data Fig. 3 for details) and removed residual batch effects using ComBat, a method that was designed for normalizing gene expression data 32 and that performed well in previous studies 33 . After examination of all experiments in each treatment condition with PCA plots ( Extended Data Fig. 2a–c , the ‘Before’ columns), we found no batch effects in the principal component (PC)1/PC2 plot, but started to see incomplete overlap of different experiments in PC3 (for uninfected cells) or PC4 (for M + G + T- and DsRed-transduced cells); PC3 and PC4 represented only <5% of the variance in the data. Because batch effects were observed between different chips, but not within the same chips, we postulated that the use of two different versions of the Fluidigm medium-size chips might be the cause. The ComBat normalization was run separately for each treatment to remove only technical variations between batches while preserving biological variations between treatments. ComBat requires all input genes to be expressed in all batches, that is, in at least one cell in each batch. Therefore, genes that have non-zero counts in all batches were selected and normalized for each treatment. After the normalization, results from different treatments were merged. For those genes that were selected in one treatment but not others, expression levels were set to 0 in the other treatment(s). After this procedure, there were a total of 14,414 genes left.
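A minimal sketch of this third step for one treatment follows (ComBat is provided by the Bioconductor sva package; the count matrix and batch vector are placeholder objects).

```r
library(sva)  # provides ComBat()
# Keep genes with at least one non-zero count in every batch (a ComBat
# requirement noted above), then remove residual batch effects from
# log-transformed expression; run separately per treatment.
in_all_batches <- apply(counts > 0, 1, function(g) all(tapply(g, batch, any)))
adjusted <- ComBat(dat = log2(counts[in_all_batches, ] + 1), batch = batch)
```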
PCA analyses with ComBat-normalized counts showed that no batch effects were detected in the top 20 PCs (the ‘After’ columns in Extended Data Fig. 2a–c for PC1–PC4, and data not shown), suggesting successful removal of all residual batch effects in our data. Analysis of single-cell RNA-seq data Outlier detection, PCA, hierarchical clustering and the generation of violin plots were performed with the ‘SINGuLAR Analysis Toolset’ package (Fluidigm) in R. First, normalized expression was log 2 -transformed before analysis. Outliers were detected and removed based on mean gene expression and PCA using the SINGuLAR package ( Extended Data Fig. 2d, e ), resulting in 454 high-quality non-immune cells for downstream analysis. Expression of the reprogramming factors Mef2c , Gata4 and Tbx5 was excluded before PCA, hierarchical clustering or SLICER (see ‘Trajectory construction and identification of genes related to iCM reprogramming’) analysis. Next, the top 400 PCA genes were selected by largest weight (loading) contribution to PCs 1, 2 or 3. Then hierarchical clustering was performed with these 400 genes and cells were grouped as fibroblasts (fibroblasts from control plates), intermediate fibroblasts (fibroblasts from M + G + T plates), pre-iCMs (cells expressing both cardiac and fibroblast markers) and iCMs ( Fig. 1a ). The group information was used to generate PCA plots ( Fig. 1b, c ), violin plots ( Extended Data Fig. 4b, l ) and to perform analyses of variance (ANOVA; Fig. 4c, d and Extended Data Fig. 10a–e ). ANOVA and Tukey post hoc tests were performed with custom scripts in R in order to identify positive- or negative-selection markers for iCMs and pre-iCMs. For ANOVA, CCI but not CCA cells were used. For violin plots in Figs 2 , 4 , box plots were overlaid over the violin plots. The centre dot represents median gene expression and the central rectangle spans the first quartile to the third quartile of the data distribution. The whiskers above or below the box show the locations of 1.5× interquartile range above the third quartile or below the first quartile. t -distributed stochastic neighbour embedding ( t SNE) analysis was performed with the ‘Rtsne’ package in R. Gene ontology analysis was performed using the DAVID functional annotation tool version 6.7. All gene ontology terms shown in this study have a P value or corrected P value (FDR) <0.05. For the comparison of the distributions of the number of detected genes in different cell groups, we conducted a one-sided two-sample Kolmogorov–Smirnov test ( Extended Data Fig. 4n, o ). Because we are comparing the distributions of two samples, the conclusion is more general than that of a mean test, such as a t -test, and does not rely on restrictive statistical assumptions, such as normal distributions. In Fig. 2a–c , cells from experiments E1–E3 are shown. Analysis of data from experiments E4–E7 showed consistent results ( Extended Data Fig. 6e, f, k ). For Fig. 2c , first, CCI and CCA cells in fibroblasts or epicardial-like cells ( Fig. 2a, b and Supplementary Discussion 1 ) were merged into one Fb/Epi group. Then a new hierarchical clustering for control cells in E3 was calculated using the four cell-lineage-related gene clusters but not the cell cycle genes identified in Fig. 2a . The calculated hierarchical clustering was very similar to that in Fig. 2a and was applied to reprogramming cells from E1 and E2 to generate Fig. 2c . For all correlation analyses, gene expression was always log-transformed before analysis. In Fig.
In Fig. 4a, d and Extended Data Fig. 9p , CCI cells were used. Linear regression was performed to obtain the correlation coefficient ( R value) and its corresponding P value (two-sided, α = 0.05). For the correlation analysis of Tbx5 and its target genes in Fig. 4b , M + G + T-transduced CCI cells were included. The list of Tbx5 ChIP–seq peaks in HL-1 cells and the list of genes differentially expressed in wild-type versus Tbx5-null mutant mouse hearts were obtained from previous studies 24 , 25 . Genes present in both lists (2,109 genes) were selected as Tbx5 downstream targets and used for the analysis in Fig. 4b . A total of 170 genes with a Spearman correlation coefficient >0.3 or <−0.3 were selected and their correlation coefficients with Tbx5 are plotted in Fig. 4b (left). The intercorrelation between these genes was then calculated and the correlation matrix, ordered by hierarchical clustering, is shown as a heat map in Fig. 4b (right). Three sets of genes, A, B and C, were found to be co-expressed ( P < 2.6 × 10−6 by Spearman correlation). Representative genes of these sets are listed on the right and their corresponding gene ontology terms are labelled on the left ( Fig. 4b ). For the correlation analysis of Mef2c , Gata4 and Tbx5 expression with the expression of transcription factors or splicing factors, M + G + T-transduced CCI cells are shown in Extended Data Fig. 9t, u . The list of mouse transcription factors was obtained from public databases as previously described 12 and the list of splicing factors was obtained from a previous study 34 . Trajectory construction and identification of genes related to iCM reprogramming We used SLICER (selective locally linear inference of cellular expression relationships) 17 , an algorithm that we previously developed, to construct cellular trajectories of iCM reprogramming. SLICER is implemented as an R package, which is freely available on the Comprehensive R Archive Network (CRAN) and on GitHub. In brief, SLICER discovers a nonlinear, low-dimensional manifold embedded in gene expression space that indicates how cellular gene expression profiles change during a sequential process. Additionally, SLICER automatically detects the presence, location and number of branches in a trajectory, corresponding to multiple cell fates or multiple cellular processes occurring simultaneously. Here, the manifold corresponds to the reprogramming process by which fibroblasts turn into iCMs. To ensure consistency between the clustering and trajectory analyses, we ran SLICER on the control and reprogramming cardiac fibroblasts using the top 400 PCA genes, rather than using SLICER's gene selection approach. We performed nonlinear dimensionality reduction using locally linear embedding (LLE), which can be viewed as a nonlinear analogue of PCA; here, we used a three-dimensional LLE projection for trajectory construction. We then built a k -nearest neighbour graph in the low-dimensional manifold space produced by LLE. Shortest paths through the neighbour graph correspond to geodesics along the manifold, and we used the lengths of these shortest paths to order cells according to their distance from a user-defined starting cell. The steps of the reprogramming process can then be traced by examining the cells one by one in the specified ordering. We also investigated the distribution of cells along pseudotime, reasoning that local differences in density could indicate the relative speed of changes and the stability of intermediate states.
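For orientation, a minimal sketch of this trajectory construction with the SLICER R package, following its vignette; function names and parameter values here are assumptions taken from the public package documentation, and the input object is hypothetical, so treat this as illustrative rather than the authors' exact script:

```r
library(SLICER)  # available on CRAN
library(lle)     # locally linear embedding used by SLICER

# cells_x_genes: cells x genes log2 expression matrix restricted to the top 400 PCA genes
k <- select_k(cells_x_genes, kmin = 5)          # neighbourhood size for LLE
traj_lle <- lle(cells_x_genes, m = 3, k = k)$Y  # three-dimensional LLE embedding
traj_graph <- conn_knn_graph(traj_lle, 5)       # k-nearest-neighbour graph on the manifold
start <- 1                                      # user-defined starting cell (a fibroblast)
pseudotime <- cell_order(traj_graph, start)     # order cells by geodesic (shortest path) distance
branches <- assign_branches(traj_graph, start)  # detect branch points, eg alternative fates
```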
We estimated the density of cells in pseudotime using a Gaussian kernel density estimator, then calculated the free energy as max(density) − density. Non-proliferating M + G + T-transduced cells were used for the free energy calculation ( Fig. 1f ). Using an approach similar to that previously described in ref. 35 , we applied nonlinear regression to identify genes that are significantly related to the reprogramming process. Only non-proliferating cells were included in this analysis. For each gene with mean expression above 1, we fitted a generalized additive model (GAM) of the Tobit family (VGAM R package). The GAM approach uses cubic splines to fit a smooth nonlinear model, and the Tobit likelihood accounts for zero inflation by modelling gene expression dropout as data censoring. To avoid overfitting the data, which would result in a curve that is too 'wiggly', we constrained the GAM fits to use three degrees of freedom. We then identified genes that were significantly related to the reprogramming process using a likelihood ratio test, with a constant GAM as the null model. Using k -medoid clustering (pam algorithm from the cluster R package), we identified clusters of significantly related genes that showed similar trends over the reprogramming process ( Fig. 3a and Extended Data Fig. 7a ). Analysis of bulk RNA-seq and microarray data Bulk RNA-seq data were analysed similarly to the single-cell data, except that they were only normalized for sequencing depth. Specifically, raw counts from HTSeq-count were divided by the total number of mm10 mRNA reads from that sample and then multiplied by 1 × 10^6 to give counts per million. For differential expression analysis of LacZ versus MGT samples and of MGT and shNT versus MGT and sh Ptbp1 samples, raw counts were input into DESeq2 (ref. 22 ) in R and lists of differentially expressed genes were obtained (FDR < 0.05, fold change >1.25). Heat maps were generated using the heatmap.2 function in the 'gplots' package in R. The microarray data were processed using the limma package of Bioconductor 36 . Raw data were first background-corrected and normalized using the 'normexp' and 'quantile' methods, respectively. Next, control probes and low-intensity probes were filtered out using 1.1 times the 95% quantile of the negative controls as a cut-off. Lastly, probe intensity data were log2-transformed and replicate probes for each gene were averaged for subsequent analyses. PCAs were performed in R with the 'prcomp' function using all of the 34,378 detected genes and the 3D plot was generated with the 'scatterplot3d' package in R ( Extended Data Fig. 4p, q ). Analysis of splicing We aligned the bulk RNA-seq data (100-bp, paired-end reads) to mm10 using MapSplice version 2.1.4. To detect alternative splicing, we used rMATS 37 version 3.2.5 with Ensembl GRCm38.82 gene annotations and the novelSS (novel splice site) flag to identify unannotated splicing events. All other rMATS parameters were set to the default values. In Fig. 3j–l, n, o and Extended Data Fig. 9a–i , we used FDR < 0.05 and ΔPSI > 15 as cut-offs, resulting in 1,494 alternative splicing events for MGT and sh Ptbp1 versus MGT and shNT, and 879 alternative splicing events for MGT versus LacZ (see Supplementary Tables 4, 5 ).
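As a sketch of how such cut-offs, and the sign-consistency test described next, can be applied in R; the rMATS file and column names follow common rMATS output conventions but should be treated as assumptions, and rMATS reports ΔPSI as a fraction, so 15 percentage points corresponds to 0.15:

```r
# filter one rMATS output table for significant events (hypothetical path)
se <- read.delim("SE.MATS.JunctionCountOnly.txt")
sig <- subset(se, FDR < 0.05 & abs(IncLevelDifference) > 0.15)

# sign-consistency test between two comparisons over their overlapping events:
# dpsi_a and dpsi_b are ΔPSI vectors for the same events in the two comparisons
agree <- sign(dpsi_a) == sign(dpsi_b)
binom.test(sum(agree), length(agree), p = 0.5,
           alternative = "greater")  # one-sided; use "less" to test for disagreement
```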
In Fig. 3j, k , to determine whether the direction (sign) of the PSI change is consistent between two group pairs, we identified the overlapping alternative splicing events between the samples (69 overlapping events between MGT versus LacZ and cardiomyocytes versus cardiac fibroblasts, and 155 events between MGT and sh Ptbp1 versus MGT and shNT, and cardiomyocytes versus cardiac fibroblasts). We then conducted a binomial test by first transforming the paired ΔPSI data into either +1 or −1 based on whether their signs agreed, and then comparing the proportion of +1s to 50% using a one-sided binomial test. For ΔPSI (cardiomyocyte–cardiac fibroblast) versus (MGT–LacZ), the signs agreed for only 34.78% of events ( P = 0.0077); we therefore conclude that there is sufficient statistical evidence that the directions (signs) of the splicing changes in cardiomyocyte–cardiac fibroblast and MGT–LacZ are different. For cardiomyocyte–cardiac fibroblast versus sh Ptbp1 –shNT, the signs agreed for 83.22% of events ( P = 2.2 × 10−16); we therefore conclude that there is sufficient statistical evidence that the directions (signs) in cardiomyocyte–cardiac fibroblast and sh Ptbp1 –shNT are the same. To plot the positional distribution of Ptbp1-binding motifs, we used rMAPS 21 version 1.0.5. The rMAPS tool can only be used for exon-skipping events and has a database of known binding motifs for RNA-binding proteins, including Ptbp1. It considers exon-skipping events with FDR < 0.05 and ΔPSI > 5 to be statistically significant and all others insignificant; it then takes all exon-skipping events (both significant and insignificant) and uses the non-significant events to create a background profile. We extracted the exon-skipping events from the rMATS comparison of MGT and sh Ptbp1 versus MGT and shNT and ran rMAPS on this list to generate Fig. 3m . Proliferation assays Lentiviruses were packaged by transfecting HEK293T cells with Lipofectamine 2000 as previously described 12 . For packaging shRNA viruses, a total of 10 μg of plasmids, consisting of equal amounts of each of the four or five shRNAs targeting different regions of the gene, was used. Lentiviral Mef2c, Gata4, Tbx5, inducible MGT (iMGT) and large T antigen were added to cells at 5 μl of each virus (when co-transduced) per cm2 of surface area, and all other lentiviruses (rtTA, LacZ, cell-cycle-related genes and shRNAs) were used at 2.5 μl per cm2. For Extended Data Fig. 5b–m , pMXs-puro-MGT was used for iCM induction. In the EdU-incorporation assay, cells were pulsed with 10 μM EdU for three days before detection of incorporated EdU with the Click-iT Plus EdU Alexa Fluor 647 Flow Cytometry Assay Kit (ThermoFisher Scientific, C10634). Propidium iodide (Life Technologies, P3566) staining was performed as previously described 19 . For iMGT induction, doxycycline was added at 1 μg ml−1 and the medium was changed every 2–4 days. Puromycin (puro) selection was performed at 1 μg ml−1, with the medium changed every 2–4 days. For the cell-cycle synchronization assay, cardiac fibroblasts were treated with 400 ng ml−1 nocodazole (G2/M) or cultured in a low-serum condition (0.5% FBS, G0/G1) for four days before iCM induction; the normal serum condition was 10% FBS.
To generate CF-T cells, neonatal cardiac fibroblasts were transduced with pBabe-largeT and selected with 600 ng ml−1 Zeocin in fibroblast medium (IMDM, 20% FBS, 1× penicillin–streptomycin) from day 2 for two weeks. After antibiotic selection, the resulting CF-T cells were a relatively homogeneous pool of cells. All data are representative of multiple repeated experiments. shRNA library screen, immunofluorescence staining, flow cytometry and qRT–PCR For screening of the shRNA library targeting splicing factors, MGT expression was induced by 1 μg ml−1 doxycycline in icMEFs 19 that constitutively express the Tet transactivator and carry the MGT construct under the control of a Tet-ON promoter. Lentiviruses containing mixed clones of shRNAs targeting each gene were added to the cells at 5 μl per cm2. For Ptbp1 protein expression, western blotting was performed as previously described 14 (anti-Ptbp1, Cell Signaling 8776, 1:500). Adult cardiac fibroblasts (AdCF) and tail-tip fibroblasts (AdTTF) were isolated using the explant method as previously described 15 . Clone 271 was used for all sh Ptbp1 -related experiments except the initial screen, whereas mixed clones of sh Cd200 viruses were used for sh Cd200 -related experiments. Information on the shRNAs is listed in Supplementary Table 8 . Immunofluorescence staining and flow cytometry were performed as previously described 12 . Primary antibodies were used at the following dilutions: rabbit anti-GFP (Invitrogen, A11122, 1:500), chicken anti-GFP (Abcam, ab13970, 1:1,500), anti-α-SMA (Sigma, A2547, 1:200), anti-SM22α (Abcam, ab14106, 1:200), anti-α-actinin (Sigma, A7811, 1:500), anti-Cx43 (Sigma, C6219, 1:200), APC-Thy1.2 (eBioscience, 17-0902-81, 1:100) and APC-Cd200 (Biolegend, 123809, 1:200). Images were captured using an EVOS FL Auto Cell Imaging System (Life Technologies). All images shown in this study were overlaid with Hoechst nuclear staining, except for live images. For quantification, 10–30 images from multiple repeated experiments were randomly taken at 10×, 20× or 40× magnification at the same exposure and then counted in a blinded manner. For Extended Data Fig. 6g–i , neonatal hearts were minced into small pieces and plated with fibroblast medium (the explant method 14 ). After seven days of migration, the adherent cells were either immunostained in situ (Cy3-α-SMA, Sigma, C6198, 1:500; APC-Thy1.1, eBioscience, 17-0900-82, 1:100; PE-CD31, Biolegend, 102408, 1:200), or trypsinized, filtered through 40-μm cell strainers (Thermo Scientific), immunostained for Thy1.2 and α-SMA/CD31, and then analysed by flow cytometry. All flow cytometry data were collected on a Beckman Coulter Cyan ADP flow cytometer (UNC Flow Cytometry Core Facility) and analysed with FlowJo software (Tree Star). qRT–PCR was performed as previously described 12 (see Supplementary Table 8 for primer sequences). All data are representative of multiple repeated experiments. Statistics Unless otherwise stated, values are expressed as mean ± standard deviation (s.d.) or standard error of the mean (s.e.m.) of multiple biologically independent samples. Statistical tests performed include Student's t -test, one-way ANOVA followed by post hoc correction, linear regression, Spearman correlation, the Kolmogorov–Smirnov test, the binomial test, the likelihood ratio test and the χ2 test. The application and results of these tests are described in detail in the Methods and figure legends.
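As one concrete example, a minimal R sketch of the one-way ANOVA with Tukey post hoc comparisons used to nominate positive- or negative-selection markers (object names are hypothetical):

```r
# expr_gene: log2-transformed expression of one gene across cells
# group: factor with levels such as Fib, iFib, pre-iCM and iCM
fit <- aov(expr_gene ~ group)
summary(fit)    # one-way ANOVA P value for the group effect
TukeyHSD(fit)   # pairwise post hoc comparisons between the four groups
```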
Generally, * P < 0.05 was considered statistically significant, ** P < 0.01 highly significant and *** P < 0.001 very highly significant. All data are representative of multiple repeated experiments. Data availability The RNA-seq data that support the findings of this study are available in the Gene Expression Omnibus (GEO) under the accession number GSE98571 . Source Data for all figures are available in the online version of the paper.
Reversing scar tissue after a heart attack to create healthy heart muscle: this would be a game-changer in the field of cardiology and regenerative medicine. In the lab, scientists have shown it's possible to change fibroblasts (scar tissue cells) into cardiomyocytes (heart muscle cells), but sorting out the details of how this happens hasn't been easy, and using this kind of approach in clinics or even other basic research projects has proven elusive. Now, in a new study published today in Nature, UNC researchers report a breakthrough. They have used single cell RNA sequencing technology in combination with mathematical modeling and genetic and chemical approaches to delineate the step-by-step molecular changes that occur during cell fate conversion from fibroblast to cardiomyocyte. The scientists, led by Li Qian, PhD, assistant professor of pathology and laboratory medicine at the UNC School of Medicine, not only successfully reconstructed the routes a single cell could take in this process but also identified underlying molecular pathways and key regulators important for the transformation from one cell type to another. "We used direct cardiac reprogramming as an example in this study," said Qian, the senior author of this paper and member of the UNC McAllister Heart Institute, "But the pipelines and methods we've established here can be used in any other reprogramming process, and potentially other unsynchronized and heterogeneous biological processes." When we are babies, embryonic stem cells throughout our bodies gradually change into a variety of highly specialized cell types, such as neurons, blood cells, and heart muscle cells. For a long time, scientists thought these specific cell types were terminal; they could not change again or be reverted back to a state between embryonic and their final differentiated stage. Recent discoveries, though, show it's possible to revert terminally differentiated somatic cells to a pluripotent state - a kind of "master" cell that can self-produce and potentially turn into any kind of cell in the body. Scientists have also figured out how to convert one kind of differentiated somatic cell type into another without detouring through the pluripotent stage or the original progenitor stage. Such findings shifted the paradigm of cellular hierarchy and revolutionized stem cell research and the field of regenerative medicine. Yet, figuring out how to study the specifics of these processes to leverage them for clinical and basic research has been difficult. Direct cardiac reprogramming, a promising approach for cardiac regeneration and disease modeling that the Qian Lab has pioneered and fine-tuned in the past several years, involves direct conversion of cardiac non-myocytes into induced cardiomyocytes (iCMs) that closely resemble endogenous CMs. Like any reprogramming process, the many cells that are being reprogrammed don't do so at the same time. "It's an 'asynchronous' process," Qian said. "Conversions occur at different intervals. So, at any stage, the cell population always contains unconverted, partially reprogrammed, and fully reprogrammed cells. Therefore, cellular reprogramming is 'heterogeneous,' which makes it difficult to study using traditional approaches." In this study, by using microfluidic single-cell RNA sequencing techniques, Qian's lab addressed the two main issues of 'asynchronous' programming and heterogeneous cell populations. They analyzed global transcriptome changes during fate conversion from fibroblasts to iCMs. 
Using mathematical algorithms, they identified molecularly distinct subpopulations of cells along the reprogramming pipeline. Then they reconstructed routes of iCM formation based on simulation and experimental validation. These routes provided an unprecedented, high-resolution roadmap for further studies of the mechanisms of cell conversion. "Some of what we found is clinically important," Qian said, "For example, we know that after a heart attack, cardiac fibroblasts around the injured area are immediately activated and become highly proliferative but this proliferative capacity decreases over time. How to take advantage of the varied cell cycle status of fibroblasts over the progression of a heart attack and its aftermath would certainly broaden the application of cellular reprogramming for patients and optimize outcomes." Cardiac fibroblasts that Li Qian's lab changed into iCMs, induced cardiomyocytes that compose healthy heart muscle. Credit: Qian Lab, UNC School of Medicine Qian added, "We demonstrated the routes between cell proliferation and cell reprogramming. We also showed experimental evidence that altering the cell cycle statuses of starting fibroblasts would change the outcomes of new myocyte formation." Her team discovered that the molecular features of subpopulations of fibroblasts were differentially suppressed during reprogramming, suggesting that the susceptibility of cells to reprogramming varies. Interestingly, this susceptibility coincides with the timing of cardiomyocyte differentiation during heart development: signatures in the intermediate populations that correspond to earlier stages of heart development were more resistant to alteration. This suggests that the recent epigenetic memories of cells might be more easily erased, so the fibroblast subpopulations with such epigenetic features are more easily converted into cardiomyocytes. "Manipulating epigenetic memories - not just changing their current epigenetic status - could be crucial for altering a cell's fate for therapeutic value," Qian said. With further analysis of global gene expression changes during reprogramming, the researchers identified an unexpected downregulation of factors involved in mRNA processing and splicing. "This is a big surprise to us," Qian said. "We found that some of the basic cell machinery is dramatically changed, like the machinery for protein production, transportation and degradation, and as we document in detail - mRNA splicing machinery." The team continued with a detailed functional analysis of the top candidate, the splicing factor Ptbp1. The evidence suggests it is a critical barrier to the acquisition of cardiomyocyte-specific splicing patterns in fibroblasts. Qian's research showed that Ptbp1 depletion promoted the formation of more iCMs. "The new knowledge learned from our mechanistic studies of how a single splicing factor regulates the fate conversion from fibroblast to cardiomyocyte is really a bonus to us," Qian continued. "Without the unbiased nature of this approach, we would not gain such fresh, valuable information about the reprogramming process. And that's the beauty of our platform." Additional quantitative analysis revealed a strong correlation between the expression of each reprogramming factor and the progress of individual cells through the reprogramming process, and led to the discovery of new surface markers for the enrichment of iCMs. Qian said, "I believe the interdisciplinary approaches in this paper are very powerful.
They helped us identify previously unrecognized functions or mechanisms, as well as better understand the nature of a cell and the progression of a disease. Ultimately, this approach could benefit not only heart disease patients, but also patients with cancers, diabetes, neurological diseases, and other conditions. We are very excited about the road ahead."
nature.com/articles/doi:10.1038/nature24454
Medicine
Prone positioning reduces the need for breathing tubes in COVID-19 patients, suggests in-depth analysis
Efficacy of awake prone positioning in patients with covid-19 related hypoxemic respiratory failure: systematic review and meta-analysis of randomised trials, The BMJ (2022). DOI: 10.1136/bmj-2022-071966 Journal information: British Medical Journal (BMJ)
https://dx.doi.org/10.1136/bmj-2022-071966
https://medicalxpress.com/news/2022-12-prone-positioning-tubes-covid-patients.html
Abstract Objective To determine the efficacy and safety of awake prone positioning versus usual care in non-intubated adults with hypoxemic respiratory failure due to covid-19. Design Systematic review with frequentist and bayesian meta-analyses. Study eligibility Randomized trials comparing awake prone positioning versus usual care in adults with covid-19 related hypoxemic respiratory failure. Information sources were Medline, Embase, and the Cochrane Central Register of Controlled Trials from inception to 4 March 2022. Data extraction and synthesis Two reviewers independently extracted data and assessed risk of bias. Random effects meta-analyses were performed for the primary and secondary outcomes. Bayesian meta-analyses were performed for endotracheal intubation and mortality outcomes. GRADE certainty of evidence was assessed for outcomes. Main outcome measures The primary outcome was endotracheal intubation. Secondary outcomes were mortality, ventilator-free days, intensive care unit (ICU) and hospital length of stay, escalation of oxygen modality, change in oxygenation and respiratory rate, and adverse events. Results 17 trials (2931 patients) met the eligibility criteria. 12 trials were at low risk of bias, three had some concerns, and two were at high risk. Awake prone positioning reduced the risk of endotracheal intubation compared with usual care (crude average 24.2% v 29.8%, relative risk 0.83, 95% confidence interval 0.73 to 0.94; high certainty). This translates to 55 fewer intubations per 1000 patients (95% confidence interval 87 to 19 fewer intubations). Awake prone positioning did not significantly affect secondary outcomes, including mortality (15.6% v 17.2%, relative risk 0.90, 0.76 to 1.07; high certainty), ventilator-free days (mean difference 0.97 days, 95% confidence interval −0.5 to 3.4; low certainty), ICU length of stay (−2.1 days, −4.5 to 0.4; low certainty), hospital length of stay (−0.09 days, −0.69 to 0.51; moderate certainty), and escalation of oxygen modality (21.4% v 23.0%, relative risk 1.04, 0.74 to 1.44; low certainty). Adverse events related to awake prone positioning were uncommon. Bayesian meta-analysis showed a high probability of benefit with awake prone positioning for endotracheal intubation (non-informative prior, mean relative risk 0.83, 95% credible interval 0.70 to 0.97; posterior probability for relative risk <0.95=96%) but lower probability for mortality (0.90, 0.73 to 1.13; <0.95=68%). Conclusions Awake prone positioning compared with usual care reduces the risk of endotracheal intubation in adults with hypoxemic respiratory failure due to covid-19 but probably has little to no effect on mortality or other outcomes. Systematic review registration PROSPERO CRD42022314856. Introduction Patients with covid-19 can develop hypoxemic respiratory failure, potentially necessitating admission to hospital for supplemental oxygen or to an intensive care unit (ICU) for mechanical ventilation. 1 2 3 Although most patients have mild disease, some will develop severe disease, including acute respiratory distress syndrome. 2 Interventions aimed at limiting illness severity and reducing the need for invasive mechanical ventilation are needed. Non-pharmacological interventions such as prone positioning are life saving for patients with moderate-severe acute respiratory distress syndrome receiving mechanical ventilation. 
4 5 6 Although high certainty evidence exists for the use of prone positioning in patients receiving invasive ventilation for non-covid-19 related acute respiratory distress syndrome, 5 6 it is unclear whether awake prone positioning improves outcomes in spontaneously breathing non-intubated patients with covid-19. Previous systematic reviews and meta-analyses of observational studies suggested that awake prone positioning was associated with improved oxygenation and low endotracheal intubation rates. 7 8 9 10 Despite these findings, the tolerability, safety, and efficacy of awake prone positioning remain unclear in patients with covid-19 related hypoxemic respiratory failure. A prospective meta-analysis of six individual randomized controlled trials reported a reduction in the risk of treatment failure (ie, a composite outcome of intubation or death) and a reduction in the risk of endotracheal intubation. The results of this prospective meta-analysis must be interpreted cautiously as the effect was probably driven by one of the included randomized controlled trials. 11 Two recent systematic reviews and meta-analyses had limitations, such as being driven by the results of the prospective meta-analysis 12 or combining both observational and randomized studies. 10 Moreover, a comprehensive systematic review on awake prone positioning in patients with covid-19 that also incorporates recent trials is needed. Given the uncertainty about the clinical benefits of awake prone positioning 13 and recent evidence from three trials with more than 900 additional patients, 14 15 16 we performed a systematic review and meta-analysis. We used both frequentist and bayesian methods to evaluate the efficacy and safety of awake prone positioning compared with usual care in trials of non-intubated adults with hypoxemic respiratory failure due to covid-19. Methods We conducted this systematic review and meta-analysis according to the Cochrane Handbook for Systematic Reviews of Interventions, 17 adhered to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement (see supplemental eMethods 1), 18 and prospectively registered the protocol on PROSPERO. Search strategy and study selection The Cochrane Central Register of Controlled Trials, Embase, and Medline were systematically searched from inception to 4 March 2022. We also searched the preprint server medRxiv for relevant unpublished studies and ClinicalTrials.gov for ongoing or recently completed trials. The reference lists of included studies were reviewed for any additional eligible studies. A medical librarian designed the search strategy for all databases. A second medical librarian subsequently and independently reviewed the search strategy. 19 The search terms are available in supplemental eMethods 2. Two reviewers independently, and in duplicate, screened the list of titles and abstracts. Reviewers then assessed the full texts of potentially eligible studies. To be eligible for inclusion, studies needed to use a randomized controlled trial design, including cluster randomized controlled trials and quasi-randomized controlled trials using the Cochrane-suggested definitions of these study types 17 20 21 ; include hospital patients with hypoxemic respiratory failure due to covid-19; compare awake prone positioning with usual care (no prone positioning); and report on at least one of the outcomes of interest. Reviewers excluded non-randomized studies.
Outcomes The primary outcome was endotracheal intubation at the longest time point reported. Secondary outcomes included mortality at the longest reported interval, hospital length of stay, ICU length of stay, invasive ventilator-free days, escalation of oxygen modality (defined as change from baseline to addition of high flow oxygen, non-invasive ventilation, or continuous positive airway pressure), changes in oxygenation and respiratory rate as reported by the authors, and adverse events (as defined in the included trials). Data extraction and risk of bias assessment Abstracted data included study characteristics (trial design, eligibility criteria, dates of recruitment, number of centers, countries); study population (age, sex, body mass index, severity of hypoxemia, and type of care unit (eg, ward or ICU) at enrolment); oxygenation modality at baseline; descriptions of the trial intervention, control group, and co-interventions; and trial outcomes. Two authors, independently and in duplicate, assessed risk of bias using version 2 of the Cochrane risk-of-bias tool. Reviewers classified trials as low risk of bias, some concerns, or high risk of bias based on their assessment of five domains: bias arising from the randomization process, bias due to deviations from the intended intervention assignment, bias from missing outcome data, bias in measurement of the outcome, and bias in selection of the reported result. Data synthesis The primary analysis was conducted using a frequentist approach. Dichotomous variables were pooled using a random effects model (DerSimonian and Laird), with effect estimates reported as relative risks with corresponding 95% confidence intervals, and continuous variables as mean differences with corresponding 95% confidence intervals. Mean values and standard deviations were estimated from medians and interquartile ranges when required, as previously described. 22 Oxygen saturation to fraction of inspired oxygen (SpO2:FiO2) ratios were estimated from arterial oxygen tension to fraction of inspired oxygen (PaO2:FiO2) ratios as previously described. 23 For cluster randomized controlled trials we planned to account for the design effect 17 using the intraclass correlation reported in the study or in other similar studies, but these trials did not contribute to any outcomes that were meta-analyzed. Trials with no events in both arms were excluded from the primary analyses. We assessed the percentage of the total variance due to heterogeneity between trials using the I2 statistic. 24 Intention-to-treat data were used whenever possible. Preplanned secondary bayesian analyses for the endotracheal intubation and mortality outcomes were also performed to assess the robustness of the results according to varying, prespecified prior beliefs about the effect of awake prone positioning. The bayesian approach differs from the conventional frequentist approach in that it combines prior beliefs with the observed data to yield direct probability statements about the treatment effect. We used established informative priors for heterogeneity between studies. 25 We used non-informative priors for mean effects, followed by priors informed by a previously published meta-analysis of controlled observational studies involving 1526 patients pooled from 10 studies 9 (see supplemental eTable 1 for intubation priors and mortality priors) and hypothetical priors based on a proposed framework in critical care. 25 26 Priors were defined and declared a priori.
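To make the pooling concrete, here is a minimal R sketch of both approaches using the metafor and bayesmeta packages; the authors performed their analyses in Stata, so this is an illustrative analogue, and the data frame columns and prior scales below are placeholders rather than the study's declared priors:

```r
library(metafor)    # frequentist meta-analysis
library(bayesmeta)  # bayesian normal-normal hierarchical meta-analysis

# trials: one row per trial with event counts and arm sizes (hypothetical columns)
dat <- escalc(measure = "RR",
              ai = ev_prone, n1i = n_prone,  # awake prone positioning arm
              ci = ev_usual, n2i = n_usual,  # usual care arm
              data = trials)                 # yields log relative risks (yi) and variances (vi)

# DerSimonian-Laird random effects model
freq <- rma(yi, vi, data = dat, method = "DL")
predict(freq, transf = exp)                  # pooled relative risk with 95% confidence interval

# bayesian random effects model with a vague prior on the mean effect and a
# weakly informative half-normal prior on between-trial heterogeneity
bay <- bayesmeta(y = dat$yi, sigma = sqrt(dat$vi),
                 mu.prior.mean = 0, mu.prior.sd = 4,
                 tau.prior = function(t) dhalfnormal(t, scale = 0.5))
exp(bay$summary["median", "mu"])             # posterior median relative risk
```

Note that bayesmeta evaluates the posterior by direct numerical integration rather than the Markov chain Monte Carlo sampling described next.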
Bayesian random effects meta-analysis was performed using normal-normal hierarchical models and a hybrid random walk Metropolis-Hastings algorithm with Gibbs updates and blocked model parameters, four chains, random initial chain values, a minimum of 40 000 Markov chain Monte Carlo samples with 10 000 burn-in, and thinning of 10 to estimate posterior distributions of effects. Convergence was confirmed visually and with Gelman-Rubin diagnostic statistics all less than 1.1 (see supplemental eFigure 7). Results from the bayesian analyses were reported as relative risks and corresponding centile based 95% credible intervals. Trial sequential analysis was performed to assess risks of random error in the conventional meta-analyses and to determine whether the required information size was reached according to prespecified effect sizes of interest (see supplemental eMethods 3). 27 In addition, we performed several preplanned subgroup analyses according to risk of bias, duration of awake prone positioning, severity of baseline hypoxemia, geographic/economic setting, location at randomization, and baseline mode of oxygen delivery. The cut points defining the subgroups for duration of awake prone positioning (≥5 h/day v <5 h/day) and severity of baseline hypoxemia (SpO2:FiO2 <150 v ≥150) were chosen as they approximated the median values in the COVI-PRONE trial 14 and the Ehrmann et al prospective meta-analysis, 11 which represented the largest trials with data available to us at the time of protocol development. Our assumption was that these cut points would approximate the median of the medians across all trials. We conducted several preplanned sensitivity analyses: excluding unpublished trials (ie, abstracts and preprints), trials reported as stopping early, outcomes from the individual trials of the prospective meta-analysis (instead substituting the pooled outcomes from the prospective meta-analysis of randomized trials), trials with no events in either arm, cluster randomized trials, quasi-randomized trials, and studies with more than low risk of bias. A post hoc sensitivity analysis was conducted with a random effects model using a restricted maximum likelihood approach with the Hartung-Knapp-Sidik-Jonkman confidence interval correction. 28 Because more randomized controlled trials were identified than anticipated, we modified the analysis plan post hoc to exclude any quasi-randomized trials from the primary and secondary outcome analyses and instead include such trials in a sensitivity analysis. We performed a preplanned meta-regression to assess the association between the average daily duration of awake prone positioning (predictor variable) and the primary outcome of endotracheal intubation. We examined small study effects by inspecting funnel plots and the results of Egger's test. 29 Frequentist and bayesian analyses were performed in Stata (versions 16.0 and 17.0). We used trial sequential analysis software (version 0.9.5.10 Beta, Copenhagen Trial Unit, Center for Clinical Intervention Research, Rigshospitalet, Copenhagen, Denmark). Two-sided P values <0.05 were considered statistically significant. GradePro software was used to summarize Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) recommendations and to calculate absolute effects based on the baseline risk and relative effect size.
We used the GRADE approach to assess the certainty of evidence for every outcome based on the following domains: risk of bias, inconsistency, indirectness, imprecision, and publication bias. 30 Certainty of the evidence was classified as high, moderate, low, or very low. Patient and public involvement Two members of the public with experience of covid-19 were engaged to discuss the systematic review and meta-analysis. They shared that awake prone positioning was important and that any treatment that could reduce the likelihood of intubation was meaningful and important from a patient perspective. One patient partner associated with one of the centers reviewed the revised manuscript for feedback. Results Search results Of 2330 citations, 109 articles underwent full text review ( fig 1 ). Seventeen trials from 12 publications met the eligibility criteria and were included in the quantitative analysis. 11 12 14 15 16 31 32 33 34 35 36 37 Fig 1 Summary of trial identification for review and meta-analysis. *Twelve articles representing 17 separate trials were identified. One article was a prospective meta-analysis of six individual randomized trials. Trial and patient characteristics The 17 included trials enrolled 2931 patients ( table 1 ). 11 12 14 15 16 31 32 33 34 35 36 37 Six individual randomized controlled trials (1126 patients) were reported together in one publication as a prospective meta-analysis. 11 We extracted data and outcomes from each individual trial separately whenever possible. Fourteen conventional randomized controlled trials enrolled 2363 patients, 11 12 14 15 31 32 33 35 two cluster randomized controlled trials enrolled 67 patients, 34 36 and one quasi-randomized trial enrolled 501 patients. 16 Reviewers identified one unpublished trial that was included in a recent meta-analysis 12 and three trials based on trial registrations identified in our search that were subsequently published. 14 16 37 Table 1 Characteristics of included trials examining awake prone positioning in non-intubated adults with hypoxemic respiratory failure due to covid-19. Supplemental eTable 2 presents the enrollment criteria for each trial. The median proportion of women in the awake prone positioning groups was 36% (interquartile range 25-40%) and in the usual care groups was 33% (23-40%). The median baseline peripheral oxygen saturation to fraction of inspired oxygen ratio (SpO2:FiO2) at randomization was 169 (interquartile range 152-233) in the awake prone positioning groups and 167 (156-220) in the usual care groups. Four trials (567 patients) were conducted exclusively in ICUs, 11 15 32 six trials (699 patients) were conducted on medical wards, 12 31 33 34 36 37 and seven trials (1665 patients) were conducted in mixed settings, including ICUs, high dependency units, and medical wards. 11 14 16 35 Management of the control group was usual care in 11 trials (1759 patients), 11 12 14 16 31 32 33 34 35 36 37 high flow nasal cannula (similar to the intervention group) plus usual care in five trials (1097 patients), 11 and non-invasive ventilation (similar to the intervention group) plus usual care in one trial (75 patients). 15 For the intervention group, the target duration of awake prone positioning ranged from 'as tolerated' in eight trials 11 16 36 to at least 16 hours each day in one trial.
35 The actual duration of prone positioning was reported in 13 trials, 11 14 16 31 32 33 34 35 with a median of 2.8 (interquartile range 2.2-5) hours per day. Risk of bias in included studies Supplemental eTable 3 shows the risk of bias assessment for the primary outcome of endotracheal intubation, and supplemental eTable 4 shows the secondary outcome of mortality. Twelve of the 17 trials were classified as low risk of bias (2204 patients), 11 14 31 34 35 36 37 three trials had some concerns (151 patients), 12 32 33 and two trials (576 patients) 15 16 were classified as high risk of bias owing to allocation sequence generation 16 and selection of reported results. 15 Primary outcome: endotracheal intubation Pooled analysis of 14 trials (2363 patients) 11 12 14 15 31 32 33 35 37 for the primary outcome ( fig 2 ) showed that awake prone positioning reduced the risk of endotracheal intubation compared with usual care (crude average 24.2% with awake prone positioning v 29.8% with usual care; relative risk 0.83 (95% confidence interval 0.73 to 0.94); I2=0%; high certainty). The absolute effect was 55 fewer intubations per 1000 patients (95% confidence interval 87 to 19 fewer intubations) receiving awake prone positioning. Visual inspection of the funnel plot and the results of Egger's test suggested a low risk of small study effects (see supplemental eFigure 1). Fig 2 Forest plots for awake prone positioning compared with usual care for intubation and mortality in adults with hypoxemic respiratory failure due to covid-19. Six trials assessed intubation at 28 days (six Ehrmann trials), two trials assessed intubation at any time during hospital admission (Johnson, Fralick), three trials assessed intubation at 30 days (Alhazzani, Rosén, Harris), one trial assessed intubation at 14 days (Rampon), and two trials did not specify (Jayakumar, Hashemian). Two trials had no intubation events in both arms and were not included in this analysis (Taylor, Kharat). The quasi-randomized trial (Qian) was not included in this analysis. Six trials assessed mortality at 28 days (five Ehrmann trials, Harris), two trials assessed in-hospital mortality (Johnson, Fralick), two trials assessed mortality during intensive care unit admission (Jayakumar, Hashemian), one trial assessed mortality at 14 days (Rampon), one trial assessed mortality at 30 days (Rosén), and one trial assessed mortality at 60 days (Alhazzani). Three trials had no mortality events in both arms and were not included in this analysis (Ehrmann (Ireland), Taylor, Kharat). The quasi-randomized trial (Qian) was not included in this analysis. APP=awake prone positioning. Secondary outcomes Pooled analysis of 13 trials (2339 patients) 11 12 14 15 31 32 33 35 37 evaluating mortality ( fig 2 ) did not show a significant difference in mortality between the two groups (15.6% with awake prone positioning v 17.2% with usual care; relative risk 0.90 (0.76 to 1.07); I2=0%; high certainty). Visual inspection of the funnel plot and the results of Egger's test suggested a low risk of small study bias for mortality (see supplemental eFigure 2). Three randomized trials (505 patients) reported ventilator-free days (see supplemental eFigure 3). 14 33 35 The mean difference between awake prone positioning and usual care was 0.97 days (95% confidence interval −0.5 to 3.4; I2=9.8%; low certainty). Length of stay in the ICU (see supplemental eFigure 4) was reported in 11 randomized controlled trials (1792 patients).
11 12 14 15 32 35 No significant difference was found between awake prone positioning and usual care (−2.1 days (−4.5 to 0.4); I2=86%; low certainty). Eleven randomized trials (1980 patients) reported on hospital length of stay (see supplemental eFigure 5). 11 12 14 33 35 37 Little to no difference was found between awake prone positioning and usual care (−0.09 days (−0.69 to 0.51); I2=0%; moderate certainty). Escalation of oxygen modality was reported in nine trials (1611 patients, see supplemental eFigure 6), 11 14 32 33 with no difference between the two groups (21.4% with awake prone positioning v 23.0% with usual care; relative risk 1.04 (95% confidence interval 0.74 to 1.44); I2=57%; low certainty). The prospective meta-analysis of six trials and eight other trials reported on changes in oxygenation, 11 14 15 16 31 32 33 34 36 and seven trials reported on changes in respiratory rate 11 34 (see supplemental eTable 5). Significant heterogeneity in the reported oxygenation indices and the time of outcome assessment precluded pooling of these data. The most commonly reported adverse events in the awake prone positioning groups (1469 patients) were unintentional dislodgement of vascular catheters (37 patients, 2.5%) and pain or discomfort (30 patients, 2%). Other reported adverse events in the awake prone positioning groups included nausea and vomiting (17 patients, 1.2%) and skin breakdown or pressure ulcers (10 patients, 0.7%) (see supplemental eTable 6). Bayesian analyses The bayesian analysis using non-informative priors ( table 2 , supplemental eFigure 7) for endotracheal intubation showed a mean relative risk of 0.83 (95% credible interval 0.70 to 0.97; posterior probability for relative risk <0.95=96%). Similar results were found in analyses using informative priors (see supplemental eTable 1) that were enthusiastic, minimally skeptical, or moderately skeptical, as well as hypothetical priors ( table 2 ). Table 2 Bayesian meta-analysis of endotracheal intubation and mortality outcomes. The bayesian analysis of mortality was concordant with the results of the frequentist analysis and suggested that the probability of benefit on mortality was relatively low, with a mean relative risk using a non-informative prior of 0.90 (95% credible interval 0.73 to 1.13; posterior probability for relative risk <0.95=68%, table 2 ). Table 2 presents estimates using the informative priors. Trial sequential analysis Using trial sequential analysis, the relative risk for endotracheal intubation was 0.83 (trial sequential analysis adjusted confidence interval 0.70 to 0.99), which conclusively favored awake prone positioning (see supplemental eFigure 8). For mortality, the relative risk was 0.90 (0.45 to 1.82). The acquired information size was less than the required information size and no boundaries were crossed; therefore, the trial sequential analysis was inconclusive for mortality (see supplemental eFigure 9). Similarly, the trial sequential analysis did not favor awake prone positioning for the other secondary outcomes, including ventilator-free days and ICU and hospital length of stay (see supplemental eFigure 9). Sensitivity analyses Sensitivity analyses excluding one unpublished trial (354 patients) 12 (see supplemental eFigure 10), two high risk of bias trials (576 patients), 15 16 and three trials with some concern for risk of bias (151 patients) 12 32 33 (see supplemental eFigure 11) yielded results that were consistent with the primary analysis.
Similarly, when excluding four trials (414 patients) that stopped early 12 31 33 35 (see supplemental eFigure 12), using overall pooled results from the prospective meta-analysis report 11 (supplemental eFigure 13), excluding three trials (91 patients) with no events in either arm 11 34 36 (see supplemental eFigure 14), and including one trial (501 patients) with quasi-randomized allocation 16 (see supplemental eFigure 15), results were consistent with the primary analysis. A sensitivity analysis including one quasi-randomized trial 16 did not change the posterior probabilities in the bayesian analysis for intubation and mortality. A post hoc sensitivity analysis was conducted with a random effects model using a restricted maximum likelihood approach with the Hartung-Knapp-Sidik-Jonkman confidence interval correction, which did not substantively change the results for endotracheal intubation and mortality outcomes (see supplemental eTable 7). Subgroup analyses Figure 3 , figure 4 , figure 5 , figure 6 , and figure 7 show the effect of awake prone positioning in prespecified subgroups for the primary outcome of endotracheal intubation. When trials were grouped according to trial level median duration of awake prone positioning, those with median duration of prone positioning ≥5 hours/day (three trials, 905 patients) showed a relative risk for endotracheal intubation of 0.78 (95% confidence interval 0.66 to 0.93; fig 3 ). 11 14 35 In trials with a median duration of awake prone positioning <5 hours/day (seven trials, 969 patients) 11 31 33 the relative risk was 0.92 (0.76 to 1.12, P for interaction=0.22). When trials were compared according to baseline severity of hypoxemia at trial level, the relative risk of endotracheal intubation in those with more severe hypoxemia (SpO 2 :FiO 2 <150; two trials, 830 patients) 11 14 was 0.77 (0.64 to 0.92; fig 4 ), whereas in those trials with less severe baseline hypoxemia (SpO 2 :FiO 2 ≥150; 10 trials, 1428 patients) 11 12 31 33 35 37 the relative risk was 0.92 (0.77 to 1.10, P for interaction=0.17). When the effect of awake prone positioning on endotracheal intubation was stratified by baseline oxygen mode of delivery, in trials exclusively using high flow oxygen or non-invasive ventilation at baseline (nine trials, 1583 patients) 11 14 15 35 the relative risk for endotracheal intubation was 0.81 (0.71 to 0.92; fig 5 ). In comparison, trials that used mixed modes of oxygen delivery (three trials, 369 patients) 12 31 32 had a relative risk of 1.07 (0.49 to 2.34), and trials using only low flow oxygen (three trials, 411 patients) 14 33 37 had a relative risk of 1.18 (0.63 to 2.19, P for interaction=0.81). One trial reported outcomes separately according to baseline mode of oxygen delivery and was pooled in two subgroups accordingly. 14 When trials were stratified by type of hospital unit at randomization, those performed exclusively in ICUs (four trials, 567 patients) 11 15 32 had a relative risk for endotracheal intubation of 0.86 (0.69 to 1.07) compared with 0.81 (0.69 to 0.95) in the six trials (1164 patients) performed in mixed settings ( fig 6 ). 11 14 35 In the four trials performed exclusively on general wards (632 patients), 12 31 33 37 the relative risk for endotracheal intubation was 0.96 (0.43 to 2.13, P for interaction=0.85). 
In 11 trials performed in high income countries (1798 patients), 11 12 14 31 33 35 37 the relative risk for endotracheal intubation was 0.89 (0.77 to 1.04) compared with 0.69 (0.55 to 0.87, P for interaction=0.07) in three trials (565 patients) 11 15 32 performed in low to middle income countries ( fig 7 ). Fig 3 Forest plot for subgroup analysis of awake prone positioning compared with usual care for endotracheal intubation in patients with hypoxemic respiratory failure due to covid-19 according to duration of awake prone positioning. Two trials had no intubation events in both arms (Taylor, Kharat) and four trials did not report the median duration of prone positioning (Jayakumar, Hashemian, Rampon, Harris); these were excluded from this analysis. APP=awake prone positioning. Fig 4 Forest plot for subgroup analysis of awake prone positioning compared with usual care for endotracheal intubation in patients with hypoxemic respiratory failure due to covid-19 according to median baseline oxygen saturation to fraction of inspired oxygen (SpO2:FiO2). Two trials had no intubation events in both arms (Taylor, Kharat) and three trials did not report the baseline SpO2:FiO2 (Johnson, Hashemian, Qian); these were excluded from this analysis. One trial reported baseline arterial oxygen tension to fraction of inspired oxygen (PaO2:FiO2), which was converted to SpO2:FiO2. APP=awake prone positioning. Fig 5 Forest plot for subgroup analysis of awake prone positioning compared with usual care for endotracheal intubation in patients with hypoxemic respiratory failure due to covid-19 according to baseline mode of oxygen delivery. Two trials had no intubation events in both arms (Taylor, Kharat) and were excluded from this analysis. One trial reported outcomes separately according to baseline mode of oxygen delivery (Alhazzani). APP=awake prone positioning; NIV=non-invasive ventilation. Fig 6 Forest plot for subgroup analysis of awake prone positioning compared with usual care for endotracheal intubation in patients with hypoxemic respiratory failure due to covid-19 according to location in hospital. Two trials had no intubation events in both arms (Taylor, Kharat) and were excluded from this analysis. APP=awake prone positioning; ICU=intensive care unit. Fig 7 Forest plot for subgroup analysis of awake prone positioning compared with usual care for endotracheal intubation in patients with hypoxemic respiratory failure due to covid-19 according to country status (low or middle income and high income). Two trials had no intubation events in both arms (Taylor, Kharat) and were excluded from this analysis. Trials were classified as low or middle income countries or high income countries based on the Organisation for Economic Co-operation and Development classification in 2021. APP=awake prone positioning; HIC=high income countries; LMIC=low or middle income countries. When meta-regression was used, no significant association was found between the median daily duration of awake prone positioning and the log odds ratio for endotracheal intubation in the 10 trials (1874 patients) that reported a mean or median duration of awake prone positioning (β coefficient −0.053, 95% confidence interval −0.14 to 0.03, P=0.19) (see supplemental eFigure 16).
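A minimal sketch of such a trial-level meta-regression with the metafor R package (the authors used Stata; variable names here are hypothetical):

```r
library(metafor)
# dat: per-trial log odds ratios (yi), sampling variances (vi) and the
# mean or median daily hours of awake prone positioning (duration)
mr <- rma(yi, vi, mods = ~ duration, data = dat, method = "DL")
summary(mr)  # the duration coefficient estimates the change in log odds ratio per extra hour
```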
Certainty of evidence Table 3 summarizes the details of the GRADE assessment of the certainty of the evidence for the primary and secondary outcomes. Table 3 Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) in included randomized controlled trials. Discussion Principal findings In this systematic review and meta-analysis of 17 trials, awake prone positioning was associated with a decreased risk of endotracheal intubation compared with usual care in adults with hypoxemic respiratory failure due to covid-19. The evidence of a reduction in endotracheal intubation with awake prone positioning was of high certainty, and the results were consistent across multiple sensitivity and bayesian analyses. On average, awake prone positioning resulted in 55 fewer intubations per 1000 patients (95% confidence interval 87 to 19 fewer intubations). However, awake prone positioning probably had little to no effect on mortality, ventilator-free days, ICU length of stay, hospital length of stay, or escalation of oxygen treatment or mode of oxygen delivery. Awake prone positioning is generally safe, with infrequent adverse events that include unintentional catheter dislodgement, discomfort, nausea, and skin breakdown. Comparison with other studies As this systematic review represents a large number of patients and trials, the precision of the effect estimates is increased. 10 12 Including a larger number of trials addresses a limitation of previously published meta-analyses, 10 12 particularly by preventing any one trial from being excessively weighted in a meta-analysis. We also used two complementary statistical approaches (frequentist and bayesian) that supported the robustness of the results. The use of a bayesian approach allowed integration of prior information with our pooled data to provide a clinically useful summary of this information. Specifically, the bayesian approach provides probabilities of a benefit (or harm) with awake prone positioning given the observed data across varying previous beliefs (priors) about its effectiveness. For example, the posterior probability of a relative reduction of at least 5% in endotracheal intubation was high (≥0.90) across all degrees of prior beliefs about its effectiveness, given the data. In contrast, the posterior probability of a 5% relative reduction in mortality was 0.93 only if the prior beliefs about the effectiveness of awake prone positioning were strong (ie, using an enthusiastic prior). Many clinicians and patients would consider a 5% reduction in endotracheal intubation or mortality clinically meaningful, particularly for a safe non-pharmacologic intervention. The findings in this review were robust across a variety of sensitivity analyses. The studies included in this systematic review differed from those in another recent meta-analysis, 12 which included a randomized trial by Gad. 38 We excluded that trial because it compared awake prone positioning without non-invasive ventilation with non-invasive ventilation, so the groups differed both by the presence of prone positioning and by mode of respiratory support. In contrast, a trial by Hashemian and colleagues was incorporated in our review as it included non-invasive ventilation in both the usual care and the prone positioning groups. 15 We planned a priori to include quasi-randomized trials in our analysis, anticipating that only a small number of eligible studies would be available for meta-analysis.
One quasi-randomized trial was identified, 16 in which allocation was based on patients' medical record numbers, with even numbers receiving usual care and odd numbers receiving awake prone positioning. Owing to the lack of concealed randomization, this study was assessed to be at high risk of selection bias. Although this quasi-randomized trial was not included in the primary analysis, when it was included in a sensitivity analysis the effect estimate did not change notably, further supporting the robustness of the results. The results of this meta-analysis are of considerable clinical relevance, as awake prone positioning is an inexpensive, non-pharmacological treatment that can be applied in a variety of hospital settings. In addition, awake prone positioning can be used in both low and middle income countries and high income countries, as shown by the geographic locations of the studies in this systematic review. Although we found no effect of awake prone positioning on mortality, a favorable effect cannot be excluded. Conversely, the reduction in the rate of endotracheal intubation was not associated with an increase in mortality, suggesting that patients were not put at risk by delaying intubation. Further supporting the safety of this intervention, the absolute rate of serious adverse events in the awake prone positioning group was low across trials. Also, downstream outcomes that could be associated with a reduction in endotracheal intubation, such as ventilator-free days and ICU and hospital length of stay, were not statistically different between groups. Nevertheless, the effect estimates were consistently in the direction favoring awake prone positioning, but with wide 95% confidence intervals. It may be that reducing intubation does not affect these outcomes, or that the smaller number of studies reporting these secondary outcomes limited the precision to detect small effect sizes. The mechanism by which awake prone positioning reduces endotracheal intubation remains uncertain. Adherence to a longer duration of prone positioning may be an effect modifier for the outcome of endotracheal intubation. It has been hypothesized that a longer duration of awake prone positioning may be more effective, similar to prone positioning in patients receiving invasive ventilation. 5 13 However, unlike patients receiving invasive ventilation who are placed in the prone position, awake patients are not sedated and are not receiving neuromuscular blocking agents. This key difference may explain why none of the included trials that specified target durations for awake prone positioning met the prescribed dose in their intervention group. The intervention may be limited by patient tolerance, as data suggest that awake patients may not cope well with long periods of prone positioning. 33 Although many patients can place themselves in a prone position, others may need encouragement or assistance to do so for longer durations, which may require the availability of staff or other resources. Dedicated teams can increase adherence to prone positioning for intubated patients, 39 40 but data on the utility of this approach for non-intubated patients are limited. Other strategies to improve adherence, such as smartphone based guidance and reminders, did not result in better adherence in one trial. 37 Thus, the benefits of awake prone positioning need to be weighed against the resources and staff needed to ensure safe adherence to the intervention.
Thus, it remains uncertain whether better adherence to a longer duration of awake prone positioning modifies the effect of the intervention. Our subgroup analysis suggested that in trials in which the median duration of awake prone positioning was ≥5 hours/day, the reduction in endotracheal intubation risk was relatively greater. However, the interaction test P value was not significant. Similarly, using meta-regression, the association between duration of awake prone positioning at the trial level and the effect size was not significant. Although these analyses suggest a potential association between duration of awake prone positioning and efficacy, they may be underpowered or confounded, since duration of prone positioning was not randomized, and they should be considered hypothesis generating. Even if an association exists between duration and efficacy, the optimal duration of awake prone positioning remains unknown. This question could be better evaluated in future randomized trials comparing various durations of prone positioning balanced against tolerability. In our other subgroup analyses, trials with more severe baseline hypoxemia, those performed in mixed hospital settings, and those performed in low to middle income countries tended to have larger effects. None of the interaction test P values were, however, significant, so we caution against over-interpretation of these findings. To most appropriately and efficiently allocate resources to deliver this intervention, future studies could aim to determine which patient subgroups, if any, benefit most from awake prone positioning. Strengths and limitations of this study This meta-analysis should be interpreted within the context of its limitations. First, although we explored potential effect modification in subgroup analyses based on trial level characteristics, the lack of individual patient data limited our ability to evaluate effect modification more precisely. For example, while many of the included trials spanned the pre-vaccine and post-vaccine eras of the pandemic, it is unknown whether covid-19 vaccination status modifies the effectiveness of awake prone positioning. This could not be evaluated with the available data, but effect modifiers could be better studied in an individual patient data meta-analysis. Second, owing to differences between the targeted and achieved duration of awake prone positioning across studies, we are unable to conclude whether there is an optimal duration of prone positioning for patients to benefit. Third, some of the planned analyses were limited because of heterogeneity in the definition and reporting of certain outcomes such as oxygenation, missing trial level data for some outcomes in the prospective meta-analysis, 11 or because only a few studies reported some outcomes, limiting precision and certainty. Fourth, the decision to intubate a patient can vary, with no fixed criteria. Factors influencing the decision to intubate were likely variable between providers and institutions and may have changed over the course of the pandemic. Despite this variability, there is high certainty in this finding given the wide range of study locations (14 trials conducted in 12 different countries), and it is further supported by a secondary bayesian analysis and multiple sensitivity analyses.
Finally, studies that are still in progress or were unpublished at the time this meta-analysis was completed might not be included and could influence the results. Given the size and number of studies included in this review, however, such an influence would be unlikely unless an unpublished study was large, had a large treatment effect, or multiple studies showed effects contrary to what we found. Strengths of this study include the adherence to quality standards for meta-analysis, use of GRADE to assess the certainty of evidence, and duplicate review of the search strategy and analysis for the primary outcome. This report includes a larger number of trials and patients than previous meta-analyses, uses rigorous sensitivity analyses to challenge the robustness of the primary analysis, and uses complementary preplanned bayesian analyses with a priori assumptions in addition to the traditional frequentist approach. Conclusions Awake prone positioning compared with usual care reduced the risk of endotracheal intubation in adults with hypoxemic respiratory failure due to covid-19. Evidence on the effects of awake prone positioning on mortality or other secondary outcomes was, however, inconclusive. Adverse events related to awake prone positioning were uncommon, highlighting the safety of this intervention. However, adherence to the target duration of prone positioning was low in many trials. Thus, clinicians and patients must balance the goal of avoiding endotracheal intubation with the tolerability of awake prone positioning and the availability of staff resources to encourage and assist patients. Future trials should aim to determine strategies to improve tolerability and adherence, assess the optimal duration of awake prone positioning, and determine the effect of awake prone positioning in hypoxemic respiratory failure from other causes. What is already known on this topic Awake prone positioning is an inexpensive, non-pharmacological treatment that can be applied readily and easily in a variety of hospital settings The effect of awake prone positioning in patients with covid-19 related hypoxemic respiratory failure on endotracheal intubation and other outcomes remains uncertain What this study adds In this systematic review and meta-analysis of 17 randomized trials, awake prone positioning for hypoxemic respiratory failure due to covid-19 reduced the risk of endotracheal intubation, but evidence for the effect on mortality or other outcomes was inconclusive Adverse events during awake prone positioning were uncommon and rarely serious Ethics statements Ethical approval Not required. Data availability statement No additional data available. Acknowledgments We thank Kate Nelson, Sarah Culgin, and Romina Soudavari for their logistical support and Erica Wright from Knowledge Resource Service for peer reviewing the search strategy. Derek K Chu is the recipient of an AAAAI Foundation Faculty Development Award. Footnotes Contributors: JW and KKSP contributed equally and are joint first authors. All authors conceived and designed the study, analyzed and interpreted the data, and critically revised the manuscript. JW, KKSP, ZA, KL, KS, and WA acquired the data. JW, KKSP, and WA drafted the manuscript. JW, KKSP, DC, AG, NS, and WA performed the statistical analyses. JW, KP, and WA are guarantors of the data and manuscript. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.
Funding: This study was supported by peer reviewed grants from the Canadian Institutes of Health Research and the University of Calgary Cumming School of Medicine Clinical Research Fund (CRF-COVID-202006). WA holds a McMaster University Department of Medicine Mid-Career Research Award. DJC holds a Canada Research Chair. The funders had no role in the study design; the collection, analysis, or interpretation of data; the writing of the report; or the decision to submit the article for publication. Competing interests: All authors have completed the ICMJE uniform disclosure form and declare: support from the Canadian Institutes of Health Research and University of Calgary Cumming School of Medicine Clinical Research Fund; no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. The lead authors (JW, KKSP, and WA) affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as originally planned (and, if relevant, registered) have been explained. Dissemination to participants and related patient and public communities: We plan to engage the public and patients with lay summaries of the study results, disseminated through social media, conferences, and newsletters. Provenance and peer review: Not commissioned; externally peer reviewed. This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited.
Patients admitted to the hospital with severe breathing difficulties due to COVID-19 are less likely to need a breathing tube if they lie face down in a prone position, but evidence for its effect on mortality or other outcomes is inconclusive, suggests an in-depth analysis of the latest evidence published by The BMJ today. Since the 1970s, prone positioning has been standard care for patients with severe acute respiratory distress syndrome, as it encourages a larger part of the lung to expand, so patients can take bigger breaths. Usually, it is done for critically ill patients who are sedated and intubated (breathing through a tube attached to a mechanical ventilator). But in February 2020, reports emerged that prone positioning of conscious patients with COVID-19 might also be helpful, and it was widely adopted. Since then, several studies have examined its effectiveness in conscious patients with COVID-19, but results have been conflicting. To try to resolve this uncertainty, researchers trawled databases for randomized trials comparing conscious prone positioning to usual care for adult patients with COVID-19 hypoxemic respiratory failure (a serious condition that develops when the lungs can't get enough oxygen into the blood). They found 17 suitable trials involving 2,931 non-intubated patients who were able to breathe without mechanical assistance and who spent an average of 2.8 hours per day lying prone. Twelve trials were at low risk of bias, three had some concerns, and two were at high risk, but the researchers were able to allow for that in their analysis. The main measure of interest was endotracheal intubation (a breathing tube inserted into the windpipe to allow mechanical ventilation). Other (secondary) outcomes included mortality, ventilator-free days, intensive care unit (ICU) and hospital length of stay, change in oxygenation and respiratory rate, and adverse events. High certainty evidence from a pooled analysis of 14 trials showed that conscious prone positioning reduced the risk of endotracheal intubation compared with usual care (24.2% with conscious prone positioning vs. 29.8% with usual care). On average, conscious prone positioning resulted in 55 fewer intubations per 1,000 patients. However, high certainty evidence from a pooled analysis of 13 trials evaluating mortality did not show a significant difference in mortality between the two groups (15.6% with conscious prone positioning vs. 17.2% with usual care), but the study may have lacked statistical power to detect a difference. Conscious prone positioning did not significantly affect other secondary outcomes either, including ventilator-free days, and length of stay in the ICU or hospital, based on low- and moderate-certainty evidence. The researchers acknowledge several limitations, such as lack of individual patient data, differences between the targeted and achieved duration of conscious prone positioning, and variation in the definition and reporting of certain outcomes across studies. But further sensitivity analysis supported these results, suggesting a high probability of benefit for the endotracheal intubation outcome and a low probability of benefit for mortality. As such, the researchers conclude, "Conscious prone positioning compared with usual care reduces the risk of endotracheal intubation in adults with hypoxemic respiratory failure due to COVID-19 but probably has little to no effect on mortality or other outcomes."
In a linked editorial, researchers point out that the benefits of prone positioning in patients with COVID-19 may be confined to those with more severe hypoxemia and longer duration of prone positioning, and they say it may be wise to focus efforts on these particular groups. Several unanswered questions remain, including the ideal daily duration of treatment, the level of hypoxemia that should prompt prone positioning, and how best to improve patient comfort and encourage adherence, they write. These questions may never be answered definitively in patients with COVID-19, as fortunately, far fewer are experiencing hypoxemic respiratory failure or critical illness, they explain. "The pandemic should, however, renew interest and encourage further evaluation of conscious prone positioning—an intervention that may benefit a wide range of patients with hypoxemia," they conclude.
10.1136/bmj-2022-071966
Biology
Ants—master manipulators for biodiversity, or sweet treats
Saori Watanabe et al. Ants improve the reproduction of inferior morphs to maintain a polymorphism in symbiont aphids, Scientific Reports (2018). DOI: 10.1038/s41598-018-20159-w Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-20159-w
https://phys.org/news/2018-02-antsmaster-biodiversity-sweet.html
Abstract Identifying stable polymorphisms is essential for understanding biodiversity. Distinctive polymorphisms are rare in nature because a superior morph should dominate a population. In addition to the three known mechanisms for polymorphism persistence, we recently reported a fourth mechanism: protection of the polymorphism by symbionts. Attending ants preferentially protect polymorphic aphid colonies consisting of green and red morphs. Here, we show that attending ants manipulate the reproductive rate of their preferred green morphs to equal that of the red morphs, leading to the persistence of the polymorphism within the colonies. Until now, however, it had not been explained how the ants maintain the polymorphism in aphid colonies in the face of inter-morph competition. Manipulation by symbionts may be important for the maintenance of polymorphisms and the resulting biodiversity in certain symbiotic systems. Introduction Many natural ecosystems exhibit fairly high biodiversity, and the reason for such high biodiversity is a fundamental question in ecology and evolutionary biology 1 because interspecific competition should exclude competing species 2 . However, natural communities consist of many competing species. A heritable genetic polymorphism is one such form of biodiversity for which persistence is difficult to explain. In this sense, the persistence of a polymorphism is a fundamentally important issue in ecology and evolution because it helps us understand the observed biodiversity in nature. Why does a polymorphism persist in a population? Previously, three major mechanisms were identified: (1) negative frequency-dependent selection, (2) balancing selection by two opposing factors, and (3) superdominance. Under negative frequency-dependence, any morph becomes advantageous when rare but disadvantageous when abundant 3 . Therefore, polymorphisms are stable because each morph is protected from extirpation. However, polymorphisms caused by two opposing factors are rather unstable because random processes (genetic drift) can easily lead to the extinction of any one morph. Therefore, an averaging mechanism that buffers against such random walks is necessary for these polymorphisms (e.g., spatial heterogeneity that yields patchy microhabitats) 4 . Superdominance is the phenomenon whereby the heterozygote shows higher fitness than either homozygote, so that two alleles are stably maintained in a population. Recently, we found another mechanism for the maintenance of color polymorphism in an ant-aphid symbiotic system. The aphid Macrosiphoniella yomogicola has established an obligate symbiotic relationship with ants that promotes the survival of the aphid because the attending ants provide protection from strong predation 5 . M. yomogicola exhibits a heritable color polymorphism with green and red morphs (Fig. 1 ) 5 . We found that the attending ant Lasius japonicus (Fig. 1 ) is attracted to and most strongly guards mixed aphid colonies (approximately 65% green morph) 5 . Because L. japonicus most strongly protects the mixed aphid colonies, the color polymorphism in the aphid is maintained in a population. In fact, a previous study has shown that polymorphic aphid colonies show lower mortality 6 . However, a stable polymorphism is not guaranteed in this symbiotic system. For example, if the morphs differ in competitive ability or growth rate, any aphid colony will soon lose the inferior morph.
Here, we use the term "competitive ability" in a broad sense that includes differences in the rates of growth and reproduction even if actual competitive interactions (e.g., depriving the other morph of a resource) do not occur between the morphs. Note that any such difference between morphs leads to the exclusion of the inferior morph. Surprisingly, nearly all colonies in the field consist of both green and red morphs. However, the mechanism that maintains these two morphs in a colony when the morphs have different reproductive rates has not been clarified. Figure 1 Red and green morphs of M. yomogicola with the attending ant L. japonicus . The top three aphids (small green and two large black) are green morphs, and the lower two aphids (small brown with an ant and a large brownish green) are red morphs. Photo by Ryota Kawauchiya. Ant-attended aphids are known to excrete high-quality honeydew when ants are present 7 , 8 . Ant attendance has a negative effect on the growth and reproduction of the attended aphids 9 . Therefore, trade-offs should occur between the quality of honeydew and the growth and fecundity of aphid individuals. Thus, if attending ants prefer the morph excreting high-quality honeydew, such trade-offs and the resulting competitive interactions are expected between the color morphs of M. yomogicola . The morph excreting high-quality honeydew is known to have a lower reproductive rate than other morphs 9 , 10 . This implies that if the attending ants prefer one morph, that morph is expected to excrete high-quality honeydew. Note that honeydew quality is known to vary between preferred and non-preferred morphs 11 . Therefore, the preferred morph is expected to decrease in number over time and eventually disappear from a colony, even though the ants prefer it. Despite this inferior trait, the morph preferred by ants is more likely to survive from spring to autumn, when M. yomogicola reproduces asexually, and into late autumn, when the aphid produces sexuparae. The mated sexual females lay eggs that will overwinter. Thus, strong attractiveness to ants is important for an aphid clone. However, at our field site, the two color morphs coexist on every host plant from June (the appearance of alates) to November (the end of the season). How can the two color morphs achieve long-lasting coexistence on the same host plant when competitive exclusion of the inferior morph is expected? This study aims to understand the long-lasting coexistence of both color morphs in every colony of M. yomogicola on host plants. In this study, we investigated the population growth and parthenogenetic reproductive rates of both morphs of M. yomogicola under several experimental settings. First, we compared the reproductive rates of monoclonal red and green morphs on several cloned host plants. Second, we examined the difference in reproductive rates between the color morphs using a single cloned host plant. Then, we measured the reproductive rate of each morph in mixed colonies free from predators, with or without attending ants. Finally, we tested the ants' preference for the color morphs. The results showed that attending ants improved the reproductive rate of their preferred green morph; without attending ants, the green morphs had a lower reproductive rate than the red morphs. Because of the ants' manipulation, the reproductive rate of the green morphs becomes equal to that of the red morphs.
This manipulation enables an aphid colony to maintain long-lasting coexistence of both morphs in a single colony on a host plant. Results Five clones each of both morphs were reared on one clonal shoot of a single host plant (mugwort: Artemisia vulgaris ). Without attending ants, the number of aphids increased quickly (7 days) but soon saturated, and then decreased because of the environmental deterioration caused by their own excreted honeydew (Fig. 2a ). This trend was more striking for the green morphs than for the red morphs (Fig. 2a ). We then compared the reproductive rates of both morphs for the first 4 records (7 days), which were not affected by the environmental deterioration. Model selection using the AIC identified a simple linear regression with log-transformed aphid numbers as the best model (Supplementary Table S1). The regression showed that the reproductive rate of red clones was significantly higher than that of green clones (Fig. 2b ). We then compared the reproductive rates of a red and a green clone on 5 different clones of the host plant. Again, on average, the reproductive rate of the red morphs was significantly higher than that of the green morphs on the five host plant clones (Fig. 2c ). Thus, the reproductive rate of the red morphs was significantly higher than that of the green morphs, irrespective of differences in aphid clones (Fig. 2a and b ) or host plant clones (Fig. 2c ). Figure 2 Reproductive rates of both red and green morphs of M. yomogicola without ant attendance. ( a ) Changes in the number of five different clones of green and red morphs reared on a single clone of the host plant. After seven days, both morphs decreased because of environmental deterioration caused by their own excreted honeydew. ( b ) Linear regressions of the log-transformed aphid numbers over time for each morph (the marks are the averages of the 5 clones) on a single host plant clone for the first 4 records (7 days) in Fig. 2a . The slope is steeper for the red morph than for the green morph (ANCOVA, t = 2.733, n = 6, P = 0.009), indicating that the red morph increased more rapidly than the green morph on the same host plant. ( c ) Linear regression of aphid numbers over time for a single clone of both color morphs on five different clones of the host plant. The slope is significantly steeper for the red morph than for the green morph (ANCOVA, t = −2.741, n = 5, P = 0.008), showing that the red clone increased more rapidly than the green clone, irrespective of variations in the host plant clones. The results of experiment II (effects of ant attendance on the reproductive rate of each morph in mixed aphid colonies) are shown in Fig. 3a–d . In mixed aphid colonies, the reproductive rates of the two morphs were affected differently by ant attendance (Fig. 3 ). The reproductive rate of the red morph did not change in the presence or absence of ants (Fig. 3a ; ANCOVA, F = 0.2049, df = 1, 26, p = 0.6546), whereas the reproductive rate of the green morph significantly increased when ants were in attendance (Fig. 3b ; ANCOVA, F = 9.5709, df = 1, 26, p = 0.0047). Under the ant-excluded conditions, the red morphs increased more rapidly than the green morphs (Fig. 3c ; ANCOVA, F = 3.094, df = 1, 26, p = 0.004), confirming the competitive superiority of the red morphs. However, this difference disappeared when ants were in attendance (Fig. 3d ; ANCOVA, F = 2.1007, df = 1, 26, p = 0.15919).
In addition, the results of another ant-removal experiment on wild mixed colonies supported the same conclusion, i.e., the reproductive rates were equalized under ant attendance (ANCOVA, F = 0.1881, df = 1, 14, p = 0.671), but the red morph showed a higher reproductive rate than the green morph in the absence of ants (ANCOVA, F = 30.7, df = 1, 18, p = 2.93e-5; Fig. 3e,f ). In conclusion, the attending ants equalized the reproductive rates of the two morphs such that the green morph became equally competitive with the red morph. Figure 3 Patterns in reproductive rates of both green and red morphs in mixed-color colonies in the field. ( a ) Reproductive rates of the red morphs with or without attending ants. No significant differences in the reproductive rates were detected with or without ants (ANCOVA, F = 0.2049, df = 1, 26, p = 0.6546). ( b ) Reproductive rates of the green morphs with or without attending ants. The green morphs with attending ants increased significantly faster than those without ants (ANCOVA, F = 9.5709, df = 1, 26, p = 0.006). ( c ) Under ant-absent conditions, the red morph showed a significantly faster increase than did the green morph (ANCOVA, F = 3.094, df = 1, 26, p = 0.004); ( d ) however, this difference disappeared when ants were present (ANCOVA, F = 0.002, df = 1, 26, p = 0.998). Note that each morph is likely to include multiple clones. ( e , f ) Reproductive rates of both the red and green morphs with and without ant attendance for a data set independent of ( a – d ). There is no difference between the rates of the red and green morphs under ant attendance ( e ; ANCOVA, F = 0.1881, df = 1, 14, p = 0.671), but when ants were removed the red morph showed a higher reproductive rate than the green morph ( f ; ANCOVA, F = 30.7, df = 1, 18, p = 2.93e-5). When wild ants were given a chance to select a red or a green monoclonal aphid colony, more ants per aphid selected the green colonies (Fig. 4 ). Another field data set supported the same conclusion. Using 51 mixed colonies collected during late summer in 2014, the number of attending ants showed a steeper regression slope on the number of green morphs (0.0793, t = 3.418, p = 0.0013) than on the number of red morphs (0.03345, t = 7.639, p = 6.83e-10) (for the difference, ANCOVA, F = 4.6983, df = 1, 98, p = 0.0326; Fig. 4b ). These results indicate that the attending ant ( L. japonicus ) prefers the green morph over the red morph. Figure 4 Relationships between the number of attending ants and the number of monoclonal aphids 4 days after the start of the experiment. ( a ) The slope of the regression line is significantly larger for the green morphs than for the red morphs (ANCOVA, t = 2.111, n = 17, P = 0.0432), indicating that more ants attended the green morphs when the number of aphids was equal. ( b ) An independent field sample showed the same result. In 51 mixed colonies collected in 2014, the number of attending ants had a steeper regression slope on the number of green morphs than on the number of red morphs (ANCOVA, F = 4.6983, df = 1, 98, p = 0.0326). Both results indicate that the green morph is more attractive to ants than the red one. Discussion This study demonstrates that a mutualistic species actively maintains a polymorphism in its symbiont partners. The population-growth experiments showed that the red morph increased faster than the green morph, irrespective of variations in aphid or host plant clones (Fig. 2 ), and that the red morph had a higher reproductive rate than the green morph.
Accordingly, the red morph should dominate within a mixed colony on a shoot of the host plant. However, in the field, most aphid colonies are mixed and contain both red and green morphs 5 . Our results showed that attending ants equalize the reproductive rates of the two morphs, neutralizing the competitive superiority of the red morph (Fig. 3a–f ). Because the two independent data sets gave the same result, this conclusion is robust. In ant-attended aphids, a tradeoff occurs between the quality of excreted honeydew and the fecundity of individuals 8 . This tradeoff may occur in M. yomogicola because the ants preferred the green morph over the red one (Fig. 4 and the above result), although the former showed lower reproductive rates than the latter without ant attendance (Fig. 2a,c ). The attending ants likely improved the reproductive rate of the preferred green morph (Fig. 3b ) to obtain a large amount of high-quality honeydew. Improved reproductive rates of ant-attended aphids have previously been reported 11 , 12 . In the current ant-aphid system, ants preferred mixed colonies with 65% green morphs 5 , and the ants neutralized the competitive (clonal reproduction) inferiority of the green morph to maintain constant proportions of each morph (current results). It is unclear why the ants do not remove the red morph from this symbiotic system. For attending ants, an increase in the less-valuable red morphs (with low-quality honeydew) should not be a preferable condition. In fact, L. niger (a synonym of L. japonicus ) selectively preys on aphid individuals that excrete less honeydew in symbioses with other aphid species 13 . Ant workers ( L. japonicus ) should be able to discriminate between the morphs of M. yomogicola because they increased the number of green morphs only in the mixed colonies (Fig. 3a–d ). The workers also preferred mixed colonies with ca. 65% green morphs 5 . In a mixed colony, the ants increased the reproductive rate of only the green morphs. Ants are known to recognize opponents by the cuticular hydrocarbons on their body surface 14 , 15 , 16 , 17 , 18 . Therefore, in the current system, the ants' recognition of green and red morphs is suspected to be based on differences in cuticular hydrocarbons, because the ants selectively manipulate the reproductive rate of the green morphs. In addition to their preference for mixed colonies containing approximately 65% green morphs 5 , L. japonicus workers must have manipulated the aphid morphs to maintain the long-lasting coexistence of both morphs in every aphid colony. Therefore, the ants must have a reason for maintaining the red morph in their attended aphid colonies. One possible reason for maintaining the red morph concerns the sexual production of overwintering eggs (the source of next year's stem mothers). Most M. yomogicola colonies disappear after inflorescence budding in the host plants, before the aphid colonies have been able to sexually produce the overwintering eggs. The red morphs may be better able to suppress the development of flower buds in host plants, allowing their colony to survive and produce sexuparae in late autumn. A pure green colony may fail to produce sexuparae for laying overwintering eggs. Therefore, the red morph may be important for maintaining the attended aphid colonies as a honeydew resource for the ants in the next year.
Because L. japonicus is a perennial species, the persistence of aphid colonies into the following year guarantees the presence of an available resource. In this case, the ants invest in a future benefit by sacrificing a present benefit. This behavior would maximize the lifetime reproduction of sexual alates (=fitness) of a L. japonicus colony. This hypothesis is currently being tested, and if it is supported, each of the three participants in this symbiotic system would receive fitness benefits from long-lasting coexistence with genetically heterogeneous partners. Few studies have focused on the effects of mutualism in community ecology 19 . Mutualism may contribute to the origin and maintenance of biodiversity 20 , 21 . For example, the extreme biodiversity of trees in tropical rainforests may be mediated by repeated speciation of tree-symbiont (animal seed disperser) systems during glacial periods 22 . Here, we report another case in which symbiosis actively maintains polymorphic diversity, in aphids. Methods Study organisms Macrosiphoniella yomogicola is an aphid that colonizes mugwort ( Artemisia vulgaris ). Several color morphs occur in this aphid 5 , although in our study area (the campus of Hokkaido University in Sapporo, Hokkaido, Japan), two morphs (red and green) are usually found. In early May in Sapporo, a stem mother hatches from an overwintered egg that was laid by sexual reproduction the previous autumn. She produces clonal offspring by asexual reproduction, and her offspring continue asexual reproduction until autumn. During this period, the aphids inherit their body color. Although M. yomogicola colonies are always attended by several species of ant 6 , we only used colonies with the attendant ant Lasius japonicus because most aphid colonies in our study area are attended by this species. Experiment I: Difference in reproductive rates between the two color morphs I-1. In May 2014, we reared one red adult and one green adult on separate mugwort shoots of the same plant clone without ants in the greenhouse of our facility. Each shoot was covered with a nylon mesh (30 × 20 cm) to protect the colonies from predators. The number of aphids was recorded once every 2 or 3 days. This process was repeated for five different clones of both red and green morphs on a single host plant clone, and the results were averaged for each morph (Fig. 2a ). I-2. From June to September 2014, five red and five green clones were reared independently on 10 shoots of the same host plant clone without ants at the same location as experiment I-1. Predators were excluded using the same method as in I-1, and the number of aphids was recorded once every 3 days (Fig. 2b ). I-3. From May to August 2016, a red clone and a green clone were each reared on a shoot of the same host plant clone with ants, resulting in a large clonal colony. From the clone pool of each morph, 3 adult aphids were transferred to a shoot on each of 5 host plant clones. We recorded the number of aphids on each shoot, without ants, once every 3 days (Fig. 2c ). Predator exclusion was conducted in the same way as in the above two experiments. Experiment II: Equalization of the reproductive rates of the two morphs by ant attendance 2-1. In June 2014, thirty host plants parasitized by mixed aphid colonies (containing both morphs) were selected, and 15 of the thirty plants were given ant attendance by connecting each plant to a randomly selected mugwort that had an ant-attended aphid colony in the field.
One pair of shoots was also selected randomly. One shoot remained with ant attendance, and the other shoot was rubbed with a sticky liquid (Tanglefoot®) at its base to remove attending ants. The latter shoot was covered with nylon mesh to protect it from predators. The number of each morph was recorded for all shoots before the experiment and was recounted 3 days later (Fig. 3 ). The two sets of numbers were plotted separately for each color morph, and the slopes of the linear regressions were compared by treatment to determine whether the population growth of the two morphs differed with and without ant attendance. 2-2. On 23 July and 24 September 2016, we randomly selected 14 and 8 mugwort shoots, respectively, that were parasitized by mixed aphid colonies in the study area. For both groups of shoots, the numbers of both morphs were recorded on the start day (the above dates). For the former 14 shoots, we removed the attending ants by rubbing Tanglefoot® onto the base of the mugwort stems, and the shoots were covered with nylon-mesh bags to protect the aphid colonies from predation. After 3 days, the shoots were transported to the laboratory within the bags, and the numbers of both morphs were counted under a binocular microscope (Olympus 243702; Olympus, Tokyo, Japan). For the latter 8 shoots, ant attendance was maintained, and the numbers of both morphs were recorded again 3 days later. For both treatments, we regressed the numbers of each morph at the end of the experiment on the numbers at the start. Then, the slopes of the regression lines of each morph were compared by ANCOVA for each data set. Experiment III: Ant preference for each morph 3-1. From June to September, 19 clonal shoots were prepared in plastic pots (15 × 20 cm), and each shoot was parasitized with a clone of one color morph. In addition, 19 pairs of pots were prepared with monoclonal colonies of each color morph. Nineteen wild host plants with ant-attended aphid colonies were selected randomly, and a pair of pots with different color morphs was connected to each wild host plant using a long thin bamboo stick to enable the attending ants of the wild host plant to reach the connected reared monoclonal colonies. The numbers of aphids and attending ants were counted on each shoot 4 days later. In one pair, the green morph had been extirpated within 4 days; therefore, the data for this pair were removed from the analysis. In addition, in another pair, a green colony attracted too few ants and did not show a normal distribution of symbiont ants and aphids (P = 0.0001). Because the data for this pair were abnormal, they were also removed from the analysis. One colony included far more red morphs (749 individuals) than the other colonies (1–149 individuals), although the aphids/ants ratio (for which we calculated the linear regression) of this colony (0.03204) was within the lower 95% limit (0.00154) of the F distribution (df = 1,1). Thus, this colony was not considered an abnormal statistical outlier, and these data were included in the analysis. As a result, we examined the ants' preference using 17 pairs of pots (Fig. 4 ). 3-2. We analyzed another set of field samples to confirm the ants' preference for the green morph. During late summer in 2014, 51 mixed colonies including both morphs were sampled, with their attending ants, from the study area. The numbers of attending ants and of red and green aphids were recorded for each shoot.
The number of ants was regressed on the number of red or green aphids, and the slopes were compared by ANCOVA. Statistics All statistical analyses were performed in R ver. 3.2.1.
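As an illustration of the slope comparisons used throughout these analyses, the following is a minimal sketch of an ANCOVA-style test written in Python with statsmodels (the authors performed their analysis in R 3.2.1). The data frame is fabricated and the growth rates are arbitrary assumptions; the morph-by-day interaction term tests whether the log-scale growth slopes of the two morphs differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = np.tile(np.arange(0, 8, 2), 10)            # census days 0, 2, 4, 6
morph = np.repeat(["green", "red"], 20)
slope = np.where(morph == "red", 0.45, 0.30)      # assumed per-day growth rates
log_n = np.log(3) + slope * days + rng.normal(0, 0.15, days.size)
df = pd.DataFrame({"day": days, "morph": morph, "log_n": log_n})

# The morph:day interaction tests whether the regression slopes
# (reproductive rates on the log scale) differ between morphs.
fit = smf.ols("log_n ~ day * morph", data=df).fit()
print(fit.summary().tables[1])
print("slope difference p-value:", fit.pvalues["day:morph[T.red]"])

A significant interaction corresponds to the paper's ant-absent result, in which the red morph's slope is steeper; refitting on ant-attended data with equal underlying slopes would yield a non-significant interaction.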
Symbiotic ants manipulate aphid reproduction rates to achieve a specific mix of green and red aphids, maintaining the inferior green aphids which produce the ants' favorite snack. Ants and aphids coexist in a symbiotic relationship that benefits both species. Ants protect aphids from predators, such as ladybugs and wasps, and aphids secrete nutritious honeydew for ants to eat. The aphid species Macrosiphoniella yomogicola comes in two "morphs" with distinct colors: red and green. When there is more than one physical trait of the same species, it is called polymorphism. Typically, competition for survival would lead to one morph dominating and the others disappearing from the gene pool. However, this rule can be broken in a few circumstances, including if an ant benefits from maintaining a mixture of color morphs. Previously, Associate Professor Eisuke Hasegawa of Hokkaido University and his colleagues had determined that Lasius japonicus ants prefer the nutrient-rich honeydew produced by green morphs. They also found that ants were most attracted to and most vigorously protected colonies with 65 percent green and 35 percent red aphids. In a new study published in the journal Scientific Reports, Hasegawa and his students, including Saori Watanabe, investigated how population growth of aphid morphs differs with or without the presence of ants. They found that ants actively manipulate morph populations by improving the reproduction rate of the inferior morph. In field experiments without the ants' presence, the red morphs had a much higher reproduction rate than the green morphs. Thus, red aphids should dominate. However, when ants were introduced to the experiment, the green morph reproduction rate equalized with the red morphs. The experimental evidence matches what researchers find in the wild: red and green morphs coexisting on the same plant shoots attended by ants. What remains a mystery is this: if the ants prefer the green morphs' honeydew, why keep the red morphs around at all? Hasegawa explains, "We theorize that the red morphs are able to provide a benefit that the green morphs can't, such as suppressing the development of flower buds on host plants. This might help both the red and green aphids survive and reproduce throughout more of the year, which could maximize the long-term harvest of honeydew from the green aphids." "In this case, the ants invest in a future benefit by sacrificing the present benefit," the researchers hypothesize. They plan to test this hypothesis next.
10.1038/s41598-018-20159-w
Chemistry
Self-templating, solvent-free supramolecular polymer synthesis
Zhen Chen et al., Solvent-free autocatalytic supramolecular polymerization, Nature Materials (2021). DOI: 10.1038/s41563-021-01122-z Journal information: Nature Materials
http://dx.doi.org/10.1038/s41563-021-01122-z
https://phys.org/news/2022-02-self-templating-solvent-free-supramolecular-polymer-synthesis.html
Abstract Solvent-free chemical manufacturing is one of the long-awaited technologies for addressing the pressing issue of environmental pollution. Here, we report solvent-free autocatalytic supramolecular polymerization (SF-ASP), which provides an inhibition-free, template-assisted catalytic organic transformation that takes great advantage of the fact that the product (template) undergoes a termination-free nucleation–elongation assembly (living supramolecular polymerization) under solvent-free conditions. SF-ASP allows for the reductive cyclotetramerization of hydrogen-bonding phthalonitriles into the corresponding phthalocyanines in exceptionally high yields (>80%). SF-ASP requires the growing polymer to form hexagonally packed crystalline fibres, which possibly preorganize the phthalonitriles at their cross-sectional edges for their efficient transformation. With metal oleates, SF-ASP produces single-crystalline fibres of metallophthalocyanines, again in exceptionally high yields, which grow in both directions without terminal coupling until the phthalonitrile precursors are completely consumed. By taking advantage of this living nature of the polymerization, multistep SF-ASP without/with metal oleates allows for the precision synthesis of multi-block supramolecular copolymers. Main Considering the emerging environmental issue caused by plastic waste 1 , supramolecular polymers are promising candidates for next-generation materials because their intrinsically dynamic nature possibly allows for excellent recyclability and recombinant usage 2 , 3 . Historically, supramolecular polymerization has been extensively studied in solution, and its mechanistic interpretation has been greatly elaborated in the last decade 4 , 5 , 6 . Nevertheless, from the viewpoint of practical applications, supramolecular polymerization under solvent-free conditions is considered more advantageous, because the produced superstructures can be directly used as they are without losing their structural integrity. Needless to say, this process is also preferred for realizing a sustainable society 7 , 8 . It should be noted that, in the early days of research on supramolecular polymerization, noncovalent polymeric structures were reported to form from hydrogen-bonding (H-bonding) molecules in their crystalline and liquid-crystalline assemblies 9 , 10 , 11 . However, because of a preconception that molecular assembling events are difficult to control under solvent-free conditions, most interest in supramolecular polymerization moved to solution processes 12 , 13 , 14 . Here, we report solvent-free 'autocatalytic' supramolecular polymerization (SF-ASP). Together with solvent-free chemical manufacturing, the concept of autocatalysis, which is inspired by nature, is a long-awaited green technology because of its potentially high selectivity and efficiency. A chemical reaction can be called 'autocatalytic' if a product of the reaction serves to catalyse its own formation. Ideally, autocatalytic chemical reactions show a sigmoidal time-course profile for the change in product concentration because the product facilitates its own formation 15 . Autocatalysis has been implicated in the emergence of life 16 , 17 and is intrinsic to many biological processes, such as the self-replication of biomolecules 18 , 19 , 20 .
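The sigmoidal profile mentioned above follows directly from the simplest autocatalytic rate law. Below is a minimal numerical sketch in Python; the rate constant and initial fraction are arbitrary illustrative values, not fitted to any data in this paper. For A → P with rate k[A][P], the product fraction x obeys the logistic equation dx/dt = kx(1 − x).

import numpy as np
from scipy.integrate import solve_ivp

k = 1.5          # autocatalytic rate constant (arbitrary units, assumed)
x0 = 1e-3        # tiny initial product fraction (e.g. the first nuclei)

# Integrate dx/dt = k * x * (1 - x) over an arbitrary time window.
sol = solve_ivp(lambda t, x: k * x * (1 - x), (0, 15), [x0],
                t_eval=np.linspace(0, 15, 8))
for t, x in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.2f}  product fraction = {x:.4f}")
# Output shows a slow induction period, abrupt growth, then saturation:
# the sigmoidal profile expected when the product catalyses its own formation.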
As a general strategy for autocatalytic chemical transformation, the product is designed to serve as a template (T) that can reversibly preorganize reactants A and B in the form of a ternary complex [A•T•B], which facilitates the reaction between A and B to produce T•T (refs. 21 , 22 ). However, if T•T does not serve as the template, the expected autocatalytic behaviour will not emerge unless T•T dissociates into monomeric T. This is called 'product inhibition'. In 2010, Otto and coworkers reported seminal work showing that, in the reversible oxidative cyclization of a dithiol having a β -sheet-forming peptide spacer, macrocyclic products with certain ring sizes, such as cyclic hexamers and heptamers, are selectively produced with a typical sigmoidal time-course profile if a shear force is continuously applied to the reaction mixture 23 . Why does the shear force have to be applied? In this case, the selectively produced macrocycles serve to template the macrocyclization. However, they tend to stack into nanofibres in the reaction medium, and such nanofibres also tend to combine at their cross-sectional edges. Consequently, the concentration of active templates decreases, thereby hampering the autocatalytic process. However, if a shear force is continuously applied, the nanofibres can be cut into numerous short pieces with active edges for templating the reaction 24 . The decrease in active template concentration due to product inhibition and/or template aggregation is an essential problem in nonbiological organic autocatalysis and usually makes the expected sigmoidal time-course profile very obscure 25 , 26 , 27 . Otto and coworkers successfully avoided this problem. However, because the oxidative cyclization they used is intrinsically reversible, they have also suggested the possibility that the observed selectivity might be partly due to an equilibrium shift caused by the removal of preferred products as nanofibres 23 . In contrast to the organic autocatalysis described above, some inorganic nanocrystals are known to form in an inhibition-free autocatalytic manner, where precursor metal ions in aqueous media preferentially adsorb onto the surface of seed nanocrystals and are electronically reduced to become part of the growing nanocrystals 28 , 29 . How can inhibition-free organic autocatalysis be achieved? Autocatalysis generally requires high dilution to avoid the assembly of products that act as templates. This creates a high barrier for its practical application to the large-scale manufacturing of chemicals. In contrast, self-replication events in living organisms usually operate far from thermodynamic equilibrium and, along this line, artificial out-of-equilibrium systems using chemical fuels 30 , chemical oscillations 31 , kinetic trapping 32 , 33 and microfluidic diffusion 34 as driving forces have recently been investigated. The SF-ASP (Fig. 1a ) reported here is unique because the monomer is autocatalytically produced in situ from its precursor in very high yield. The monomers for SF-ASP are phthalocyanine ( H PC C n ) derivatives (Fig. 1b , left), which are produced via reductive cyclotetramerization of their phthalonitrile (PN C n ) precursors (Fig. 1c ), which adopt a fan shape with H-bonding amide groups.
In SF-ASP, the H PC C n monomer, if any is produced, nucleates and initiates H-bonding-mediated supramolecular polymerization to give one-dimensional (1D) single-crystalline fibres, which possibly preorganize PN C n molecules via an H-bonding interaction at the cross-sectional fibre edges and efficiently template their reductive cyclotetramerization to give H PC C n in an autocatalytic manner (Fig. 1a ). When SF-ASP is conducted in the presence of metal oleates, metal complexes of phthalocyanines ( M PC C4 ; Fig. 1b , right) form exclusively in an autocatalytic manner without contamination by their free bases. Fig. 1: Autocatalysis driven by solvent-free supramolecular polymerization. a , Schematic illustration of the concept of SF-ASP. The target product, if any is formed, nucleates and initiates supramolecular polymerization via a noncovalent interaction, affording 1D single-crystalline fibres. The cross-sectional fibre edges likely preorganize precursors of the monomers and efficiently promote their chemical transformation in an autocatalytic manner. Terminal coupling of fibres, which would attenuate the autocatalytic process, is suppressed owing to their sluggish diffusion under solvent-free conditions. b , Chemical structures of phthalocyanine ( H PC C n ) derivatives (left) and their metal complexes ( M PC C4 ) with Zn, Fe, Co and Cu (right) obtained by SF-ASP using PN C4 . c , Chemical structures of fan-shaped dithioalkylphthalonitrile (PN C n ) derivatives as precursors of the monomers for SF-ASP. We serendipitously found the basic principle of SF-ASP (Figs. 1a and 2a,b ) during a study on the ferroelectric nature of H-bonding PN derivatives 35 , where green-coloured thin fibres formed and elongated on heating liquid-crystalline PN C4 (Fig. 1c ) on a hot stage. As described in Methods , a powdery sample of PN C4 , sandwiched between glass plates, was heated to a hot melt and kept at 160 °C for 15 h. Approximately 4 h after heating, numerous green-coloured thin fibres began to appear and then spread throughout the sample and elongated abruptly (Fig. 2c and Supplementary Video 1 ). By matrix-assisted laser desorption ionization time-of-flight (MALDI–TOF) mass spectrometry (Fig. 2d , black), we found that the fibres obtained after 24 h were composed of phthalocyanine H PC C4 (Fig. 1b ), while no precursor PN C4 was detected. This crude product, when simply washed with methanol, gave analytically pure H PC C4 (Supplementary Fig. 1 ). A change in the absorption intensity at 700 nm assignable to H PC C4 (Supplementary Fig. 1d ) clearly showed a sigmoidal time-course profile (Fig. 2a , black). Further systematic studies revealed that SF-ASP using PN C4 is considerably affected by the reaction temperature (Supplementary Figs. 2 – 4 ). As shown in Fig. 2b and Supplementary Table 1 , when the temperature of SF-ASP was elevated from 160 to 190 °C, the yield of H PC C4 after 24 h was considerably enhanced, from 53 to 83%. This value is far better than those reported for the ordinary solution-phase synthesis of H PC derivatives (20–25%, Supplementary Methods ). However, when the temperature was elevated further, the yield of H PC C4 started to drop owing to the formation of a considerable amount of side products. Similar to the case of PN C4 , SF-ASP using PN C3 , which carries shorter hydrocarbon side chains (Fig. 1c ), showed a sigmoidal kinetic profile (Fig. 2a , green), affording thin fibres of H PC C3 (Figs. 1b and 2e ) in high yield (87%) and with high selectivity (Fig. 2d , green).
Fig. 2: Characterization of SF-ASP. a , Time-dependent absorption (Abs.) spectral changes at 700 nm of the reaction mixtures obtained by SF-ASP using PN C3 (green, 190 °C), PN C4 (black, 160 °C), PN C5 (orange, 160 °C), PN C6 (blue, 160 °C) and PN C4 N -Me (red, 160 °C), sandwiched between glass plates on heating, where SF-ASP displayed a sigmoidal time-course feature. b , Weight fractions of H PC C4 (green bar) and PN C4 (black bar) as well as that of side products (orange bar) formed by SF-ASP using PN C4 on heating at different temperatures for 24 h. c , Optical images of the reaction mixture obtained by SF-ASP using PN C4 on heating at 160 °C (Supplementary Video 1 ). Scale bars, 100 µm. d , MALDI–TOF mass spectra of the reaction mixtures obtained by SF-ASP using PN C3 (green, 190 °C), PN C4 (black, 190 °C), PN C5 (orange, 160 °C), PN C6 (blue, 160 °C) and PN C4 N -Me (red, 160 °C) after heating for 24 h. e , Optical images of the reaction mixtures of SF-ASP using PN C3 (190 °C), PN C5 (160 °C), PN C6 (160 °C) and PN C4 N -Me (160 °C) after heating for 24 h. Scale bars, 100 µm. We likewise heated PN C5 and PN C6 , which contain longer hydrocarbon chains than PN C4 (Fig. 1c ), but did not observe any autocatalytic feature (Fig. 2a , orange and blue, respectively). For example, the reaction mixture of PN C6 on heating at 160 °C was entirely green with no fibrous assembly (Fig. 2e ). MALDI–TOF mass spectrometry of the reaction mixture (Fig. 2d , blue) showed poor selectivity for H PC C6 (Fig. 1b and Supplementary Table 2 ). Although heating PN C5 at 160 °C resulted in the formation of short green fibres (Fig. 2e ), the selectivity for H PC C5 (Figs. 1b and 2d , orange) was moderate. Accordingly, compared with the successful examples of SF-ASP using PN C3 (Fig. 2e ) and PN C4 (Fig. 2c ), the number of produced fibres was smaller and their structural integrity was lower (Fig. 2e ). When N -methylated PN C4 N -Me (Fig. 1c ) was heated to 160 °C, no fibrous assembly appeared (Fig. 2e ) and, accordingly, no autocatalytic feature emerged (Fig. 2a , red), affording H PC C4 N -Me (Fig. 1b ) with very poor selectivity (Fig. 2d , red and Supplementary Table 2 ). A study using polarizing optical microscopy (POM) (Fig. 3a ) revealed that the as-formed H PC C4 fibres obtained by SF-ASP were highly crystalline. Powder X-ray diffraction (PXRD) analysis (Fig. 3b ) of the crystalline fibres of as-formed H PC C4 , denoted hereafter as [ H PC C4 ] CF (CF refers to crystalline fibre), displayed intense diffraction peaks that were indexed to a hexagonally packed columnar assembly (Fig. 3b , inset). Notably, through-view two-dimensional (2D) small-angle X-ray scattering (SAXS) analysis of a single fibre of [ H PC C4 ] CF (Fig. 3c ) revealed a single-crystal-like pattern, where spot-type reflections assignable to the (100), (110) and (300) planes of the hexagonal geometry appeared only in the direction perpendicular to the c axis of the crystalline lattice. This result demonstrates that the crystalline lattice of [ H PC C4 ] CF aligns along the longer axis of the fibre (Fig. 3b , inset). Its selected-area electron diffraction (SAED) pattern (Fig. 3d ) displayed two symmetric spots that were indexed to the (001) plane in the direction along the longer axis of the fibres, suggesting that each column comprises a cofacial π -stack of H PC C4 (Fig. 3e ). Likewise, in the polarized Fourier transform-infrared (FT-IR) spectra of [ H PC C4 ] CF (Fig. 3f ),
the stretching vibrations due to the N–H (3,282 cm −1 ) and C=O (1,620 cm −1 ) groups both showed their maximum absorbance in the direction parallel ( θ = 0°) to the longer axis of the fibre. Consistent with the SAXS (Fig. 3c ) and SAED (Fig. 3d ) patterns described above, this dichroic feature indicates that the H-bonded amide groups align unidirectionally along the supramolecular polymer chain (Fig. 3e ). Nanomechanical analysis of [ H PC C4 ] CF (Supplementary Fig. 5 ) revealed that its elastic modulus (2.1 ± 0.4 GPa) and hardness (91 ± 5 MPa) were within the reported values for organic polymers 36 . Fig. 3: Characterization of [ H PC C4 ] CF obtained by SF-ASP. a , POM image of as-formed [ H PC C4 ] CF by SF-ASP, after washing with methanol at 25 °C. White arrows represent transmission axes of the polarizer (P) and analyser (A). Scale bar, 100 µm. b , PXRD pattern of as-formed [ H PC C4 ] CF by SF-ASP, after washing with methanol at 25 °C (Miller indices in parentheses) and schematic illustration of its columnar order with a 2D hexagonal geometry. c , Through-view 2D SAXS pattern of a single fibre of [ H PC C4 ] CF (inset; scale bar, 100 µm) obtained by SF-ASP, after washing with methanol at 25 °C (Miller indices in parentheses). The circle in the inset represents the area exposed to the X-ray beam. d , SAED pattern of a single fibre of [ H PC C4 ] CF obtained by SF-ASP, after washing with methanol at 25 °C. The c axis of the crystalline lattice is parallel to the longer axis of the fibre, while the ab plane is perpendicular to it. e , Wireframe representation of a possible structure of [ H PC C4 ] CF , where hydrogen atoms and side chains are omitted for clarity. Red broken lines denote the H-bonding interaction of the amide units. f , Polarized FT-IR spectra at different azimuthal angles ( θ ) from 0° to 90° of a single fibre of [ H PC C4 ] CF (inset; scale bar, 100 µm) obtained by SF-ASP, after washing with methanol at 25 °C. θ is defined as 0° when the polarizing direction of the incident light (P) is parallel to the c axis of the crystal. Except for H PC C4 N -Me , chromatographically isolated H PC C n ( n = 3–6), on being slowly cooled from their hot melts, all assembled into a hexagonal columnar structure ([ H PC C n ] COL ; Supplementary Fig. 6 ). As expected, the intercolumnar distance increased with the length of the hydrocarbon side chains. We also found that their melting behaviours differ from one another and possibly affect the SF-ASP profile. For SF-ASP to occur properly, the precursor PN C n should be in a hot melt, while the produced H PC C n must be in the form of hexagonally packed crystalline fibres [ H PC C n ] CF . By means of differential scanning calorimetry (DSC) (Supplementary Fig. 7 ), we evaluated the thermal behaviours of PN C n ( n = 3–6) as well as those of the corresponding columnar assemblies [ H PC C n ] COL ( n = 3–6). The observed thermal behaviours were consistent with the optical images shown in Fig. 2c,e . The crystalline fibres of [ H PC C3 ] CF and [ H PC C4 ] CF are thermally stable enough to survive in the hot melts of PN C3 and PN C4 , respectively, and promote SF-ASP. However, once their fibrous crystalline features were lost due to overheating, the autocatalytic activity could no longer be retrieved even though the temperature was properly readjusted later.
Note that SF-ASP is difficult or impossible with PN C5 and PN C6, because the resulting H PC C5 and H PC C6 do not assemble into thermally stable crystalline fibres. As illustrated in Fig. 1a, the cross-sectional fibre edges of [H PC C4]CF may preorganize the PN C4 molecules via H-bonding interactions, thereby allowing them to reductively cyclotetramerize into H PC C4 efficiently. A new set of PN C4 molecules can then be preorganized likewise on the newly formed cross-sectional fibre edges. Repetition of this sequence of elementary steps should result in elongation of [H PC C4]CF. Considering also that the non-fibre area of the reaction mixture remains colourless (Fig. 2c), the newly produced H PC C4 probably stays attached to the cross-sectional edges of [H PC C4]CF. As described above, [H PC C4]CF emerged 3–4 h after heating (Supplementary Video 1) and then continuously elongated and increased in number and thickness over an additional 4 h (Supplementary Fig. 8a,b). In the subsequent stage, the formation of new crystalline fibres subsided, but the preformed fibres continued to thicken, increasing the total cross-sectional area and thereby promoting the autocatalytic transformation of PN C4 into H PC C4 (Supplementary Fig. 8c). In other words, H PC C4 was produced continuously, without product inhibition at the cross-sectional fibre edges, until PN C4 was completely consumed. Equally important, terminal coupling of the crystalline fibres, which would decrease the total cross-sectional area available for templating the reaction, barely occurred in SF-ASP, most likely because of their very sluggish diffusion in the hot melt of PN C4 under solvent-free conditions. Analogous to other reported examples of organic autocatalysis 15, 24, the transformation of PN C4 into H PC C4 in the fibre-elongation stage (7–11 h) followed pseudo-first-order kinetics (Supplementary Fig. 9), and no intermediate species were suggested by tracking with FT-IR and electronic absorption spectroscopy (Supplementary Fig. 10). In support of the template-assisted mechanism, when PN C4 containing separately prepared crystalline fibres of [H PC C4]CF as seeds was heated to 160 °C, the fibres immediately elongated (Supplementary Video 2) without any induction period (Supplementary Fig. 11a). In relation to the mechanism of SF-ASP, we found that H PC C4 N-Me, bearing N-methylated amide units in its side chains, interferes with SF-ASP. For example, when PN C4 containing H PC C4 N-Me (20 wt%, 5 mol%) was heated to 160 °C, fibrous [H PC C4]CF was not produced within 10 h (Supplementary Fig. 11b). The same held true even when fibrous [H PC C4]CF was used as the seed; neither the selective formation of H PC C4 nor the elongation of [H PC C4]CF took place (Supplementary Fig. 11c). Here, H PC C4 N-Me most likely adsorbs onto the active edges of the nuclei or crystalline fibres and interferes with the preorganization of PN C4 for its autocatalytic transformation into H PC C4. Although the mechanism has yet to be clarified 37, the cyclotetramerization of PN C4 into H PC C4 is an H+-mediated reductive process: 4 PN C4 + 2H+ + 2e− → H PC C4. The solution-phase synthesis of phthalocyanines is therefore often conducted in protic solvents such as alcohols. For SF-ASP, we consider that surface silanol groups on the glass substrate may play a similar role to that of alcohols.
Although the number of surface silanol groups is limited, a siloxane bridge (≡Si–O–Si≡) has been reported to cleave homolytically on heating above 160 °C, producing the radical species ≡Si• and •O–Si≡, which in turn react with a water molecule to generate H+ and e− (refs. 38, 39). This process is considered essential for the H+-mediated reductive cyclotetramerization of PN C4. Accordingly, when PN C4 was freeze-dried and used for SF-ASP in a dry N2 atmosphere, the yield of H PC C4 dropped notably from 81 to 32% (Supplementary Fig. 12). SF-ASP also works for the selective synthesis of metallophthalocyanines, which have higher potential for practical applications than free-base phthalocyanines 40, 41, 42. Examples in the present work include zinc (Zn), iron (Fe), cobalt (Co) and copper (Cu) phthalocyanines (M PC C4, Fig. 1b), which were selectively produced in high yields (77–90%, Supplementary Table 2) simply by heating PN C4 in the presence of metal oleate salts (Methods). Typically, a 2:1 molar mixture of PN C4 and Zn(oleate)2, sandwiched between glass plates, was heated at 160 °C for 12 h, where the change in absorption intensity at 700 nm due to Zn PC C4 (Supplementary Fig. 13a) clearly displayed a sigmoidal time-course profile (Fig. 4a, blue) with a shorter induction period than that without Zn(oleate)2 (Fig. 2a, black). We also found that, after the induction period, green-coloured crystalline fibres developed throughout the mixture (Fig. 4b, blue). By means of elemental mapping using scanning electron microscopy–energy dispersive X-ray spectroscopy (SEM–EDX; Supplementary Fig. 13b) together with MALDI–TOF mass spectrometry (Supplementary Fig. 13c), we confirmed that the as-formed fibres are composed solely of Zn PC C4, without any trace of free-base H PC C4. The same held true for SF-ASP using PN C4 in the presence of the oleate salts of Fe (Fig. 4a,b, orange and Supplementary Fig. 14), Co (Fig. 4a,b, purple and Supplementary Fig. 15) and Cu (Fig. 4a,b, red and Supplementary Fig. 16). Heating free-base H PC C4 with the above metal oleates for 24 h resulted in only poor yields of M PC C4 (Supplementary Fig. 17), suggesting that the metal ion is involved in the transition state of the autocatalytic process of SF-ASP. Fig. 4: Sequence and orientation controls of crystalline fibres obtained by SF-ASP. a, Time-dependent absorption spectral changes at 700 nm of the reaction mixtures obtained by SF-ASP using PN C4 with Zn(oleate)2 (blue), Fe(oleate)3 (orange), Co(oleate)2 (purple) and Cu(oleate)2 (red), sandwiched between glass plates on heating at 160 °C. b, Optical images of the reaction mixtures obtained by SF-ASP using PN C4 with Zn(oleate)2 (blue), Fe(oleate)3 (orange), Co(oleate)2 (purple) and Cu(oleate)2 (red) on heating at 160 °C for 12 h. Scale bars, 100 µm. c, Optical images of the reaction mixtures obtained on heating at 180 °C for 2–4 h by multistep SF-ASP using PN C4 with/without Zn(oleate)2, Fe(oleate)3, Co(oleate)2 and Cu(oleate)2, sandwiched between glass plates covered by CYTOP thin films. Scale bars, 30 µm. d,e, Optical images showing the changes of the [H PC C3]CF seeds obtained by SF-ASP in a hot melt of PN C4 (d) and the [H PC C4]CF seeds obtained by SF-ASP in a hot melt of PN C3 (e) at 180 °C for 4 h. Scale bars, 50 µm.
f, Optical images of the reaction mixtures obtained by SF-ASP using PN C4 with 1-dodecanethiol (DCTH), sandwiched between two parallelly oriented PTFE-rubbed glass plates (left) and KBr plates (right) after heating at 180 °C for 12 h. Scale bars, 50 µm. g, Optical images of the reaction mixtures obtained by SF-ASP using PN C4 with Fe(oleate)3 (left) and Co(oleate)2 (middle) or without metal oleates (right) in a 10 T magnetic field after heating at 160 °C for 6 h. Scale bars, 50 µm. Black and blue arrows represent the directions of [Fe PC C4]CF and the applied magnetic flux line, respectively. Although SF-ASP using PN C4 to form [H PC C4]CF or [M PC C4]CF should, in principle, follow a step-growth mechanism, chain coupling is prevented by the sluggish diffusion kinetics under solvent-free conditions, so that the crystalline fibres grow continuously in both directions until PN C4 is completely consumed, similar to living chain-growth processes 43, 44, 45, 46. Hence, we envisioned that one could synthesize block copolymers by multistep SF-ASP using PN C4 in combination with different metal oleates (Methods). As a typical example, active seeds of [Cu PC C4]CF were prepared by chopping its as-formed fibres for 10 s in methanol, and the resulting suspension was cast onto a glass plate and air-dried. This glass plate was covered with powdery PN C4, and the mixture was sandwiched between glass plates and heated to 180 °C, whereupon [Cu PC C4]CF in the hot melt of PN C4 started to grow uniformly in both directions, affording the ABA-type triblock copolymer [H PC C4]CF-[Cu PC C4]CF-[H PC C4]CF in 4 h (Supplementary Fig. 18d). Although the block segments were easily differentiated by their intrinsic colours (Fig. 4c, [H PC C4]CF-[Cu PC C4]CF-[H PC C4]CF), elemental mapping with SEM–EDX (Supplementary Fig. 19) allowed us to confirm that copper, as expected, was localized only in the middle block segment, whereas sulfur was distributed over the entire fibre. Meanwhile, the through-view 2D SAXS patterns collected from the [Cu PC C4]CF and [H PC C4]CF segments (Supplementary Fig. 20) revealed that the structural integrity of both was very high. These observations allow us to conclude that the cross-sectional fibre edges template the epitaxial growth of the [H PC C4]CF segment, affording a 1D supramolecular heterojunction 47, 48, 49, 50. However, note that SF-ASP in the second stage, when conducted for more than 4 h (Fig. 2c), concomitantly gave a non-negligible amount of homotropic [H PC C4]CF. After considerable trial and error, we found that this unfavourable process was suppressed when SF-ASP was conducted using glass plates coated with an amorphous perfluoropolymer, CYTOP (Supplementary Fig. 21), and we thereby obtained a variety of ABA- and even ABCBA-type multi-block copolymers (Fig. 4c and Supplementary Fig. 18). Another important key to obtaining well-defined multi-block copolymers was to combine block segments whose intercolumnar distances match to within 2% (Supplementary Fig. 22 and Supplementary Table 3). In fact, in the multistep SF-ASP using PN C4 and PN C3, where the difference in the intercolumnar distances between [H PC C4]CF and [H PC C3]CF exceeds 9% (Supplementary Table 4), the block segments were highly branched (Fig. 4d,e). Finally, we point out that in SF-ASP, the growth direction of the crystalline fibres can be controlled by the type of substrate used.
For example, when PN C4 was heated between glass plates whose surfaces had been rubbed in advance with a polytetrafluoroethylene (PTFE) rod and oriented in parallel, the resulting [H PC C4]CF were preferentially oriented along the rubbing direction (Fig. 4f, left). Of particular interest, when single-crystalline potassium bromide (KBr) plates were used to sandwich PN C4, SF-ASP gave grid-like, 2D-crosslinked crystalline fibres (Fig. 4f, right). We also found that SF-ASP using PN C4, when conducted with Fe(oleate)3 in a 10 T magnetic field, resulted in the formation of [Fe PC C4]CF that were preferentially oriented orthogonal to the magnetic flux line (Fig. 4g, left). In contrast, SF-ASP using PN C4 (Fig. 4g, right) and PN C4/Co(oleate)2 (Fig. 4g, middle) under the same conditions did not form oriented crystalline fibres. Since [Fe PC C4]CF, once formed, were not magnetically orientable afterwards, we consider that the magnetic field affected the nucleation process of SF-ASP. Computational simulations (Supplementary Fig. 23) suggested that crystalline nuclei consisting of roughly 10^5 molecules of Fe PC C4 probably align perpendicular to the magnetic flux line, whereas those consisting of H PC C4 and Co PC C4 align randomly. Outlook There exists a preconception that, in a solvent-free condensed phase, supramolecular polymerization would not proceed properly because many undesirable kinetic traps could interfere with the delicate noncovalent chain-propagation events. In this article, however, we have overturned this preconception through detailed investigation of our serendipitous finding that green-coloured thin fibres form and elongate on heating a liquid-crystalline PN on a hot stage. SF-ASP, developed here, provides an inhibition-free, template-assisted catalytic organic transformation that takes full advantage of the termination-free nucleation–elongation assembly (living supramolecular polymerization) of its product, which serves as the template, under solvent-free conditions. Considering its potential applicability to the synthesis of other π-electronic and macrocyclic monomers, SF-ASP, which allows for precision macromolecular engineering using in situ produced monomers under solvent-free conditions, might be one of the ideal forms of polymer manufacturing for a sustainable future. Methods Materials Unless otherwise noted, the reagents were used as received from Tokyo Chemical Industry (N-hydroxysuccinimide, sodium hydride, iodomethane, 1,8-diazabicyclo(5.4.0)undec-7-ene, 1-dodecanethiol (DCTH)) and Wako Pure Chemical Industry (N,N′-dicyclohexylcarbodiimide, trifluoroacetic acid, triethylamine (Et3N), potassium carbonate (K2CO3), zinc(II) acetate, 1-pentanol and other anhydrous solvents, such as 1,4-dioxane, tetrahydrofuran, dichloromethane (CH2Cl2) and chloroform (CHCl3)). CYTOP was purchased from the AGC Chemical Company. Single-crystalline potassium bromide (KBr) plates (5.0 × 5.0 mm) were purchased from JASCO Int. Co., Ltd. 4,5-Bis(tert-butyl ethylcarbamate-2-thio)phthalonitrile 51, 3,4,5-trialkyloxybenzoic acids 52, 53 and zinc(II) 54, iron(III) 55, cobalt(II) 56 and copper(II) 57 oleates were prepared according to reported procedures. Highly oriented glass plates rubbed with a PTFE rod were fabricated according to a reported method 58.
Dithioalkylphthalonitrile derivatives 51 were synthesized according to previously reported procedures and unambiguously characterized by 1H and 13C nuclear magnetic resonance (NMR) spectroscopy and MALDI–TOF mass spectrometry. Details of the synthetic procedures and characterization data are provided in Supplementary Methods. General Column chromatography was carried out with Wako C-300 silica gel (particle size 45–75 µm). 1H and 13C NMR spectra were recorded with a JEOL JNM-ECA500 spectrometer operated at 500 and 125 MHz, respectively, where chemical shifts (in ppm) were determined with respect to tetramethylsilane as an internal reference. MALDI–TOF mass spectrometry was performed with an Applied Biosystems MDS SCIEX 4800 Plus MALDI–TOF/TOF analyser using dithranol as the matrix. Infrared spectra were recorded at 25 °C with a JASCO FT/IR-4100 FT-IR spectrometer equipped with an attenuated total reflection unit (PRO450-S). Polarized FT-IR spectra were recorded at 25 °C using a JASCO FT/IR-4100 FT-IR spectrometer connected to an Irtron IRT-5000 microscope unit. Electronic absorption spectra, and time-dependent spectra obtained by detection at a fixed wavelength, were recorded with a JASCO V-670 ultraviolet-visible-near-infrared spectrophotometer with an FP82HT hot stage. DSC was performed with a Mettler–Toledo DSC1 STARe system, where temperature and enthalpy were calibrated with In (430 K, 3.3 J mol−1) and Zn (692.7 K, 12 J mol−1) standard samples using sealed Al pans. The heating/cooling scan rate was 10 °C min−1. Cooling and heating profiles were analysed using the Mettler–Toledo STARe software system. Optical microscopy and POM were performed with a Nikon Eclipse LV100POL polarizing optical microscope equipped with a Mettler–Toledo FP90 controller attached to an FP82HT hot stage and a high-definition camera. Optical microscopy images recording the formation of crystalline fibres over time were analysed using ImageJ software. To evaluate the number and thickness of the crystalline fibres, all of the as-formed fibres in an optical microscopy image were measured and fitted by normal distribution functions. Size-exclusion chromatography was performed at 40 °C with a TOSOH HLC-8320GPC system equipped with a refractive index detector; CHCl3 was used as the eluent at a flow rate of 0.35 ml min−1 through two polystyrene TSKgel SuperHM-1000 columns and one polystyrene TSKgel SuperHM-2000 column connected linearly in series. PXRD measurements were performed with a Rigaku SmartLab powder X-ray diffractometer equipped with a 3 kW Cu anode (Cu Kα radiation, λ = 1.54 Å). The 2θ angles and the position of the incident X-ray beam on the detector were calibrated using several reflections obtained from layered silver behenate (d = 58.380 Å). Crystalline fibres were placed in a glass capillary (1.5 mm diameter) at 25 °C. SAXS experiments were carried out at BL45XU (X-ray wavelength, λ = 1.08 Å) at SPring-8 (Hyogo) with an R-AXIS IV++ imaging-plate area detector (Rigaku). The sample-to-detector distance used for the SAXS measurements was 1.50 m. Nanoindentation experiments were performed using an Asylum MFP-3D atomic force microscope with a diamond tip mounted onto a metal-foil cantilever under low load (1–100 µN). The loading/unloading rate was 30 µN s−1. The elastic modulus and hardness of a crystalline fibre consisting of H PC C4 were evaluated by a reported method 59.
SAED measurements were performed with an FEI Titan3 TM80-300S transmission electron microscopy system at an accelerating voltage of 80 kV. Scanning electron microscopy–energy dispersive X-ray (SEM–EDX) spectroscopy was performed using a Hitachi SU8230 field-emission scanning electron microscope operated at an accelerating electron-beam voltage of 25 kV and equipped with a Bruker X-Flash 6160 EDX detector. A JASTEC JMTD-10T100 superconducting magnet with a vertical bore size of 100 mm was used for the magnetic orientation of the crystalline fibres. Solvent-free synthesis of single-crystalline phthalocyanines and their metal complexes Typically, a powdery sample (1 mg) of PN C4 was sandwiched between two identical glass plates (25 × 25 mm). On heating to the isotropic state, the hot melt of PN C4 fully wetted the gap between the glass plates and was kept at 160 °C for 24 h. After being cooled to 25 °C, the as-formed [H PC C4]CF were isolated by washing with methanol (10 ml) to remove unreacted PN C4 and side products. By a procedure similar to that for H PC C4, except for heating at 160 °C for 12 h under N2, metallophthalocyanines were obtained from mixtures of PN C4 (67 mol%) and metal (Zn, Fe, Co and Cu) oleates (33 mol%). The as-formed [Zn PC C4]CF, [Fe PC C4]CF, [Co PC C4]CF and [Cu PC C4]CF were isolated by washing with methanol (10 ml) and hexane (10 ml) to remove unreacted PN C4, side products and excess oleate salts. Chemical analysis of the reaction mixtures obtained by SF-ASP Typically, PN C4 (M mg) was sandwiched between glass plates and heated at 160 °C for 24 h. The reaction mixture was completely dissolved in CHCl3 (1.0 ml) by sonication, and the solution (C_M) was subjected to size-exclusion chromatography (SEC) equipped with a refractive index detector (Supplementary Fig. 3). H PC C4 (C_PC) and unreacted PN C4 (C_PN) were quantified by analysing the integrals of their SEC elution peaks against the calibration curves of H PC C4 and PN C4 (Supplementary Fig. 4). The results are summarized in Supplementary Table 1, where the weight fractions (wt%) of PN C4, H PC C4 and others in individual reactions were obtained by equations (1)–(3):

$$\mathrm{wt}\%_{\mathrm{PN}} = (C_{\mathrm{PN}}/C_{\mathrm{M}}) \times 100\%$$ (1)

$$\mathrm{wt}\%_{\mathrm{PC}} = (C_{\mathrm{PC}}/C_{\mathrm{M}}) \times 100\%$$ (2)

$$\mathrm{wt}\%_{\mathrm{others}} = 100\% - \mathrm{wt}\%_{\mathrm{PN}} - \mathrm{wt}\%_{\mathrm{PC}}$$ (3)

Kinetic analysis of SF-ASP The chemical transformation of PN C4 into H PC C4 at 160 °C in the fibre-elongation stage (7–11 h) was fitted to pseudo-first-order kinetics (Supplementary Fig. 9) using equation (4):

$$\ln(1 - y_t) = -kt$$ (4)

where k is the kinetic parameter and y_t is the yield of H PC C4 at a given heating time t. y_t is evaluated by equation (5):

$$y_t = (A_t/A_\infty) \times y_\infty$$ (5)

where A_t and A_∞ represent the absorption intensities at 700 nm on heating for t and 24 h, respectively, and y_∞ represents the yield of H PC C4 after heating at 160 °C for 24 h. (A short numerical sketch of equations (1)–(5) is given after the Data availability statement below.) Sequence control of crystalline block fibres obtained by multistep SF-ASP Typically, active crystalline seeds were prepared by chopping as-formed [H PC C4]CF or [M PC C4]CF for 10 s in methanol.
The resulting suspension was cast onto a glass plate coated with a transparent CYTOP thin film and air-dried at 25 °C. This glass plate was covered with PN C4 premixed with or without metal oleates (33 mol%), and the mixture, sandwiched with another CYTOP-coated glass plate, was heated at 180 °C for 2–4 h (Supplementary Fig. 18). The crystalline fibres were isolated by washing with hexane (10 ml) and methanol (10 ml), affording ABA-type triblock fibres. By repeating the above procedure using the ABA-type triblock fibres as active seeds, ABCBA-type multi-block fibres were obtained. Orientation control of crystalline fibres obtained by SF-ASP Typically, a mixture of PN C4 (67 mol%) and DCTH (an H+/e− donor, 33 mol%) was sandwiched between two parallelly oriented glass plates rubbed in advance with a PTFE rod. After heating at 180 °C for 12 h, the as-formed [H PC C4]CF were aligned parallel to the rubbing direction of the PTFE chains (Fig. 4f, left). Using a similar procedure, except with single-crystalline KBr plates as the substrates, the as-formed [H PC C4]CF were crosslinked into an orthogonal grid (Fig. 4f, right). Following the method reported for magnetic orientation 60, a heater holding a glass cell containing PN C4, or its mixtures with Fe(oleate)3 or Co(oleate)2 (33 mol%), was placed in the bore of a superconducting magnet and kept at 160 °C for 6 h. The as-formed [Fe PC C4]CF were aligned perpendicular to the direction of the applied magnetic flux line (Fig. 4g, left), whereas both [Co PC C4]CF (Fig. 4g, middle) and [H PC C4]CF (Fig. 4g, right) were randomly oriented under the same conditions. Data availability Data generated or analysed during this study are provided as source data or included in the Supplementary Information. Further data are available from the corresponding authors on request. Source data are provided with this paper.
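As referenced in the Methods, the following minimal Python sketch illustrates the arithmetic of equations (1)–(5): the SEC-based weight fractions and the pseudo-first-order fit over the fibre-elongation window. All numerical inputs below are hypothetical placeholders (only y_∞ = 81% echoes the yield reported for PN C4); numpy is assumed to be available.

import numpy as np

# Equations (1)-(3): weight fractions from SEC-calibrated concentrations.
def weight_fractions(c_pn, c_pc, c_m):
    wt_pn = 100.0 * c_pn / c_m
    wt_pc = 100.0 * c_pc / c_m
    return wt_pn, wt_pc, 100.0 - wt_pn - wt_pc  # wt% of PN, PC and others

# Equation (5): yield at time t from the 700 nm absorbance trace,
# scaled by the final 24 h yield y_inf.
def yield_at_t(a_t, a_inf, y_inf):
    return (a_t / a_inf) * y_inf

# Equation (4): ln(1 - y_t) = -k t over the elongation stage (7-11 h).
t = np.array([7.0, 8.0, 9.0, 10.0, 11.0])        # heating time in h (hypothetical)
a_t = np.array([0.30, 0.45, 0.57, 0.66, 0.73])   # absorbance at 700 nm (hypothetical)
y_t = yield_at_t(a_t, a_inf=0.90, y_inf=0.81)    # y_inf = 81%, the reported yield
k, _ = np.polyfit(t, -np.log(1.0 - y_t), 1)      # slope of -ln(1 - y_t) versus t
print(f"apparent pseudo-first-order rate constant k = {k:.3f} per h")

A straight line of -ln(1 - y_t) against t, as fitted here, is what identifies the elongation stage as pseudo-first-order in the consumption of PN C4.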
A polymer that catalyzes its own formation in an environmentally friendly solvent-free process has been developed by an all-RIKEN team of chemists. The discovery could lead to the development of inherently recyclable polymer materials that are made using a sustainable process. Polymers are ubiquitous today, but they are detrimental to the environment through the accumulation of plastic waste and the unsustainable nature of conventional polymer manufacture. Polymers are generally made by linking together strings of building blocks, known as monomers, using covalent bonds. But these strong bonds make it difficult to take used, end-of-life plastic items and de-polymerize them to recover the monomers for reuse. Supramolecular polymers, in contrast, consist of arrays of monomers held together by interactions such as hydrogen bonds, which are weaker, and hence more reversible, than covalent bonds. However, the solvents used for manufacturing supramolecular polymers limit their application and sustainability. "Within two decades, solvent-free chemical manufacturing may be the only approved chemical processes, since the large volumes of solvent used in other manufacturing processes are too damaging to the environment," says Takuzo Aida from the RIKEN Center for Emergent Matter Science (CEMS). Aida's team has been investigating wedge-shaped molecules called phthalonitriles. Now, he and nine CEMS colleagues have discovered that these substances melt when heated and then form multiple thin green crystalline fibers, which are rod-shaped supramolecular polymers. The polymerization process is initiated when four of the wedge-shaped pieces come together to form a flat, circular disk. The disk-shaped assembly—the monomer—can then act as a template surface for the next four wedge-shaped precursor molecules to combine, adding a layer to the structure. This process is repeated, with each new layer of the polymer catalyzing the formation of the next, until long rod-like structures form (Fig. 1). The team named their process solvent-free autocatalytic supramolecular polymerization. "The mechanical properties of the crystalline fibers of these supramolecular polymers are analogous to those of poly(alkyl methacrylates)," Aida says. These conventional polymers are used for a range of applications, including plexiglass. The team could create more complex versions of their supramolecular polymers by adding alternate precursor molecules into the mix at certain time points, thereby forming 'block copolymers' with bands of different monomers along the length of each rod. Solvent-free autocatalytic supramolecular polymerization could be used to make supramolecular polymers from a range of starting materials. "It may be applicable to the synthesis of polycyclic aromatic hydrocarbons and cyclic peptides," Aida says.
10.1038/s41563-021-01122-z
Medicine
Researchers identify variation in gene PLD3 that can increase risk of late-onset Alzheimer’s disease
"Rare coding variants in the phospholipase D3 gene confer risk for Alzheimer's disease." Carlos Cruchaga, Celeste M. Karch, Sheng Chih Jin, Bruno A. Benitez, Yefei Cai, Rita Guerreiro, Oscar Harari, Joanne Norton, John Budde, Sarah Bertelsen, Amanda T. Jeng, Breanna Cooper, Tara Skorupa, David Carrell, Denise Levitch, Simon Hsu, Jiyoon Choi, Mina Ryten, UK Brain Expression Consortium (UKBEC), Celeste Sassi, Jose Bras, J. Raphael Gibbs, Dena G. Hernandez, Michelle K. Lupton, John Powell et al. Nature (2013) DOI: 10.1038/nature12825 Journal information: Nature
http://dx.doi.org/10.1038/nature12825
https://medicalxpress.com/news/2013-12-variation-gene-pld3-late-onset-alzheimers.html
Abstract Genome-wide association studies (GWAS) have identified several risk variants for late-onset Alzheimer's disease (LOAD) 1, 2. These common variants have replicable but small effects on LOAD risk and generally do not have obvious functional effects. Low-frequency coding variants, not detected by GWAS, are predicted to include functional variants with larger effects on risk. To identify low-frequency coding variants with large effects on LOAD risk, we carried out whole-exome sequencing (WES) in 14 large LOAD families and follow-up analyses of the candidate variants in several large LOAD case–control data sets. A rare variant in PLD3 (phospholipase D3; Val232Met) segregated with disease status in two independent families and doubled risk for Alzheimer’s disease in seven independent case–control series with a total of more than 11,000 cases and controls of European descent. Gene-based burden analyses in 4,387 cases and controls of European descent and 302 African American cases and controls, with complete sequence data for PLD3, reveal that several variants in this gene increase risk for Alzheimer’s disease in both populations. PLD3 is highly expressed in brain regions that are vulnerable to Alzheimer’s disease pathology, including hippocampus and cortex, and is expressed at significantly lower levels in neurons from Alzheimer’s disease brains compared to control brains. Overexpression of PLD3 leads to a significant decrease in intracellular amyloid-β precursor protein (APP) and extracellular Aβ42 and Aβ40 (the 42- and 40-residue isoforms of the amyloid-β peptide), and knockdown of PLD3 leads to a significant increase in extracellular Aβ42 and Aβ40. Together, our genetic and functional data indicate that carriers of PLD3 coding variants have a twofold increased risk for LOAD and that PLD3 influences APP processing. This study provides an example of how densely affected families may help to identify rare variants with large effects on risk for disease or other complex traits. Main The identification of pathogenic mutations in APP, presenilin 1 (PSEN1) and PSEN2, and the association of apolipoprotein E (APOE) genotype with disease risk, led to a better understanding of the pathobiology of Alzheimer’s disease and to the development of novel animal models and therapies for this disease 3. Recent studies using next-generation sequencing have also identified a protective variant in APP 4, and a low-frequency variant in TREM2 associated with Alzheimer’s disease risk 5, 6, 7, 8 with an odds ratio close to that of one APOE4 allele. These studies have led to the identification of functional variants with large effects on Alzheimer’s disease pathogenesis, in contrast to the loci identified through GWAS 1, 2. Low-frequency coding variants not detected by GWAS may be a source of functional variants with a large effect on LOAD risk 5, 6, 7, 8; however, the identification of such variants remains challenging because most study designs require WES in very large data sets. One potential solution is to perform WES or whole-genome sequencing in a highly selected population at increased risk for disease, followed by a combination of genotyping and deep re-sequencing of the variant or gene of interest in large numbers of cases and controls.
We reported previously that families with a clinical history of LOAD in four or more individuals are enriched for genetic risk variants in known Alzheimer's disease and frontotemporal dementia (FTD) genes, but some of these families do not carry pathogenic mutations in the known Alzheimer's disease or FTD genes 9, 10, suggesting that additional genes may contribute to LOAD risk. We ranked 868 LOAD families from the National Institute on Aging (NIA)-LOAD study on the basis of the number of affected individuals, the number of generations affected, the number of affected and unaffected individuals with DNA available, the number of individuals with a definite or probable diagnosis of Alzheimer's disease, early age at onset (AAO) and APOE genotype (discarding families in which APOE4 segregates with disease status), and selected 14 families for WES. In the 14 selected families, there were at least four affected individuals per family, with DNA available for at least three of these individuals. We sequenced at least two affected individuals per family, prioritizing distantly related affected individuals with the earliest AAO. We also sequenced one unaffected individual in nine families and two unaffected individuals in one family. In total, we performed WES on 29 affected individuals and 11 unaffected individuals from 14 families of European American ancestry (Supplementary Table 1 and Supplementary Fig. 2). All variants shared by affected individuals but absent in unaffected individuals within a family, with a minor allele frequency (MAF) lower than 0.5% in the Exome Variant Server (EVS), were selected and genotyped in the remaining family members to determine segregation with disease (Supplementary Information). We next examined whether individual variants or variants in the same gene segregated with disease in more than one family. A single variant, rs145999145 (Val232Met, PLD3, chromosome 19q13.2), segregated with disease in two independent families (Fig. 1 and Supplementary Fig. 1). We then sought to determine whether this variant was associated with increased risk for sporadic Alzheimer's disease in seven independent data sets (4,998 Alzheimer's disease cases and 6,356 controls of European descent from the Knight Alzheimer's Disease Research Center (ADRC), NIA-LOAD, the NIA-UK data set, the Cache-County study, the Universities of Toronto, Nottingham and Pittsburgh, the National Institute of Mental Health (NIMH) Alzheimer's disease series, and the Wellderly study 7, 11, 12, 13, 14; Extended Data Table 1). PLD3 (V232M) was associated with both Alzheimer's disease risk (P = 2.93 × 10−5, odds ratio = 2.10, 95% CI = 1.47–2.99; Table 1) and AAO (P = 3 × 10−3; Extended Data Fig. 1). The frequency of PLD3 (V232M) was higher in Alzheimer's disease cases than in controls in each age-, gender- and ethnicity-matched data set, with a similar estimated odds ratio for each data set (Extended Data Table 1 and Extended Data Fig. 2), suggesting that the association is unlikely to be a false positive due to population stratification. This was confirmed when population principal components derived from GWAS data were included (Supplementary Information and Supplementary Figs 2 and 3). The association of the Val232Met variant with Alzheimer's disease risk was also independent of APOE genotype (Supplementary Information, Supplementary Table 3 and Supplementary Fig. 4). Figure 1: Summary of the main genetic findings.
The diagram shows the steps used to filter the variants identified by exome sequencing, which led to the identification of the PLD3 (V232M) variant. The diagram also shows the subsequent genetic analyses in large case–control data sets that validated the association of the Val232Met variant and PLD3 with risk for Alzheimer's disease. CI, confidence interval; OR, odds ratio. Table 1 Association between PLD3 (V232M) and Alzheimer's disease risk in individuals of European descent. LOAD risk variants, such as APOE4, are most common in Alzheimer's disease cases with a family history of disease and least common in elderly controls without disease 8, 9. We examined the frequency of Val232Met in three groups of elderly individuals without dementia stratified by age (>65 years, >70 years and >80 years; Table 1) and compared them with sporadic versus familial Alzheimer's disease cases. As predicted for an Alzheimer's disease risk allele, Val232Met showed age-dependent differences in frequency among controls, with the lowest frequency in the Wellderly data set, a series composed of healthy individuals without dementia who were older than 80 years (carrier frequency 0.27%). Similarly, no Val232Met carriers were found among the 303 individuals without dementia who had normal cerebrospinal fluid Aβ42 and tau profiles, suggesting that the calculated odds ratio for the Val232Met variant when compared with all controls may be an underestimate (Supplementary Information and Supplementary Table 4). As expected, the frequency of Val232Met was higher in familial cases than in sporadic cases (2.62% in familial versus 1.36% in sporadic cases). Several risk variants have been observed in APP, PSEN1, PSEN2 and APOE, supporting the role of these genes in Alzheimer's disease risk 3, 4. To identify additional risk variants in PLD3, we sequenced the PLD3 coding region in 2,363 cases and 2,024 controls of European descent (Extended Data Tables 2 and 3). Fourteen variants were observed more frequently in cases than in controls, including nine variants that were unique to cases (Fig. 2a and Supplementary Information). The gene-based burden analysis revealed a genome-wide significant excess of PLD3 coding-variant carriers among Alzheimer's disease cases (7.99%) compared with controls (3.06%; P = 1.44 × 10−11; odds ratio = 2.75, 95% CI = 2.05–3.68). When the Val232Met variant was excluded, the association remained highly significant, still passing genome-wide multiple-test correction (P = 1.58 × 10−8; odds ratio = 2.58, 95% CI = 1.87–3.57; Extended Data Table 3), indicating that additional variants in PLD3 increase risk for Alzheimer's disease independently of Val232Met. There were two additional highly conserved variants (Supplementary Fig. 5) that were nominally associated with LOAD risk: Met6Arg (P = 0.02; odds ratio = 7.73, 95% CI = 1.09–61) and Ala442Ala (P = 3.78 × 10−7; odds ratio = 2.12, 95% CI = 1.58–2.83). The Ala442Ala variant showed an association with LOAD risk in four independent series (Extended Data Table 4). This variant was included in the gene-based analysis because our bioinformatic and functional analyses indicate that it affects splicing and gene expression (see below). Figure 2: Most of the PLD3 coding variants are located in exon 11, and the Ala442Ala variant affects splicing. a, Schematic representation of PLD3 and the relative positions of the PLD3 variants.
PLD3 has two PLD phosphodiesterase domains, which contain an HKD signature motif (H-X-K-X(4)-D-X(6)-G-T-X-N, where X represents any amino acid residue). The scheme also shows the exon composition of the longest PLD3 mRNA and the positions of the variants found in this study. *Variants significantly associated with Alzheimer's disease risk. †Variants found only in Alzheimer's disease cases. ‡Variants that are more frequent in Alzheimer's disease cases than in controls. b, PLD3 neuronal gene expression is significantly lower in Alzheimer's disease cases than in controls. We used the Gene Expression Omnibus data set GSE5281 (ref. 26), in which neurons were laser-captured, to analyse whether PLD3 mRNA expression levels differ between Alzheimer's disease cases and cognitively normal elderly individuals. c,d, The PLD3 (A442A) variant is associated with lower total PLD3 mRNA expression and lower levels of exon-11-containing transcripts. Primers specific to exons 7 to 11 (with two pairs of primers for exon 11) were designed with PrimerExpress (c). cDNA was prepared from parietal-lobe RNA of 8 PLD3 (A442A) carriers and 10 age-, gender-, APOE-, clinical dementia rating (CDR)- and post-mortem interval (PMI)-matched individuals. Relative expression of exon 11 compared to the other exons was calculated by the ΔCt (change in cycle threshold) method. Exon-11-containing transcripts were 20% lower in Ala442Ala carriers (P < 0.05) relative to exon-7–10-containing transcripts. Graphs represent the mean ± s.e.m. Real-time PCR was used to quantify total PLD3 mRNA, standardized against GAPDH mRNA as a reference (d). The P value in d is for the gene-expression levels of major allele carriers versus minor allele carriers after correcting for dementia severity. If the association of PLD3 with Alzheimer's disease risk is real, it is possible that rare coding variants in PLD3 in other populations will also increase risk for Alzheimer's disease. We therefore sequenced PLD3 in 302 African American Alzheimer's disease cases and controls. Both the Val232Met and the Ala442Ala variants were found in Alzheimer's disease cases but not controls, and the Ala442Ala variant showed a significant association with Alzheimer's disease risk (P = 0.03). There was also a significant association with LOAD risk at the gene level (P = 1.4 × 10−3; odds ratio = 5.48, 95% CI = 1.77–16.92; Fig. 1, Extended Data Table 5 and Supplementary Information). This consistent evidence of association with Alzheimer's disease risk, at the single-nucleotide polymorphism (SNP) and gene levels in two different populations, strongly supports PLD3 as an Alzheimer's disease risk gene. To begin to understand the link between PLD3 and Alzheimer's disease, we analysed PLD3 expression in Alzheimer's disease case and control brains. In human brain tissue from cognitively normal individuals, PLD3 showed high levels of expression in the frontal, temporal and occipital cortices and hippocampus (Supplementary Fig. 6). Using gene-expression data from laser-captured neurons from Alzheimer's disease cases and controls, PLD3 gene expression was significantly lower in Alzheimer's disease cases than in controls (P = 8.10 × 10−10; Fig. 2b). This result was replicated in three additional independent data sets (Supplementary Information and Extended Data Fig. 3). Bioinformatic analyses predicted that the Ala442Ala variant affects alternative splicing (Supplementary Fig. 7 and Supplementary Information).
We found that Ala442Ala is associated with lower levels of total PLD3 messenger RNA (Fig. 2d) and lower levels of transcripts containing exon 11 (Fig. 2c and Supplementary Fig. 8), supporting the functional effect of this variant. PLD3 is a non-classical, poorly characterized member of the PLD superfamily of phospholipases. PLD1 and PLD2 have been previously implicated in APP trafficking and Alzheimer's disease 15, 16, 17. To determine whether PLD3 also affects APP processing, wild-type human PLD3 was overexpressed in mouse neuroblastoma (N2A) cells that stably express wild-type human APP695 (APP695-WT; cells termed N2A-695). In this system, extracellular Aβ42 and Aβ40 were decreased by 48% and 58%, respectively, compared to the empty vector (P < 0.0001; Fig. 3a). Conversely, knockdown of endogenous PLD3 expression by short hairpin RNA (shRNA) in N2A-695 cells resulted in higher levels of extracellular Aβ42 and Aβ40 than in cells transfected with scrambled shRNA (Fig. 3b). To determine whether the observed effects on APP processing were unique to PLD3 or common among the phospholipase D protein family, we co-expressed APP695-WT with PLD1, PLD2 and PLD3 in human embryonic kidney (HEK293T) cells. Overexpression of PLD3, but not empty vector, PLD1 or PLD2, resulted in a substantial decrease in full-length APP levels (Fig. 3c). Extracellular Aβ42 and Aβ40 levels were significantly reduced in cells overexpressing PLD1, PLD2 and PLD3 compared to control (Fig. 3c). Interestingly, overexpression of catalytically inactive PLD1 and PLD2 variants (PLD1 (K898R) and PLD2 (K758R)) restored extracellular Aβ42 and Aβ40 levels to control values, demonstrating that this is in part a phospholipase-activity-dependent effect (Fig. 3c). Overexpression of a PLD3 dominant-negative variant (PLD3 (K418R)) that inhibits myotube formation 18 failed to restore full-length APP and Aβ42 and Aβ40 to normal levels (Fig. 3c). Furthermore, PLD3 can be co-immunoprecipitated with APP in cultured cells (Extended Data Fig. 4). Together, these studies demonstrate that PLD3 has a role in APP processing that is functionally distinct from PLD1 and PLD2. These findings are consistent with the human genetic and brain expression data presented above; lower PLD3 expression and function are correlated with higher APP and amyloid-β levels and with more extensive Alzheimer's-disease-specific pathology (Supplementary Table 4). Figure 3: PLD3 affects APP processing. a,b, Overexpression and knockdown of PLD3 produce opposing effects on extracellular amyloid-β levels. N2A cells stably expressing human APP695-WT were transiently transfected with vectors containing no insert (pcDNA3), human PLD3-WT, scrambled shRNA (Origene), or mouse PLD3 shRNA (Origene) for 48 h. Cell media were analysed with Aβ40 and Aβ42 ELISAs and corrected for total intracellular protein. Amyloid-β levels were then expressed relative to pcDNA3. Graphs represent the mean ± s.e.m. Overexpression of human PLD3 produces significantly less extracellular Aβ42 and Aβ40 (a). *P < 0.0001. Knockdown of endogenous PLD3 produces significantly more extracellular Aβ42 and Aβ40 (b). *P < 0.002. c, Members of the PLD protein family have different effects on APP processing. HEK293T cells were transiently transfected with vectors containing human APP-WT and an empty vector (pcDNA3), PLD1, PLD2 or PLD3-WT, or PLD1, PLD2 or PLD3 carrying a dominant-negative mutation. Left panel, PLD3 affects full-length APP levels.
Cell lysates were extracted in non-ionic detergent and analysed by SDS–PAGE and immunoblotting with antibodies to the Myc-tag on APP (9E10) or β-tubulin. Middle (Aβ42) and right (Aβ40) panels, cell media were analysed with Aβ40 and Aβ42 ELISAs and corrected for total intracellular protein. Graphs represent the mean ± s.e.m. *P < 0.01, different from pcDNA3; **P = 0.002, different from PLD1-WT; ***P < 0.0001, different from PLD2-WT. Images are representative of at least three replicate experiments. Here we provide extensive genetic evidence that PLD3 is an Alzheimer's disease risk gene: genome-wide significant evidence that rare variants in PLD3 increase risk for Alzheimer's disease in multiple data sets and two populations. In addition, our functional studies confirm that PLD3 affects APP processing, in a manner that is consistent with increased risk for Alzheimer's disease 3, 19. This work also provides a second example of a novel gene containing rare variants that influence risk for Alzheimer's disease 5, 7, 8. Although these variants have low population attributable fraction (the proportion of cases in the population attributable to PLD3 variants) and low diagnostic utility owing to their rarity, they provide important and novel insights into Alzheimer's disease pathogenesis. Our success in identifying multiple families carrying the Val232Met variant, and the enrichment of this variant in LOAD families compared to sporadic Alzheimer's disease cases, demonstrates the power of using a highly selected sample of multiplex LOAD families for variant discovery. The studies on TREM2 (refs 5, 6, 7, 8), and this report, suggest that next-generation sequencing projects will identify additional low-frequency and rare variants associated with Alzheimer's disease. Methods Summary Participants Samples were obtained from seven independent data sets totalling 4,998 Alzheimer's disease cases and 6,356 controls of European descent from the Knight ADRC, NIA-LOAD, the NIA-UK data set, the Cache-County study, the Universities of Toronto, Nottingham and Pittsburgh, the NIMH-AD series, and the Wellderly study 7, 11, 12, 13, 14. Exome sequencing Enrichment of coding exons and flanking intronic regions was carried out using a solution hybrid selection method with the SureSelect human all exon 50-Mb kit (Agilent Technologies) as previously described 20. SNP genotyping SNPs were genotyped using Illumina Golden Gate, Sequenom, KASPar 21, 22 and/or TaqMan assays. PLD3 sequencing PLD3 was sequenced using a pooled-DNA sequencing design as described previously 9, 23, 24. All rare missense or splice-site variants were then validated by Sequenom and KASPar genotyping. Gene-expression and alternative splicing analyses Total RNA was extracted using the RNeasy mini kit (Qiagen). Complementary DNA was prepared from the total RNA using the High-Capacity cDNA Archive kit (ABI). Gene-expression levels were analysed by real-time polymerase chain reaction (PCR) using an ABI-7900 real-time PCR system. Statistical analyses All of the single-SNP analyses were performed using Fisher's exact test. Allelic association with risk for Alzheimer's disease was tested using 'proc logistic' in SAS, including APOE genotype, age, principal component (PC) factors from population stratification analyses, and study as covariates when available. Gene-based analyses were performed using the optimal SNP-set Kernel Association Test (SKAT-O) 25.
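For concreteness, the single-variant test described above reduces to Fisher's exact test on a 2×2 carrier table, with the odds ratio and a Wald confidence interval derived from the same counts. The Python sketch below, assuming scipy is available, uses hypothetical counts chosen only for illustration; the actual per-series counts are in the Extended Data tables. It is a simplified stand-in for the covariate-adjusted 'proc logistic' analysis, not a reproduction of it.

import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table of Val232Met carriers vs non-carriers
# (illustrative only; real counts appear in Extended Data Table 1).
#             carriers  non-carriers
cases    = [      90,         4908]
controls = [      56,         6300]

odds_ratio, p_value = fisher_exact([cases, controls], alternative="two-sided")

# Wald 95% confidence interval on the log odds ratio.
a, b = cases
c, d = controls
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), P = {p_value:.1e}")

In the published analysis, the logistic regression additionally adjusts the odds ratio for APOE genotype, age, principal components and study, which a raw 2×2 test cannot do.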
Cell-based studies To assess the effects of PLD3 expression on APP cleavage, vectors containing PLD3-WT or PLD3 shRNA were transiently transfected into mouse N2A cells stably expressing human APP695-WT. Aβ40 and Aβ42 were measured in conditioned media by enzyme-linked immunosorbent assay (ELISA) (Invitrogen). PLD3 silencing was confirmed by quantitative PCR (qPCR). To assess the effects of PLD proteins on APP cleavage, HEK293T cells were transiently transfected with vectors containing PLD1, PLD2 and PLD3-WT or dominant-negative mutations. Aβ40 and Aβ42 were measured in conditioned media by ELISA. Full-length APP levels were measured by immunoblot analysis of cell lysates. Online Methods Participants and study design The Institutional Review Board (IRB) at Washington University School of Medicine approved the study. Written informed consent was obtained from participants and their family members by the Clinical Core of the Knight ADRC. The approval number for the Knight ADRC Genetics Core is 93-0006. Knight-ADRC samples The Knight-ADRC sample included 1,114 late-onset Alzheimer's disease (LOAD) cases and 913 cognitively normal controls (377 older than 70 years) of European descent, and 302 African American Alzheimer's disease cases and controls, matched for age, gender and ethnicity. These individuals were evaluated by Clinical Core personnel of the Knight ADRC at Washington University. Cases received a clinical diagnosis of Alzheimer's disease dementia in accordance with standard criteria; dementia severity was determined using the Clinical Dementia Rating (CDR) 27. Cerebrospinal fluid (CSF) levels data set: a subset (n = 528) of the Knight-ADRC samples had total tau protein and Aβ42 levels measured in the CSF by ELISA. Of these 528, 303 did not have dementia (CDR = 0) and were elderly (over 65 years of age), with high CSF Aβ42 levels (>500 pg ml−1). A description of the CSF data set used in this study can be found in another paper 11. CSF collection and Aβ42, tau and phosphorylated tau181 measurements were performed as described previously 28. NIA-LOAD Participants from the National Institute on Aging Late-Onset Alzheimer Disease (NIA-LOAD) Family Study included a single individual with dementia from each of 868 families with at least three Alzheimer's disease-affected individuals, and 881 unrelated elderly control individuals without dementia (545 older than 70 years of age). All Alzheimer's disease cases were diagnosed with dementia of the Alzheimer's type (DAT) using criteria equivalent to the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria for probable Alzheimer's disease 29. NIA-LOAD families were ascertained on the basis of the following criteria: probands (the affected individuals through whom the families were recruited into the study) were required to have a diagnosis of definite or probable LOAD (onset after 60 years of age) and a sibling with definite, probable or possible LOAD with a similar age at onset. A third biologically related family member (first, second or third degree) was also required, regardless of affection status. This individual had to be ≥60 years of age if unaffected, or ≥50 years of age if diagnosed with LOAD or mild cognitive impairment 12.
Within each pedigree, we selected a single individual for the case–control series by identifying the youngest affected family member with the most definitive diagnosis (that is, individuals with autopsy confirmation were chosen over those with clinical diagnosis only). Unrelated controls without dementia who were used for the NIA-LOAD case–control series had no family history of Alzheimer's disease and were matched to the cases as previously described 12. Only individuals of European descent, based on the principal component (PC) factors from population stratification analyses, were included. Written informed consent was obtained from all participants, and the study was approved by local IRB committees. Wellderly Study The Scripps Translational Science Institute's Wellderly study has recruited more than 1,000 healthy elderly participants. Inclusion criteria specify informed consent, age >80 years, blood or saliva donation, compliance with protocol-specified procedures, and no or mild ageing-related medical conditions. Exclusion criteria include self-reported cancer (excluding basal and squamous cell skin cancer), coronary artery disease or myocardial infarction, stroke or transient ischaemic attack, deep vein thrombosis or pulmonary embolism, chronic renal failure or haemodialysis, Alzheimer's or Parkinson's disease, diabetes, aortic or cerebral aneurysm, or the use of oral chemotherapeutic agents, anti-platelet agents (excluding aspirin), cholinesterase inhibitors for Alzheimer's disease, or insulin. All genotyped individuals were of European descent. Cache-County study The Cache-County Study was initiated in 1994 to investigate the association of APOE genotype and environmental exposures with cognitive function and dementia. A cohort comprising 5,092 Cache County, Utah, residents (representing 90% of all individuals in the county who were aged 65 or older) has been followed continually for over 15 years, completing four triennial waves of data collection including clinical assessments 13. Genotypes were obtained for 255 individuals with dementia and 2,471 elderly cognitively normal individuals 13. All individuals genotyped were of European descent. UK-NIA data set A description of the UK-NIA data set can be found in another paper 7. In brief, this data set includes WES from 143 Alzheimer's disease cases and 183 elderly control individuals without dementia. All subjects were of European descent. University of Pittsburgh data set The PLD3 (V232M) variant was genotyped in 2,211 subjects, including 1,253 Alzheimer's disease cases (62.6% females) and 958 elderly control individuals without dementia (64.3% females). A complete description of the data set can be found in another paper 14. All individuals were of European descent. Toronto data set The Toronto data set was composed of 269 unrelated Alzheimer's disease cases (53% females) and 250 unrelated controls without dementia (56% females) of European descent. The mean (s.d.) age at onset of Alzheimer's disease was 73 (±8) years, and the mean (s.d.) age at last examination of the controls was 73 (±10) years. The study was approved by the IRB of the University of Toronto. Exome sequencing Enrichment of coding exons and flanking intronic regions was performed using a solution hybrid selection method with the SureSelect human all exon 50-Mb kit (Agilent Technologies) following the manufacturer's standard protocol. This step was performed by the Genome Technology Access Center at Washington University.
The captured DNA was sequenced by paired-end reads on the HiSeq 2000 sequencer (Illumina). Raw sequence reads were aligned to the reference genome hg19 using Novoalign (Novocraft Technologies). Base and SNP calling was performed with SAMtools. SNP annotation was carried out using version 5.07 of the SeattleSeq Annotation server 20. On average, 95% of the exome had greater than eightfold coverage. SNP calls were made using SAMtools 30. SNPs identified with a quality score lower than 20 and a depth of coverage lower than 5 were removed. More than 2,500 novel variants in the coding region were found per individual. We identified all variants shared by the affected individuals in a family. Variants not present in the 1000 Genomes Project or the Exome Variant Server (EVS), or with a frequency lower than 0.5% in the EVS, were selected. On average, 80 coding variants were selected for each family. The selected variants were then genotyped in the remaining sampled family members. We validated more than 98% of the selected variants, confirming the high specificity of our exome-sequencing method and analysis. On average, we genotyped a total of 13 family members (7 cases and 6 controls) per family. SNP genotyping SNPs were genotyped using the Illumina Golden Gate, Sequenom, KASPar and/or TaqMan genotyping technologies. Only SNPs with a genotyping call rate higher than 98% and in Hardy–Weinberg equilibrium were used in the analyses. The principle of the MassARRAY system is PCR-based, with different-sized products analysed by Sequenom MALDI-TOF mass spectrometry 21, 31. The KBioscience Competitive Allele-Specific PCR (KASP) system is a FRET-based endpoint-genotyping technology, v4.0 SNP (KBioscience) 21, 31. Genotype call rates were greater than 98%. PLD3 sequencing PLD3 was sequenced in 2,363 cases and 2,024 controls of European origin, and 130 cases and 172 controls of African American descent, using a pooled-DNA sequencing design as described previously 9, 23, 24, 32. In brief, equimolar amounts of individual DNA samples were pooled together following quantification using the Quant-iT PicoGreen reagent. Pools contained 100 ng of DNA per individual, from 94 individuals. The coding exons and flanking regions (a minimum of 50 bp on each side) were individually PCR-amplified using specific primers and Pfu Ultra high-fidelity polymerase (Stratagene). An average of 20 diploid genomes (approximately 0.14 ng DNA) per individual was used as input. PCR products were cleaned using QIAquick PCR purification kits, quantified using the Quant-iT PicoGreen reagent and ligated in equimolar amounts using T4 ligase and T4 polynucleotide kinase. After ligation, concatenated PCR products were randomly sheared by sonication and prepared for sequencing on an Illumina HiSeq 2000 according to the manufacturer's specifications. A pCMV6-XL5 amplicon (1,908 base pairs) was included in the reaction as a negative control. As positive controls, ten different constructs (p53 gene) with synthetically engineered mutations at a relative frequency of one mutated copy per 188 normal copies were amplified and pooled with the PCR products. Paired-end reads (101 bp) were aligned to the human genome reference assembly build 36.1 (hg19) using SPLINTER 32. SPLINTER uses the positive control to estimate sensitivity and specificity for variant calling.
The wild-type:mutant ratio in the positive control is similar to the relative frequency expected for a single mutation in one pool (1 chromosome mutated among 94 diploid samples = 1 in 188 chromosomes). SPLINTER uses the negative control (first 900 bp) to model the errors across the 101-bp Illumina reads and to create an error model for each sequencing run. Based on the error model, SPLINTER calculates a P value for the probability that a predicted variant is a true positive. The P value at which all mutants in the positive controls were identified was defined as the cut-off value giving the best sensitivity and specificity. All mutants included as part of the amplified positive control vector were found upon achieving >30-fold coverage at mutated sites (sensitivity = 100%), and only ∼ 80 sites in the 1,908-bp negative control vector were predicted to be polymorphic (specificity = 95%). Variants with a P value below this cut-off were considered for follow-up genotyping confirmation. All rare missense or splice-site variants were then validated by Sequenom and KASPar genotyping in each individual included in the pools. To avoid any batch or plate effects, cases and controls were included in each genotyping plate and all genotyping was performed in a single experiment. Finally, to confirm all of the heterozygous calls, we created a custom DNA plate including all of the heterozygotes (cases and controls) for all of the variants, and then genotyped them again by Sequenom, creating a new Sequenom set. Gene-expression and alternative splicing analyses Total RNA was extracted from 82 Alzheimer’s disease cases and 39 individuals without dementia using the RNeasy mini kit (Qiagen) following the manufacturer’s protocol. Extracted RNA was treated with DNase I to remove any potential DNA contamination. cDNAs were prepared from the total RNA using the High-Capacity cDNA Archive kit (ABI). Gene-expression levels were analysed by real-time PCR, using an ABI-7900 real-time PCR system. The PLD3 ( A442A ) variant was genotyped by KASPar, as described above, in DNA extracted from the parietal lobe of 82 Alzheimer’s disease cases and 39 individuals without dementia. A total of eight carriers of the Ala442Ala variant were identified. Total PLD3 expression: gene expression was analysed by real-time PCR, using an ABI-7500 real-time PCR system. TaqMan assays were used to quantify PLD3 mRNA levels. Primers and the TaqMan probe for the reference gene, GAPDH, were designed over exon–exon boundaries using Primer Express software, v3 (ABI) (sequences available on request). Cyclophilin A (ABI: 4326316E) was also used as a reference gene. Each real-time PCR run included within-plate triplicates and each experiment was performed at least twice for each sample. Alternative splicing: we selected eight Ala442Ala carriers as well as eight CDR-, age-, APOE- and PMI-matched individuals to analyse the expression level of exon 11-containing transcripts, exon 11 being the exon in which the Ala442Ala variant is located.
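The calibration figures quoted above come down to two ratios, made explicit in this small worked example (the numbers are from the text; the script itself is only illustrative):

```python
# One pool contains 94 diploid individuals, i.e. 188 chromosomes, so a
# singleton heterozygous variant has an expected allele frequency of 1/188,
# which is exactly the ratio engineered into the p53 positive control.
chromosomes = 2 * 94
singleton_freq = 1 / chromosomes          # ~0.0053

# Negative control: ~80 sites called polymorphic in a 1,908-bp vector that
# contains no true variants corresponds to the quoted ~95% specificity.
false_positive_rate = 80 / 1908           # ~0.042
specificity = 1 - false_positive_rate     # ~0.958

print(f"singleton frequency: {singleton_freq:.4f}, specificity: {specificity:.3f}")
```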
Real-time PCR assays were used to quantify PLD3 exon 7 (forward primer, 5′-GCAGCTCCATCCCATCAACT-3′; reverse, 5′-CTTGGTTGTAGCGGGTGTCA-3′), exon 8 (forward primer, 5′-CTCAACGTGGTGGACAATGC-3′; reverse, 5′-AGTGGGCAGGTAGTTCATGACA-3′), exon 9 (forward primer, 5′-ACGAGCGTGGCGTCAAG-3′; reverse, 5′-CATGGATGGCTCCGAGTGT-3′), exon 10 (forward primer, 5′-GGTCCCCGCGGATGA-3′; reverse, 5′-GGTTGACACGGGCATATGG-3′) and exon 11 (first pair of primers: forward, 5′-CCAGCTGGAGGCCATTTTC-3′; reverse, 5′-TGTCAAGGTCATGGCTGTAAGG-3′; second pair: forward, 5′-GCTGCTGGTGACGCAGAAT-3′; reverse, 5′-AGTCCCAGTCCCTCAGGAAAA-3′). Two pairs of primers were designed for exon 11, with exons 7–10 serving as internal controls. SYBR-green primers were designed using Primer Express software, v3 (ABI). Each real-time PCR run included within-plate duplicates and each experiment was performed at least twice for each sample. Real-time data were analysed using the comparative Ct method. Only samples with a standard error of <0.15 were analysed. The Ct values for exon 11 were normalized to the Ct values for exons 7–10. The relative exon 11 levels for the Ala442Ala carriers versus the non-carriers were compared using a t -test. PLD3 gene expression in public databases We also used the GEO data sets GSE15222 (ref. 33 ) and GSE5281 (ref. 26 ) to analyse the association of PLD3 gene expression with case–control status. The GSE15222 data set contains genotype and expression data from 486 late-onset Alzheimer’s disease cases and 279 neuropathologically normal individuals without dementia. In the GSE5281 data set, samples were laser-captured from cortical regions of 16 normal elderly humans (10 males and 4 females) and from 33 Alzheimer’s disease cases (15 males and 18 females). The mean age of cases and controls was 80 years. All samples were run on the Affymetrix U133 Plus 2.0 array. RNA data were re-normalized to an average expression of 8 units on a log 2 scale. As potential covariates we analysed the brain region, gender and age for each sample. Stepwise discriminant analysis was used to identify the potential covariates to be included in the analysis of covariance (ANCOVA). For this data set we also extracted the gene-expression levels for APP (probe 211277_x_at), PSEN1 (1559206_at) and PSEN2 (203460_s_at) to examine the correlation between PLD3 and APP , PSEN1 and PSEN2 using the Pearson correlation method. Human brain samples and analysis of the Affymetrix Human Exon 1.0 ST array Quantification and analysis of PLD3 gene expression in brains was performed as previously described 34 . In brief, the human data used here were provided by the UK Human Brain Expression Consortium 34 and consisted of 101 control post-mortem brains. All samples originated from individuals with no significant neurological history or neuropathological abnormality and were collected by the MRC Edinburgh Brain Bank 35 , ensuring a consistent dissection protocol and sample-handling procedure. A summary of the available demographic details of these samples, including a thorough analysis of their effects on array quality, is provided in another paper 36 . All samples were accompanied by fully informed consent for retrieval and were authorized for ethically approved scientific investigation (Research Ethics Committee number 10/H0716/3). Total RNA was isolated from human post-mortem brain tissues using the miRNeasy 96-well kit (Qiagen).
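The comparative Ct calculation used for the splicing analysis above can be written out explicitly. In this minimal sketch the Ct values are invented for illustration, and the exact normalization details may differ from the authors' procedure:

```python
import statistics

def relative_exon11_level(ct_exon11, ct_reference_exons):
    """Comparative Ct: relative level = 2 ** -(Ct_target - Ct_reference),
    with the reference taken as the mean Ct of exons 7-10 of the same sample."""
    delta_ct = ct_exon11 - statistics.mean(ct_reference_exons)
    return 2 ** (-delta_ct)

# Invented triplicate-averaged Ct values for one carrier and one non-carrier:
carrier = relative_exon11_level(27.9, [24.1, 24.3, 24.0, 24.2])
non_carrier = relative_exon11_level(26.8, [24.2, 24.1, 24.0, 24.3])
print(carrier / non_carrier)  # fold difference; groups compared with a t-test
```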
The quality of total RNA was evaluated by the 2100 Bioanalyzer (Agilent) and RNA 6000 Nano Kit (Agilent) before processing with the Ambion WT Expression Kit and Affymetrix GeneChip Whole Transcript Sense Target Labelling Assay and hybridization to the Affymetrix Exon 1.0 ST array. All arrays were pre-processed by Robust Multi-array Average using Partek Genomics Suite v6.6 (Partek). The resulting expression data were corrected for individual effects (within which are nested post-mortem interval, brain pH, sex, age at death and cause of death) and experimental batch effects (date of hybridization). Transcript-level expression was calculated for 26,993 genes using Winsorized means (Winsorizing the data below 10% and above 90%). RNA-pathway analysis To evaluate the biological and functional relevance of co-expressed genes within the PLD3 -containing modules, we used Weighted Gene Co-expression Network Analysis (WGCNA) and DAVID v6.7 ( ), the database for annotation, visualization and integrated discovery 37 . We restricted WGCNA to 15,409 transcripts that passed the Detection Above Background (DABG) criteria ( P < 0.001 in at least 50% of samples in at least one brain region), had a coefficient of variation >5% and had expression values exceeding 5 in all samples in at least one brain region. We followed a step-by-step procedure for network construction and module detection. In short, for each brain region, the Pearson correlations between all genes across all relevant samples were derived. We then calculated a signed-weighted co-expression adjacency matrix, allowing us to consider only positive correlations. A power of 12, the default soft-threshold parameter for constructing a signed weighted network 38 , was used in all brain regions, after checking that this threshold recapitulated scale-free topology 39 . Topological overlap, a more biologically meaningful measure of node interconnectedness (similarity) 9 , 23 than correlation, was subsequently calculated, and genes were hierarchically clustered using 1 − topological overlap as the distance measure. Finally, modules were determined by using a dynamic tree-cutting algorithm. WGCNA led to the identification of several co-expression modules, varying in number and size across the ten brain regions. We examined the overrepresentation (that is, enrichment) of the three Gene Ontology (GO) categories (biological processes, cellular components and molecular function) and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways for each list of genes co-expressed with PLD3 in each tissue, by comparing the number of significant genes annotated to each biological category with the number expected by chance. Statistical analyses All single-SNP analyses were performed using Fisher’s exact test, with no covariates included. Allelic association with risk for Alzheimer’s disease was tested using ‘proc logistic’ in SAS, including APOE genotype, age, PCs and study as covariates when available. Odds ratios with 95% confidence intervals and relative risks were calculated for the alternative allele compared to the most common allele using SAS. Association with age at onset (AAO) was tested using the Kaplan–Meier method and a proportional hazards model (proc PHREG, SAS) including gender and study as covariates. Controls without dementia were included in the analyses as censored data. The inclusion of these samples did not change the association. Gene-based analyses were performed using the optimal SNP-set (Sequence) Kernel Association Test (SKAT-O) 25 .
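As a rough illustration of the network construction described above (signed adjacency at soft-threshold power 12, then topological overlap), here is a numpy translation. The R package WGCNA is the canonical implementation; this toy version omits module detection and tree cutting.

```python
import numpy as np

def signed_adjacency(expr, beta=12):
    """expr: samples x genes matrix. Signed-weighted adjacency
    a_ij = ((1 + cor_ij) / 2) ** beta, so negative correlations map to ~0."""
    cor = np.corrcoef(expr, rowvar=False)
    a = ((1 + cor) / 2) ** beta
    np.fill_diagonal(a, 0)
    return a

def topological_overlap(a):
    """TOM_ij = (sum_u a_iu * a_uj + a_ij) / (min(k_i, k_j) + 1 - a_ij)."""
    k = a.sum(axis=1)
    shared = a @ a                        # shared-neighbourhood weight
    tom = (shared + a) / (np.minimum.outer(k, k) + 1 - a)
    np.fill_diagonal(tom, 1)
    return tom

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 30))         # toy data: 100 samples x 30 genes
dist = 1 - topological_overlap(signed_adjacency(expr))
# 'dist' would then feed hierarchical clustering and dynamic tree cutting.
```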
Population attributable risk We calculated the population attributable risk (PAR) using the relative risk obtained in the study and the MAF from the EVS database ( ) and from the Cache-County data set, which is a population-based data set, using the equation PAR = P e (RR e − 1)/(1 + P e (RR e − 1)), where P e is the carrier frequency in the population and RR e is the relative risk for the different variants. Neuropathology studies All study procedures were approved by Washington University’s Human Research Protection Office. At autopsy, brain tissue was obtained from participants according to the protocol of the Knight-ADRC. Alzheimer’s disease neuropathologic change was assessed according to the criteria of the National Institute on Ageing–Alzheimer’s Association (NIA-AA) 40 . Dementia with Lewy bodies was assessed using the criteria given in another paper 41 . Cell-based studies The following plasmids were used in this study: pCMV6-XL5 human PLD3-WT (Origene), pCS2-Myc human APP695 - WT 42 , pCGN- PLD - WT 43 and Lys758Arg 44 , pCGN- PLD2 - WT 45 and Lys898Arg 44 , pGFL-GFP 46 , pGFP-V-RS- PLD3 -shRNA-GI548821 (Origene) and pGFP-V-RS-Scr-shRNA-TR30013 (Origene). A dominant-negative mutation (Lys418Arg) 18 was introduced into the pCMV6-XL5 human PLD3 - WT vector by site-directed mutagenesis using the QuikChange II Site-Directed Mutagenesis kit (Agilent). All constructs were verified by Sanger sequencing. Cell-culture assays Human embryonic kidney (HEK293T) cells were cultured in Dulbecco’s modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 1% l -glutamine and penicillin/streptomycin. HEK293T cells were grown in 6-well lysine-coated plates. Mouse neuroblastoma (N2A) cells stably expressing wild-type human APP695 were cultured in DMEM and Opti-MEM (50:50) supplemented with 5% FBS, 1% l -glutamine, penicillin/streptomycin and 500 µg ml −1 G418. After reaching confluence, cells were transiently transfected with Lipofectamine 2000 (Invitrogen). Culture media were replaced after 24 h, and cells were incubated for another 24 h. Conditioned media were collected, treated with protease inhibitor cocktail and centrifuged at 3,000 g at 4 °C for 10 min to remove cell debris. Cell pellets were extracted on ice in lysis buffer (50 mM Tris, pH 7.6, 2 mM EDTA, 150 mM NaCl, 1% NP40, 0.5% Triton X-100, protease inhibitor cocktail) and centrifuged at 14,000 g . Protein concentration was measured by the bicinchoninic acid (BCA) method as described by the manufacturer (Pierce-Thermo). Real-time PCR and quantitative PCR To confirm effective knockdown of endogenous mouse PLD3 in mouse N2A-695 cells, RNA was extracted from cell lysates with an RNeasy kit (Qiagen) according to the manufacturer’s protocol. Extracted RNA (10 µg) was converted to cDNA by reverse transcription using a High-Capacity cDNA Reverse Transcription kit (ABI). Gene expression was analysed by quantitative PCR (qPCR) using an ABI-7900 Real-Time PCR system (ABI). TaqMan real-time PCR assays were used to quantify expression of mouse PLD3 (Mm01171272_m1; ABI) and GAPDH (Hs02758991_g1; ABI). Samples were run in triplicate. To avoid amplification interference, expression assays were run in separate wells from the housekeeping gene GAPDH . Real-time data were analysed by the comparative Ct method. Average Ct values for each sample were normalized to the average Ct values for the housekeeping gene GAPDH . The resulting value was corrected for assay efficiency. Samples with a standard error of 20% or less were analysed.
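Returning to the population attributable risk formula given at the start of this section, its implementation is essentially a one-liner; the carrier frequency and relative risk below are placeholders, not the paper's estimates.

```python
def population_attributable_risk(p_e, rr_e):
    """PAR = P_e (RR_e - 1) / (1 + P_e (RR_e - 1)), with p_e the carrier
    frequency in the population and rr_e the relative risk of the variant."""
    excess = p_e * (rr_e - 1)
    return excess / (1 + excess)

# Hypothetical values, for illustration only:
print(population_attributable_risk(p_e=0.005, rr_e=2.0))  # ~0.005, i.e. ~0.5%
```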
Immunoblot analysis Standard SDS–PAGE was performed in 4–20% Criterion Tris-HCl gels (Bio-Rad). Samples were boiled for 5 min in Laemmli sample buffer before electrophoresis 47 . Immunoblots were probed with the following antibodies: PLD3 (Sigma), 9E10 (Sigma) and β-tubulin (Sigma). Enzyme-linked immunosorbent assay The levels of Aβ40 and Aβ42 were measured in cell culture media by sandwich ELISA as described by the manufacturer (Invitrogen). ELISA values were obtained (measured in pg ml −1 ) and corrected for total intracellular protein (measured in µg ml −1 ) based on BCA measurements. Immunoprecipitation Cell lysates were pre-cleared by incubation with Protein G beads (Thermo Scientific) to remove proteins prone to binding the beads non-specifically. Pre-cleared supernatants were incubated overnight at 4 °C with the indicated antibodies. Supernatant–antibody complexes were then incubated with Protein G beads at room temperature for 2 h. After washing, proteins were dissociated from the Protein G beads by incubating the beads in Laemmli sample buffer 47 supplemented with 5% β-mercaptoethanol at 95 °C for 10 min. Bioinformatics analysis The SIFT ( ) and PolyPhen ( ) algorithms were used to predict the functional effect of the identified variants. To determine the effect of the Ala442Ala variant on splicing, we used ESEfinder ( ). Multiple sequence alignment was performed with ClustalW2, and the PLD3 orthologues were downloaded from Ensembl ( ). Accession codes Data deposits Exome-sequencing data are available on NIAGADs ( , accession number NG00033). The authors declare competing financial interests; details are available in the online version of the paper.
(Medical Xpress)—A new study, part-funded by the Medical Research Council (MRC), the Wellcome Trust and Alzheimer's Research UK, has shown that a fault in a gene called phospholipase D3 (PLD3) can contribute to the overproduction of amyloid-beta in the brain. Increased levels of this chemical are associated with an increased chance of developing Alzheimer's disease, and the results show that, in certain cases, this can double an individual's risk. An international team of researchers in the UK and the US have been using genome-wide association studies (GWAS) to identify common genetic traits in the population that can influence a person's risk of developing Alzheimer's disease after the age of 60 (known as Late Onset Alzheimer's Disease or LOAD). They then cross-analysed the data from the GWAS studies using a process known as whole-exome sequencing on 14 families with four or more members affected by Alzheimer's. Focussing on families heavily affected by the disease, the team used a new process to identify less common genes that could have the most severe effect. By improving researchers' understanding of how this gene's activity is linked to amyloid-beta production and Alzheimer's disease, this study will open up new avenues of research for drug development and could potentially help identify people who are more vulnerable to the disease. Dr Carlos Cruchaga, the study's lead author at Washington University School of Medicine in St. Louis, said: "We were very excited to be able to identify a gene that contains some of these rare variants. And we were surprised to find that the effect of the gene was so large. After adjusting for other factors that can influence risk for the disease, we found that people with certain gene variants were twice as likely as those who didn't have the variants to develop Alzheimer's." Professor John Hardy, who led the UK work at University College London (UCL) and was funded by the Medical Research Council and the Wellcome Trust, said: "The use of the new technologies of whole-genome and whole-exome sequencing in Alzheimer's disease is now yielding a rich harvest of genetic variants which influence our risks of developing disease. These reinforce the critical role of amyloid deposition and breakdown in the brain as one of the 'main events' that can cause Alzheimer's disease." Rebecca Wood, Chief Executive of Alzheimer's Research UK, the UK's leading dementia research charity, said: "Advances in genetic technology are allowing researchers to understand more than ever about the genetic risk factors for the most common form of Alzheimer's. This announcement, made just off the back of the G8 dementia research summit, is a timely reminder of the progress that can be made by worldwide collaboration. We know that late-onset Alzheimer's is caused by a complex mix of risk factors, including both genetic and lifestyle. Understanding all of these risk factors and how they work together to affect someone's likelihood of developing Alzheimer's is incredibly important for developing interventions to slow the onset of the disease. Alzheimer's Research UK is proud to have contributed to this discovery, both by funding researchers and through the establishment of a DNA collection that has been used in many of the recent genetic discoveries in Alzheimer's."
10.1038/nature12825
Physics
Physics experiment with ultrafast laser pulses produces a previously unseen phase of matter
Light-induced charge density wave in LaTe3, Nature Physics (2019). DOI: 10.1038/s41567-019-0705-3 , nature.com/articles/s41567-019-0705-3 Journal information: Nature Physics
http://dx.doi.org/10.1038/s41567-019-0705-3
https://phys.org/news/2019-11-physics-ultrafast-laser-pulses-previously.html
Abstract When electrons in a solid are excited by light, they can alter the free energy landscape and access phases of matter that are out of reach in thermal equilibrium. This accessibility becomes important in the presence of phase competition, when one state of matter is preferred over another by only a small energy scale that, in principle, is surmountable by the excitation. Here, we study a layered compound, LaTe 3 , where a small lattice anisotropy in the a – c plane results in a unidirectional charge density wave (CDW) along the c axis 1 , 2 . Using ultrafast electron diffraction, we find that, after photoexcitation, the CDW along the c axis is weakened and a different competing CDW along the a axis subsequently emerges. The timescales characterizing the relaxation of this new CDW and the reestablishment of the original CDW are nearly identical, which points towards a strong competition between the two orders. The new density wave represents a transient non-equilibrium phase of matter with no equilibrium counterpart, and this study thus provides a framework for discovering similar states of matter that are ‘trapped’ under equilibrium conditions. Main A major theme in condensed matter physics is the relationship between proximal phases of matter, where one ordered ground state gives way to another as a function of some external parameter such as pressure, magnetic field, doping or disorder. It is in such a neighbourhood that we find colossal magnetoresistance in manganites 3 and unconventional superconductivity in heavy fermion, copper oxide and iron-based compounds 4 . In these materials, the nearby ground states can affect one another in several ways. For example, phases can compete, impeding the formation of one state in place of another. This scenario occurs, for example, in La 2− x Ba x CuO 4 at x = 1/8, where the development of alternating charge and spin-ordered regions prevents the onset of superconductivity 5 , 6 . On the other hand, fluctuations of an adjacent phase can help another be realized, such as in 3 He, where ferromagnetic spin fluctuations enable the atoms to form Cooper pairs and hence a p -wave superfluid 4 , 7 . In more complicated situations, such as in manganites, nanoscale phase separation occurs, where local insulating antiferromagnetism coexists next to patches of metallic ferromagnetism, resulting in large magnetic and electrical responses to small perturbations 3 . In each case, the macroscopic properties of a material are heavily influenced by the nearby presence of different phases. Intense light pulses have recently emerged as a tool to tune between neighbouring broken-symmetry phases of matter 8 , 9 , 10 , 11 , 12 , 13 . Conventionally, light pulses are used to restore symmetry, but in certain cases symmetries can also be broken. For example, exposing SrTiO 3 to mid-infrared radiation has led to ferroelectricity 8 , 9 , while ferromagnetism has been induced in a manganite with near-infrared light 11 . In this Letter, we examine a quasi-two-dimensional (2D) material, LaTe 3 , where a unidirectional charge density wave (CDW) phase is only present along the c axis, with no counterpart along the nearly equivalent, perpendicular a axis. We show that femtosecond light pulses can be used to break translational symmetry and unleash an a -axis CDW. Using ultrafast electron diffraction (UED), we visualize this process and track both order parameters simultaneously, gaining a unique perspective on both orders in the time domain. 
LaTe 3 is a member of the rare-earth tritellurides ( R Te 3 , where R denotes a rare-earth element). These materials possess a layered, quasi-tetragonal structure (Fig. 1a ) with a slight in-plane anisotropy ( a ≥ 0.997 c ; Supplementary Fig. 4b ) 2 , 14 , which leads to a preferred direction for the CDW order along the c axis. The Fermi surface, which is similar for all R Te 3 , arises from the nearly square-shaped Te sheets; the rare-earth atoms, with different radii, effectively serve to apply chemical pressure 1 , 15 . The normal-state Fermi surface is depicted in Fig. 1a , along with the CDW wavevector for LaTe 3 . Depending on the specific rare-earth element in R Te 3 , some of the members display a CDW only along the c direction while others have an additional CDW along the orthogonal a direction (Fig. 1b ). As one moves from lighter to heavier rare-earth elements, the transition temperature of the CDW along the c axis, T c 1 , decreases, while that along the a axis, T c 2 , is first finite in TbTe 3 and increases with atomic number (Fig. 1b ). This relationship strongly suggests that the two CDWs compete in equilibrium. The competition can be understood as follows. Once the c -axis CDW forms, large portions of the Fermi surface open up a gap; the corresponding loss of states near the Fermi energy therefore inhibits the formation of the a -axis CDW 15 . In the material we study here, LaTe 3 , T c 1 is estimated to be ~670 K (ref. 16 ), and a CDW along the a axis does not exist. Fig. 1: Observation of a transient CDW induced by an 80 fs, 800 nm laser pulse. a , Left: Schematic of the LaTe 3 crystal structure, where dashed lines indicate the unit cell. Middle: Schematic of the normal-state Fermi surface arising from the planar Te sheets, which are nearly square-shaped with a slight in-plane anisotropy ( a = 0.997 c at room temperature 2 , 14 ). The wavevector q c of the equilibrium c -axis CDW is indicated. Right: Schematic of the ultrafast electron diffraction set-up in transmission mode. Both 26 keV and 3.1 MeV electrons were used (see Methods and Supplementary Note 1 ). Exfoliated samples were mounted on a 10-nm-thick silicon nitride window. CCD, charge-coupled device. b , Summary of the two CDW transition temperatures, T c 1 and T c 2 , across the rare-earth series. Insets: Schematics of the unidirectional CDW below T c 1 (left) and bidirectional CDW below T c 2 (right), respectively. c , Electron diffraction patterns before (left) and 1.8 ps after (right) photoexcitation with a near-infrared laser pulse, taken at 3.1 MeV electron kinetic energy. Blue and red arrows indicate the equilibrium CDW peaks along the c axis and the light-induced CDW peaks along the a axis, respectively. The right half is a mirror reflection of the left half, so they show the same set of peaks in the diffraction pattern. Due to the transverse nature of atomic displacements, the a -axis CDW peaks are most visible along c * and the opposite is true for the c -axis CDW peaks. A full diffraction image is shown in Supplementary Fig. 1a . a * ≡ (2π/| a |) â and c * ≡ (2π/| c |) ĉ are reciprocal lattice unit vectors. To follow the temporal evolution of the CDW after light excitation, we used transmission ultrafast electron diffraction (Fig. 1a ), which allows us to capture the ( H 0 L ) plane, with ( HKL ) denoting the Miller indices.
In the left panel of Fig. 1c , we show a static diffraction pattern of LaTe 3 taken before the arrival of the pump laser pulse, where satellite peaks (blue arrows) flanking the main Bragg peaks are observed only along the c axis. These peaks are due to the existence of the equilibrium CDW. In the right panel, we show the diffraction pattern 1.8 ps after photoexcitation by an 80 fs, 800 nm (1.55 eV) laser pulse, which creates excitations across the single-particle gap and suppresses the CDW along the c axis. As the equilibrium CDW is weakened, new peaks emerge along the a direction (red arrows) independent of the pump laser polarization, a change that can also be visualized in the differential intensity plot in Fig. 2a . Here, the appearance of a new lattice periodicity along the a axis is clear and we interpret these peaks as signalling the emergence of an out-of-equilibrium CDW (see Supplementary Video). This observation was replicated in four different samples in two separate UED set-ups, using 3.1 MeV and 26 keV electron kinetic energies, respectively (see Methods and Supplementary Note 1 ). Fig. 2: Dynamics of the light-induced CDW. a , Change in intensities with respect to the diffraction pattern before photoexcitation. Snapshots are taken at three selected pump–probe time delays, as indicated by the triangles in b . b , Time evolution of integrated intensities of the equilibrium c -axis CDW peak ( I c ), the transient a -axis CDW peak ( I a ) and thermal diffuse scattering ( I TDS ). Integration areas are marked by dashed circles in c , with corresponding colours. Peaks in multiple crystallographic Brillouin zones are averaged for improved signal-to-noise ratio. I a and I TDS are vertically offset to have their values zeroed before photoexcitation. I c is normalized by its value before photoexcitation. Error bars represent the s.d. of noise for t < 0. c , An enlarged view of the dashed square in a at t = 1.8 ps. Each integration region has a diameter equal to 1.5 times the full-width at half-maximum (FWHM) of the equilibrium CDW peak. The incident pump laser fluence for all panels was 1.3 mJ cm −2 . This non-equilibrium CDW is ephemeral and only lasts for a few picoseconds. In Fig. 2b , we show the temporal evolution of the integrated intensity of the peaks along both the a and c axes. The intensity of the a -axis CDW peak reaches a maximum around 1.8 ps and then relaxes over the next couple of picoseconds to a quasi-equilibrium value. The residual intensity at long time delays is due to laser pulse-induced heating that causes a thermal occupation of phonons, which is shown in the diffuse scattering trace in Fig. 2b and as the overall red background in Fig. 2a,c . The intensity of the c -axis CDW peak shows the opposite behaviour: it first reaches a minimum around 0.5 ps before recovering to a quasi-equilibrium. The initial decay of the c -axis CDW occurs markedly faster than the rise of the transient CDW. This is because suppression of the equilibrium CDW involves a coherent motion of the lattice ions, whose timescale is tied to the period of the 2.2 THz CDW amplitude mode 17 , 18 , 19 . On the other hand, incoherent fluctuations dictate the ordering of the a -axis CDW, which occurs on a slower timescale (Supplementary Note 2 ).
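The analysis behind Fig. 2b, and the relaxation times discussed below, reduces to two steps: integrating the diffraction intensity inside a circular region around each peak, then fitting the delay trace with a single exponential. A schematic Python version, with synthetic data standing in for the measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def integrate_peak(image, cx, cy, fwhm):
    """Sum the intensity inside a circle of diameter 1.5 x FWHM centred
    on a diffraction peak (the dashed circles of Fig. 2c)."""
    r = 0.75 * fwhm
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    return image[(x - cx) ** 2 + (y - cy) ** 2 <= r ** 2].sum()

def recovery(t, i_inf, amp, tau):
    """Single-exponential relaxation toward a quasi-equilibrium value."""
    return i_inf + amp * np.exp(-t / tau)

# Synthetic delay scan standing in for the integrated c-axis peak intensity:
t = np.linspace(0.5, 8.0, 40)                       # pump-probe delay (ps)
rng = np.random.default_rng(0)
i_c = recovery(t, 0.9, -0.6, 1.7) + 0.01 * rng.normal(size=t.size)

(i_inf, amp, tau_c), pcov = curve_fit(recovery, t, i_c, p0=(1.0, -0.5, 1.0))
print(f"tau_c = {tau_c:.2f} +/- {np.sqrt(pcov[2, 2]):.2f} ps")  # 1 s.d.
```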
Despite the disparity in the initial timescales, the relaxation times are nearly identical and the overall intensity changes are perfectly anti-correlated, which suggests that these latter properties are governed by a single underlying mechanism. Figure 2b shows that one CDW forms at the cost of the other and the two recover back to quasi-equilibrium simultaneously, which, for this fluence, takes a characteristic time of τ a ≈ τ c ≈ 1.7 ps. The agreement in trends of both the intensities and the characteristic relaxation times is even more striking when we examine the data at different photoexcitation fluences. As shown in Fig. 3a,b and summarized in Fig. 3c,d , for each fluence, the two CDWs reach anti-correlated extremum values and relax in almost perfect correspondence. Such a strong correlation in both the intensities and the relaxation timescales naturally points towards a phase competition in this non-equilibrium context where the transient CDW cannot exist once the equilibrium CDW recovers. Fig. 3: Dependence of equilibrium and transient CDW peaks on pump laser fluence. a , b , Time evolution of the integrated intensities for the transient a -axis CDW peaks ( a ) and the equilibrium c -axis CDW peaks ( b ). Each colour denotes an incident fluence. Error bars are obtained from the s.d. of noise before photoexcitation. Curves are single-exponential fits to the relaxation dynamics. In b , the intensity I c does not transiently reach zero at high fluence because of background intensities in the diffraction pattern and non-uniform illumination of all layers of the sample due to a shorter pump laser penetration depth (44 nm at 800 nm wavelength) compared to the sample thickness ( ≲ 60 nm) used in this case. c , Left axis: Minimum value of the integrated intensity for the equilibrium c -axis CDW peaks. Right axis: Maximum value of the integrated intensity for the transient a -axis CDW peaks. The saturation at high fluence reflects a complete suppression of the c -axis order in the pumped volume. d , Characteristic relaxation times at different fluences for the recovery of the c -axis CDW peaks ( τ c ) and the disappearance of the a -axis CDW peaks ( τ a ). The increasing trend reflects the longer time taken for topological defects in the c -axis CDW to disappear at higher fluence 17 . Error bars, if larger than the symbol size, denote 1 s.d. in the corresponding single-exponential fits in a , b . On close scrutiny of the transient CDW wavevector, q̃ a , it appears that this CDW is a genuinely non-equilibrium state (we use a tilde to denote the non-equilibrium value). Notably, the wavevector does not resemble values seen in other rare-earth tritellurides that exhibit an equilibrium a -axis CDW 20 , 21 , 22 . The transient CDW has an incommensurate wavevector, q̃ a = 0.291(13), expressed in reciprocal lattice units (red square, Fig. 4a and Supplementary Note 3 ). In contrast, the equilibrium q a values measured in other rare-earth tritellurides are noticeably larger (Fig. 4a ). According to the trend of q a with rare-earth mass, LaTe 3 would exhibit the largest q a . Instead, q̃ a is closer in value to the markedly smaller wavevector of the c -axis CDW, q c . Thus, the observed q̃ a of the transient CDW highlights that it is not a trivial extension of an equilibrium a -axis CDW. Fig. 4: Transient CDW seeded by topological defects. a , Summary of CDW wavevectors across the rare-earth series. r.l.u., reciprocal lattice unit.
Values of q c (blue diamonds) were taken from ref. 14 at the highest temperature below T c 1 . For q a , only TbTe 3 (red circle and ref. 22 ), DyTe 3 (red diamond and ref. 21 ) and ErTe 3 (red triangle and ref. 20 ) were accurately measured by X-ray diffraction (see Supplementary Note 3 for a discussion of the temperature dependence of q c and q a at equilibrium). The white square denotes the calculated wavevector of the soft phonon along the a axis, q a ,soft , which was confirmed by inelastic X-ray scattering at T c 1 (ref. 21 ). Red and orange squares denote the wavevectors of the transient a -axis CDW, probed by time-resolved MeV and keV electron diffraction, respectively. Dashed lines are guides to the eye, highlighting a monotonic trend for both q a and q c across the rare-earth elements. The shaded region represents extrapolated values of q a for light rare-earth elements (La to Gd) if a bidirectional CDW were to form. Error bars, if larger than symbol size, denote reported uncertainty in the literature or, for q̃ a , the s.d. of values obtained in multiple Brillouin zones and diffraction images (see Supplementary Note 3 for details of determining q̃ a ). b , Schematic of CDW order parameter amplitudes, ψ c and ψ a , near a topological defect in the equilibrium c -axis CDW (Supplementary Note 5 ). Characteristic length scales of suppression in ψ c and enhancement in ψ a are labelled λ c and λ a , respectively. r denotes the distance from the vortex centre. c , Schematics of charge density waves in real space before (left) and ~2 ps after (right) photoexcitation. Stripe brightness indicates the strength of the CDW amplitude. A dislocation (black arrow) is used as an example of a topological defect in the c -axis CDW after photoexcitation. We can gain some insight into the origin of the anomalous wavevector from previous inelastic X-ray scattering measurements and density functional theory calculations on DyTe 3 , which is in the same CDW family 21 . In DyTe 3 , when the c -axis CDW develops at 308 K, strong CDW fluctuations are also seen along the a direction in the form of phonon softening, namely, a marked decrease in the phonon frequency. As shown in Fig. 4a , these fluctuations occur at a wavevector q a ,soft (white square), which is comparable in magnitude to that of the c -axis CDW, q c . However, when the a -axis CDW eventually forms at 50 K, it does so at a larger wavevector, q a (so that q a ,soft ≈ q c ≪ q a ). The reason for these relationships among the wavevectors is the following. At high temperature, the Fermi surface has negligible a / c anisotropy ( q a ,soft ≈ q c ); however, when the a -axis CDW forms at low temperature, it does so after the c -axis CDW has already opened a gap at portions of the Fermi surface, which changes the nesting conditions ( q c ≪ q a ) 23 . Returning to LaTe 3 , we observe q̃ a ≈ q c (Supplementary Note 3 ), which suggests that the transient a -axis CDW looks more akin to one that would have formed at high temperature had the c -axis CDW not prevented it from doing so. To explain all of these observations within a consistent framework, we propose a picture where the non-equilibrium CDW arises due to the generation of topological defect/anti-defect pairs in the c -axis CDW through local absorption of high-energy photons (Fig. 4c ) 17 , 24 .
The presence of these defects was recently demonstrated and extensively characterized in LaTe 3 upon photoexcitation 17 , and visualized by scanning tunnelling microscopy in Pd-intercalated ErTe 3 (ref. 25 ). In spatial regions where the dominant c -axis order is suppressed, for example near topological defects, the normal-state Fermi surface is restored (Fig. 1a ). The normal-state Fermi surface is unstable to the formation of a CDW along both the a and c axes 21 . However, at a defect core in LaTe 3 , the instability along the c axis is topologically inhibited, which allows the sub-dominant a -axis phase to form instead (Fig. 4b ) 26 , 27 , 28 . The benefit of this picture is that it can explain several observations that are difficult to capture in other theoretical scenarios (Supplementary Note 4 ). First, the transient CDW forms despite only a partial suppression of the c -axis CDW, as shown in Fig. 3a,b . In equilibrium, we know that any finite c -axis CDW amplitude necessarily forbids the presence of an a -axis CDW in LaTe 3 . The presence of topological defects, however, resolves this apparent puzzle, considering that it allows for the local suppression of the c -axis CDW. This local constraint also accounts for the anomalous wavevector, q̃ a , because the transient CDW nucleates in the absence of the c -axis CDW. Furthermore, the coincidence of relaxation timescales is naturally explained in this scenario: as the defects annihilate, the transient CDW can no longer be sustained and the equilibrium c -axis CDW necessarily recovers 17 . To place the proposed mechanism on a firmer theoretical footing, we performed a Ginzburg–Landau analysis in 2D space involving two complex order parameters, ψ c and ψ a , which denote the equilibrium and transient CDW orders, respectively. As we show in Supplementary Note 5 , in the presence of phase competition, the minimum-energy solution near a defect core in ψ c yields a nonzero ψ a . In addition, we find that the characteristic length scale of the transient CDW, λ a , can extend well beyond the confines of the defect core, λ c (Fig. 4b ). In particular, the ratio λ a / λ c can become large if the normal-state anisotropy between the a and c axes is small, which makes the observation of the transient CDW possible even though the defects may be local. This work provides an illustration of the kind of phenomena that can be observed in far-from-equilibrium systems where phase competition plays a significant role in determining material properties. We expect the mechanism of seeding competing states near topological defects to be general, and that other ordered states of matter will exhibit a similar phenomenology under the influence of photoexcitation. Not only does this result provide a path forward to discovering other states of matter in the presence of phase competition, it also paves the way for the manipulation and control of other ordered phases with light. Methods Sample preparation Single crystals of LaTe 3 were grown by slow cooling of a binary melt 29 . Samples were prepared via mechanical exfoliation down to an average thickness of ≤60 nm, as characterized by atomic force microscopy (AFM) measurements. Thin flakes were transferred to a commercial 10-nm-thick silicon nitride window (SiMPore), which was mounted on a copper sample card for UED measurements. All preparations were performed in an inert gas environment, as R Te 3 compounds are known to degrade in air 29 .
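As a toy numerical check on the Ginzburg–Landau competition argument discussed above, the sketch below minimizes a free energy with two competing order-parameter amplitudes. The parameter values are arbitrary, and the full analysis of Supplementary Note 5 treats complex, spatially varying fields rather than uniform amplitudes.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# F = a_c*pc**2 + a_a*pa**2 + (b/2)*(pc**4 + pa**4) + g*pc**2*pa**2
# a_c < a_a < 0 makes the c-axis order dominant; g > b enforces competition,
# so the two orders exclude one another.
a_c, a_a, b, g = -1.0, -0.9, 1.0, 3.0

def F(psi):
    pc, pa = psi
    return (a_c * pc**2 + a_a * pa**2
            + 0.5 * b * (pc**4 + pa**4) + g * pc**2 * pa**2)

# Bulk: unconstrained minimum, only the dominant c-axis CDW survives.
bulk = minimize(F, x0=[0.8, 0.2], method="Nelder-Mead")
print(np.round(np.abs(bulk.x), 3))        # ~ [1.0, 0.0]

# Defect core: psi_c is pinned to zero, and the a-axis CDW appears.
core = minimize_scalar(lambda pa: F((0.0, pa)), bounds=(0.0, 2.0),
                       method="bounded")
print(round(core.x, 3))                   # ~ 0.949 = sqrt(-a_a / b)
```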
MeV ultrafast electron diffraction The experiments were carried out in the Accelerator Structure Test Area facility at SLAC National Accelerator Laboratory 30 , 31 . The 800 nm (1.55 eV), 80 fs pump pulses from a commercial Ti:sapphire regenerative amplifier (RA) laser (Vitara and Legend Elite HE, Coherent) were focused to an area larger than 500 × 500 μm 2 (FWHM) on the sample at an incidence angle of around 5° from the sample normal. The 3.1 MeV electron bunches were generated by a radiofrequency photoinjector at a repetition rate of 180 Hz. The electron beam was normally incident on the sample with a 90 × 90 μm 2 (FWHM) spot size. The laser and electron pulses were spatially overlapped on the sample, and their relative arrival time was adjusted by a linear translation stage. The diffraction pattern was imaged by a phosphor screen (P-43) and recorded by an electron-multiplying charge-coupled device (Andor iXon Ultra 888). A circular through-hole in the centre of the phosphor screen allowed passage of the undiffracted electron beam to prevent camera saturation. Samples were maintained at 307 K during the measurement. The overall temporal resolution is approximately 300 fs, as determined via electron streaking by an intense, single-cycle terahertz pulse 32 . keV ultrafast electron diffraction The light-induced CDW was reproduced in a separate UED set-up at MIT with a different pump pulse wavelength and probe electron kinetic energy (Supplementary Note 1 ). The set-up adopts a compact geometry 17 . The 1,038 nm (1.19 eV), 190 fs output of a Yb:KGW RA laser system (PHAROS SP-10-600-PP, Light Conversion) was focused to a 500 × 500 μm 2 (FWHM) area on the sample. The electron beam was generated by focusing the fourth harmonic (260 nm, 4.78 eV) onto a gold-coated sapphire photocathode in high vacuum (<4 × 10 −9 torr). Excited photoelectrons were accelerated to 26 keV in a d.c. field and focused onto an Al-coated phosphor screen (P-46) by a magnetic lens, with a 270 × 270 μm 2 (FWHM) beam spot at the sample position. Diffraction patterns were recorded by a commercial intensified charge-coupled device (PI-MAX II, Princeton Instruments). The laser repetition rate used was 10 kHz, and the operating temporal resolution was ~1 ps, as determined from the initial response of the CDW peak intensity 17 . Measurements in this set-up were performed at room temperature. Data availability The data represented in Figs. 1 b, 2 b, 3 and 4a are available with the online version of this paper. All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Adding energy to any material, such as by heating it, almost always makes its structure less orderly. Ice, for example, with its crystalline structure, melts to become liquid water, with no order at all. But in new experiments by physicists at MIT and elsewhere, the opposite happens: When a pattern called a charge density wave in a certain material is hit with a fast laser pulse, a whole new charge density wave is created—a highly ordered state, instead of the expected disorder. The surprising finding could help to reveal unseen properties in materials of all kinds. The discovery is being reported today in the journal Nature Physics, in a paper by MIT professors Nuh Gedik and Pablo Jarillo-Herrero, postdoc Anshul Kogar, graduate student Alfred Zong, and 17 others at MIT, Harvard University, SLAC National Accelerator Laboratory, Stanford University, and Argonne National Laboratory. The experiments made use of a material called lanthanum tritelluride, which naturally forms itself into a layered structure. In this material, a wavelike pattern of electrons in high- and low-density regions forms spontaneously but is confined to a single direction within the material. But when hit with an ultrafast burst of laser light—less than a picosecond long, or under one trillionth of a second—that pattern, called a charge density wave or CDW, is obliterated, and a new CDW, at right angles to the original, pops into existence. This new, perpendicular CDW is something that has never been observed before in this material. It exists for only a flash, disappearing within a few more picoseconds. As it disappears, the original one comes back into view, suggesting that its presence had been somehow suppressed by the new one. Gedik explains that in ordinary materials, the density of electrons within the material is constant throughout their volume, but in certain materials, when they are cooled below some specific temperature, the electrons organize themselves into a CDW with alternating regions of high and low electron density. In lanthanum tritelluride, or LaTe3, the CDW is along one fixed direction within the material. In the other two dimensions, the electron density remains constant, as in ordinary materials. The perpendicular version of the CDW that appears after the burst of laser light has never before been observed in this material, Gedik says. It "just briefly flashes, and then it's gone," Kogar says, to be replaced by the original CDW pattern which immediately pops back into view. Gedik points out that "this is quite unusual. In most cases, when you add energy to a material, you reduce order." "It's as if these two [kinds of CDW] are competing—when one shows up, the other goes away," Kogar says. "I think the really important concept here is phase competition." The idea that two possible states of matter might be in competition and that the dominant mode is suppressing one or more alternative modes is fairly common in quantum materials, the researchers say. This suggests that there may be latent states lurking unseen in many kinds of matter that could be unveiled if a way can be found to suppress the dominant state. That is what seems to be happening in the case of these competing CDW states, which are considered to be analogous to crystal structures because of the predictable, orderly patterns of their subatomic constituents. 
Normally, all stable materials are found in their minimum energy states—that is, of all possible configurations of their atoms and molecules, the material settles into the state that requires the least energy to maintain itself. But for a given chemical structure, there may be other possible configurations the material could potentially have, except that they are suppressed by the dominant, lowest-energy state. "By knocking out that dominant state with light, maybe those other states can be realized," Gedik says. And because the new states appear and disappear so quickly, "you can turn them on and off," which may prove useful for some information processing applications. The possibility that suppressing other phases might reveal entirely new material properties opens up many new areas of research, Kogar says. "The goal is to find phases of material that can only exist out of equilibrium," he says—in other words, states that would never be attainable without a method, such as this system of fast laser pulses, for suppressing the dominant phase. Gedik adds that "normally, to change the phase of a material you try chemical changes, or pressure, or magnetic fields. In this work, we are using light to make these changes." The new findings may help to better understand the role of phase competition in other systems. This in turn can help to answer questions like why superconductivity occurs in some materials at relatively high temperatures, and may help in the quest to discover even higher-temperature superconductors. Gedik says, "What if all you need to do is shine light on a material, and this new state comes into being?" The work was supported by the U.S. Department of Energy, SLAC National Accelerator Laboratory, the Skoltech-MIT NGP Program, the Center for Excitonics, and the Gordon and Betty Moore Foundation.
10.1038/s41567-019-0705-3
Medicine
Simple test to detect diabetes risk after pregnancy
M. Köhler et al. Development of a simple tool to predict the risk of postpartum diabetes in women with gestational diabetes mellitus, Acta Diabetologica (2015). DOI: 10.1007/s00592-015-0814-0
http://dx.doi.org/10.1007/s00592-015-0814-0
https://medicalxpress.com/news/2015-10-simple-diabetes-pregnancy.html
Abstract Aims Women with gestational diabetes mellitus (GDM) have an increased risk of diabetes postpartum. We developed a score to predict the long-term risk of postpartum diabetes using clinical and anamnestic variables recorded during or shortly after delivery. Methods Data from 257 GDM women who were prospectively followed for diabetes outcome over 20 years of follow-up were used to develop and validate the risk score. Participants were divided into training and test sets. The risk score was calculated using Lasso Cox regression and divided into four risk categories, and its prediction performance was assessed in the test set. Results Postpartum diabetes developed in 110 women. The computed training set risk score of 5 × body mass index in early pregnancy (per kg/m 2 ) + 132 if GDM was treated with insulin (otherwise 0) + 44 if the woman had a family history of diabetes (otherwise 0) − 35 if the woman lactated (otherwise 0) had R 2 values of 0.23, 0.25, and 0.33 at 5, 10, and 15 years postpartum, respectively, and a C-Index of 0.75. Application of the risk score in the test set resulted in observed risk of postpartum diabetes at 5 years of 11 % for low risk scores ≤140, 29 % for scores 141–220, 64 % for scores 221–300, and 80 % for scores >300. Conclusions The derived risk score is easy to calculate, allows accurate prediction of GDM-related postpartum diabetes, and may thus be a useful prediction tool for clinicians and general practitioners. Introduction Gestational diabetes mellitus (GDM) affects 2–6 % of all pregnancies in industrialized countries [ 1 ] and is associated with an increased risk of diabetes postpartum [ 2 , 3 ]. We have recently reported that lactation provides long-term protection against the development of postpartum diabetes in women with GDM, while insulin treatment during pregnancy and maternal obesity are associated with increased risk [ 4 ]. These findings indicate that further stratification of GDM women according to their actual postpartum diabetes risk is possible already shortly after delivery. We aimed to develop an easily applicable tool for diabetes risk stratification in GDM women, which may be helpful in clinical settings, particularly in the context of targeted and more efficient programs to detect and prevent the development of postpartum diabetes. For this purpose, we analyzed our data with a novel approach, using advanced statistical methods to obtain a risk score based on clinical and anamnestic variables that could improve long-term prediction of postpartum diabetes in women with GDM. Methods Participants The prospective German GDM study followed women with GDM from delivery for up to 20 years to detect the development of postpartum diabetes. A detailed description of the study design has been previously reported [ 4 – 6 ]. In brief, a total of 304 patients with GDM were recruited between 1989 and 1999 across Germany and followed up until 2014. Patients were followed for the development of diabetes postpartum by means of an OGTT at 2 and 9 months; 2, 5, 8, 11, 15, and 19 years after pregnancy; or until the diagnosis of diabetes. OGTTs were performed by the patient's physician. In addition, if women presented with symptoms of diabetes between follow-up visits, physicians took blood glucose measurements to test for clinical diabetes.
Postpartum diabetes onset was defined according to American Diabetes Association criteria, which included unequivocal hyperglycemia with acute metabolic decompensation, the observation of a 2-h plasma glucose level >200 mg/dL after an oral glucose challenge, or a random blood glucose level >200 mg/dL if accompanied by unequivocal symptoms. Since 1997, a fasting blood glucose level >126 mg/dL on two occasions was also included as a diagnostic criterion for diabetes in the study. Postpartum diabetes developed in 147 women (48.4 %). The median postpartum follow-up time from delivery was 8.1 years for women who did not develop diabetes. Statistical analysis To develop the risk score, we made use of the following variables: the mother’s body mass index (BMI) in early pregnancy, GDM treatment (insulin or diet), family history of diabetes (yes vs. no), lactation duration (never, ≤3, or >3 months, coded as two dummy variables), and maternal age at delivery. These variables were collected by questionnaires or interviews and were chosen because they were easily ascertainable anamnestic information and had been previously associated with postpartum diabetes risk [ 4 , 7 ]. Of the 304 women with GDM, 257 were islet autoantibody negative and had complete data for the selected variables of interest; those 257 women were used to develop and validate the risk score (suppl. table S1). Of these women, 110 developed postpartum diabetes. In order to derive and test the prediction model in two independent data sets, the data were split into a training set (2/3; n = 171 women) and a test set (1/3; n = 86 women) using stratified random sampling. In the training set, the Lasso method for Cox regression [ 8 ] was used to reduce the final model to the most important variables for predicting postpartum diabetes. We used a bootstrap approach to assess the robustness of the selection process with regard to the split into training and test sets. To this end, we created 1000 different training sets of the same sample size (171 each) by sampling with replacement from the full data and assessed how often each risk factor was selected as a relevant predictor of diabetes development in those 1000 bootstrap data sets (Table 1 ). The risk score was calculated based on the regression coefficients of the variables chosen for the final model, rounded to two decimal places and multiplied by a factor of 100. Table 1 Description and modeling results of risk factors for postpartum development of diabetes after gestational diabetes The predictive performance of the risk score in the test set was assessed with three measures: R 2 at different times after delivery, to assess the overall model performance [ 9 ]; the C-Index, which is analogous to the area under the curve of logistic regression models, to assess discrimination [ 10 ]; and the calibration slope, which should be around 1 for a well-calibrated score and reflects the similarity of the absolute values of the predicted risk and the observed risk over the whole time period [ 11 ]. The risk score was divided into four equally spaced intervals, and the predicted risk estimates of postpartum diabetes development in each category in the training set were compared with the respective observed diabetes rates in the test set. All analyses were done using R version 3.0.2 [ 12 ].
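A schematic version of the bootstrap stability check is given below. The paper's analysis was done in R; this sketch uses the Python lifelines package for an L1-penalized Cox fit, and the column names, penalty strength and data layout are all assumptions, not the authors' settings.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def selection_frequencies(df, n_boot=1000, penalizer=0.1, seed=0):
    """df: one row per woman, covariate columns plus 'time' and 'diabetes'.
    Returns the fraction of bootstrap fits in which each covariate
    receives a nonzero Lasso coefficient."""
    rng = np.random.default_rng(seed)
    covariates = [c for c in df.columns if c not in ("time", "diabetes")]
    counts = pd.Series(0.0, index=covariates)
    for _ in range(n_boot):
        boot = df.sample(n=len(df), replace=True,
                         random_state=int(rng.integers(1 << 31)))
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure L1 penalty
        cph.fit(boot, duration_col="time", event_col="diabetes")
        counts += (cph.params_.abs() > 1e-8).astype(float)
    return counts / n_boot
```

Covariates selected in the large majority of the 1,000 resampled fits would, as in Table 1, be judged robust to the particular training/test split.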
Results In the training set, insulin treatment during pregnancy, family history of diabetes, BMI in early pregnancy, and lactation were selected as predictors of postpartum diabetes by the Lasso method (Table 1 ). As the same variables were also chosen in the majority of the 1000 bootstrap data sets, we were confident that our variable selection was robust to the split into training and test sets. The risk score was derived from their regression weights as follows: Risk score = 5 × BMI in early pregnancy (per kg/m 2 ) + 132 if GDM was treated with insulin (otherwise 0) + 44 if the woman had a family history of diabetes (otherwise 0) − 35 if the woman lactated (otherwise 0). The risk score ranged from 54 to 380 in the training set (Fig. 1 ). The risk score showed a prediction performance in the test set with R 2 values of 0.23, 0.26, and 0.33 at 5, 10, and 15 years, a stable C-Index of 0.75, 0.75, and 0.76, respectively, and a calibration slope of 1.13. Fig. 1 Histogram of the observed values of the risk score within the training set, including cutoff values for low, medium, high, and very high risk The score was divided into four equally spaced risk categories corresponding to values of ≤140 (low), 141–220 (medium), 221–300 (high), and >300 (very high risk). The predicted risk estimates in the training set at 5 years postpartum were 13, 31, 60, and 90 % for the mid-interval score of the four categories. These corresponded well with the observed risks in the test set of 11, 29, 64, and 80 %, respectively (Table 2 ). Additionally, all observed risks in the test set fell within the confidence limits of the predicted risks (Fig. 2 ). Table 2 Predicted risk and observed rates of diabetes at 5, 10, and 15 years postpartum by risk category according to risk score cut-off values Fig. 2 Predicted (training set) and observed (test set) survival curves for the four risk categories with 95 % confidence interval for each predicted survival curve Conclusions In the present study, we developed a simple risk score to predict the long-term risk of postpartum diabetes in women with GDM. The risk score may be attractive for clinicians and general practitioners because it allows them to make long-term predictions of the risk of developing postpartum diabetes based on anamnestic variables that are generally recorded during pregnancy or shortly after delivery. We identified four meaningful risk categories, which may allow the scoring system to be applied in clinical settings. All of the variables included in the final risk score calculation—insulin treatment during pregnancy, family history of diabetes, BMI in early pregnancy, and lactation—have been shown to predict the risk of postpartum diabetes in earlier studies [ 7 , 13 – 16 ], supporting the plausibility of this model. The strengths of the present article lie in the unique prospective data and long-term follow-up used to generate the models. Further, we used an established variable selection method, assessed the stability of the variable selection through bootstrapping, and tested the prediction performance of the risk score in an independent test set. The latter was necessary because we were unsuccessful in our attempts to include GDM cohorts from other study groups to validate the score.
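The published score and its categories are simple enough to transcribe directly; the coefficients and cut-offs below come from the text, while the function names and example values are ours.

```python
def gdm_risk_score(bmi, insulin_treated, family_history, lactated):
    """Risk score = 5 x BMI in early pregnancy (kg/m^2) + 132 if GDM was
    insulin-treated + 44 if family history of diabetes - 35 if she lactated."""
    return 5 * bmi + 132 * insulin_treated + 44 * family_history - 35 * lactated

def risk_category(score):
    if score <= 140:
        return "low"        # ~11% observed 5-year diabetes risk (test set)
    if score <= 220:
        return "medium"     # ~29%
    if score <= 300:
        return "high"       # ~64%
    return "very high"      # ~80%

s = gdm_risk_score(bmi=32, insulin_treated=True, family_history=True,
                   lactated=False)
print(s, risk_category(s))  # 336 very high
```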
Established measures of prediction performance, including overall model fit, discrimination, and calibration, suggested that the final risk score performed well, comparably to risk scores used in other medical domains [17]. For example, the algorithm developed by Hippisley-Cox and Coupland to predict the risk of venous thromboembolism based on age, BMI, smoking status, history of several cardiovascular diseases, and recent hospital admission had R² values of 0.33 and 0.34 in women and men, respectively, and an area under the curve of 0.75 for both sexes. Furthermore, the observed rates of postpartum diabetes in the test set were very close to the predicted risks in the training set for each risk category, indicating high validity. Other diabetes-related risk scores have been reported in the literature. Elevations in at least four of six cardiometabolic risk factors (BMI, fasting glucose, insulin, triglycerides, high-density lipoprotein cholesterol, and systolic blood pressure) were found to define a high-risk group for postpartum diabetes 10 years after delivery in 150 Australian women with GDM, using a cluster-analysis-based approach [18]. However, the data in that study were not divided into training and test sets, so the prediction performance of the risk stratification could not be assessed and overfitting might be an issue. In another study, a weighted genetic risk score based on 48 variants was found to slightly improve prediction of postpartum diabetes development within approximately 4 years after delivery in 395 Korean women with a history of GDM, when added to a complex model based on age, family history of diabetes, prepregnancy BMI, blood pressure, fasting glucose, and fasting insulin levels [15]. Unfortunately, our data did not contain metabolic or genetic measurements, which are useful for diabetes prediction in general [19–22]. However, with a C-Index of 0.75 for diabetes development after 5 years, the prediction performance of our risk score was almost identical to that of the Korean study, where C-Indices of 0.77 and 0.74 were observed with and without genetic markers, respectively. Furthermore, measuring metabolic and particularly genetic markers requires additional time and expense, whereas our risk score has the advantage of using only variables that can easily be obtained shortly after delivery. The variables in our risk score are likely to reflect underlying glycemic parameters [23], and two of them (BMI and insulin treatment) have already been suggested for identifying high-risk groups suitable for targeted intervention measures [24]. Analyses of a large prospective data set suggested that, in addition to weight and family history of diabetes, a number of further easily obtainable factors such as waist circumference, hypertension, or short stature might be relevant predictors of type 2 diabetes [25]. However, these results were based on data from middle-aged adults (aged 45–64 years) and may therefore not be directly transferable to women who recently had a GDM pregnancy, particularly as insulin treatment during pregnancy was found to be the most important predictor of diabetes development in these women. The generalization of our findings to populations of different (i.e., non-Caucasian) ethnic backgrounds requires verification.
Furthermore, the criteria for GDM diagnosis have changed since our patients were recruited, and the cohort is also likely to contain cases that would now be diagnosed as diabetes with first onset during pregnancy rather than GDM according to the current World Health Organization guidelines [1]. Because blood glucose levels at GDM diagnosis were not available, we were unable to reclassify women according to the current WHO criteria. In conclusion, this risk score may be an important contribution to the prediction of GDM-related postpartum diabetes, allowing practitioners to easily estimate the diabetes risk of women with prior GDM and tailor their follow-up examinations accordingly, and allowing clinicians to identify high-risk women for targeted prevention studies.
Gestational diabetes is one of the most common conditions that can occur during pregnancy. Although the symptoms generally disappear after delivery, women suffering from gestational diabetes are at increased risk of developing postpartum diabetes in the following years. Researchers at the Helmholtz Zentrum München have now developed an accurate method of predicting the probability of developing this progressive disease following childbirth. Their findings were published recently in Acta Diabetologica. For their study, the scientists from the Institute of Diabetes Research (IDF), Helmholtz Zentrum München, one of the partners of the German Center for Diabetes Research (DZD), collected data from 257 cases of gestational diabetes (a type of diabetes that affects women during pregnancy) that occurred between 1989 and 1999 and were followed up for a period of 20 years after delivery. One hundred and ten of the women observed during this period developed postpartum diabetes. In order to predict in which mothers the disease would manifest itself after delivery, the team headed by Prof. Anette-Gabriele Ziegler, Director of the Institute of Diabetes Research, tested various parameters that are known to play a significant role in the genesis of the disease. Personal risk is easy to calculate "Body mass index (BMI) and genetic predisposition both play a role in our calculation, as does the question of whether the mother breastfed her baby and whether her gestational diabetes had to be treated with insulin," explains Meike Köhler, first author of the study. On the basis of these parameters, the researchers introduced a point system that enables them to predict a woman's likelihood of developing postpartum diabetes. For low-risk scores, the probability of developing diabetes within five years after delivery was only about eleven percent; in the medium- and high-risk categories it was about 29 and 64 percent, respectively, while for the highest-risk scores it was more than 80 percent. "The test we developed is very easy to apply and in the future could be used in hospitals as a tool for predicting postpartum diabetes," Prof. Ziegler added. "This means that both the doctor and the patient are aware of the respective risk, and it allows diabetes checks to be more closely tailored to the patient's individual needs."
10.1007/s00592-015-0814-0
Physics
Quantum kisses change the color of nothing
The paper 'Capturing the Quantum Regime in Tunneling Plasmonics' will be published in the 07 November edition of Nature. doi:10.1038/nature11653 Journal information: Nature
http://dx.doi.org/10.1038/nature11653
https://phys.org/news/2012-11-quantum.html
Abstract When two metal nanostructures are placed nanometres apart, their optically driven free electrons couple electrically across the gap. The resulting plasmons have enhanced optical fields of a specific colour tightly confined inside the gap. Many emerging nanophotonic technologies depend on the careful control of this plasmonic coupling, including optical nanoantennas for high-sensitivity chemical and biological sensors 1 , nanoscale control of active devices 2 , 3 , 4 , and improved photovoltaic devices 5 . But for subnanometre gaps, coherent quantum tunnelling becomes possible and the system enters a regime of extreme non-locality in which previous classical treatments 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 fail. Electron correlations across the gap that are driven by quantum tunnelling require a new description of non-local transport, which is crucial in nanoscale optoelectronics and single-molecule electronics. Here, by simultaneously measuring both the electrical and optical properties of two gold nanostructures with controllable subnanometre separation, we reveal the quantum regime of tunnelling plasmonics in unprecedented detail. All observed phenomena are in good agreement with recent quantum-based models of plasmonic systems 15 , which eliminate the singularities predicted by classical theories. These findings imply that tunnelling establishes a quantum limit for plasmonic field confinement of about 10⁻⁸ λ³ for visible light (of wavelength λ). Our work thus prompts new theoretical and experimental investigations into quantum-domain plasmonic systems, and will affect the future of nanoplasmonic device engineering and nanoscale photochemistry. Main Subwavelength metallic structures can concentrate light into nanoscale dimensions well below the diffraction limit 16 owing to reduced field penetration through a dense electron sea. Nanocavities formed inside a nanogap control the coupling of localized plasmons, thus allowing cavity-tuning 6 , 7 , 8 targeted to desirable applications that exploit the enhanced optical fields. But as the cavity gaps shrink to atomic length scales, quantum effects emerge and standard classical approaches to describing the optics of these systems fail. One effect of confining electronic wavefunctions to small metallic nanoparticles is to slightly modify the screening that tunes the plasmons 17 . However, tunnelling plasmonics in the quantum regime has more profound effects that cannot be explained through hydrodynamic models that account for quantum effects through smearing of the electronic localization 18 . Recent theories show that quantum tunnelling across the cavity strongly modifies the optical response 19 , 20 , but computational limits restrict these quantum simulations to very small systems below a few nanometres in size. Furthermore, the extreme difficulty of creating and probing subnanometre cavities has limited experimental investigations of plasmonics in the quantum regime. Top-down and self-assembly nanofabrication achieves gaps as small as 0.5 nm between plasmonic nanoparticles 21 , 22 , but these fail to reach the quantum tunnelling regime. Small cavities are also accessed in scanning tunnelling microscopes and electro-migrated break-junctions, which show optical mixing, emission and rectification phenomena 23 , 24 , 25 , 26 , but the effect of quantum tunnelling on optical plasmon coupling across subnanometre cavities remains unexplored.
Here we present broadband optical spectra that probe dynamically controlled plasmonic cavities and reveal the onset of quantum-tunnelling-induced plasmonics in the subnanometre regime. Two gold-nanoparticle-terminated atomic force microscope (AFM) tips are oriented tip-to-tip (Fig. 1a). The tip apices define a cavity supporting plasmonic resonances created via strong coupling between localized plasmons on each tip 7 , 27 . This dual AFM tip configuration provides multiple advantages. First, independent nanometre-precision movement of both tips is possible with three-axis piezoelectric stages. Second, conductive AFM probes provide direct electrical connection to the tips, enabling simultaneous optical and electrical measurements. Third, the tips are in free space and illuminated from the side in a dark-field configuration (Fig. 1b, c and Supplementary Information). This arrangement provides (for the first time, to our knowledge) background-free broadband spectroscopic characterization of the tip–tip plasmonic nanocavity throughout the subnanometre regime. A supercontinuum laser (with polarization parallel to the tip–tip axis) is used as an excitation source, providing high-brightness illumination over a wide wavelength range (450–1,700 nm) and reducing integration times to a few milliseconds. The tips are aligned using a recently developed electrostatic-force technique 28 . The inter-tip separation d is initially set to 50 nm and then reduced while dark-field scattering spectra and direct currents are recorded simultaneously. Figure 1: Formation and characterization of a nanoscale plasmonic cavity. a , Scheme for simultaneous optical and electrical measurements of the plasmonic cavity formed between two Au-coated tips, shown in dark-field microscope images ( b ) and a false-colour scanning electron microscope image ( c ) of a typical tip, end radius R = 150 nm. d , Measured dark-field scattering spectra from one inter-tip cavity at different cavity widths d . Plasmonic cavity resonances are labelled A–C. Each piezoelectric scan investigates three interaction regimes: capacitive near-field coupling (50 nm > d > 1 nm), the quantum regime (1 nm > d > 0 nm) and physical contact with conductive coupling ( d ≤ 0 nm). Crucially, this set-up allows us to resolve the gradual transition between each regime dynamically. The measured dark-field scattering spectra (Fig. 1d) within the capacitive regime ( d ≈ 40 nm) show a single plasmonic scattering peak centred near λ = 750 nm (mode A). As d is reduced, this peak redshifts owing to increasing near-field interactions between the tip plasmons. As the cavity shrinks below 20 nm, a second scattering peak emerges at shorter wavelengths (mode B, λ = 550 nm) and quickly increases in intensity. Modes A and B smoothly redshift until an estimated separation d = 5 nm, whereupon attractive inter-tip forces overwhelm the AFM-cantilever restoring force and snap the tips into close proximity 29 . However, no current flow is detectable, because this snap-to-contact point does not coincide with conductive contact and the metal surfaces remain separated. Snap-to-contact reduces d to ∼1 nm, significantly increasing the plasmonic interaction and dramatically changing the plasmonic scattering resonances (blue curve, Fig. 1d). This increased coupling further redshifts modes A and B and reveals a new higher-order resonance (mode C) at λ = 540 nm.
After snap-to-contact, increased piezoelectric displacement applies an additional compressive force (0.1 nN per nm of displacement), pushing the tips into closer proximity. After 11.4 nN of force in this run, current flow is detected through the tips, indicating metal-to-metal surface contact. Detailed calculations confirm that the coupled plasmonic modes observed are tightly confined in the nanocavity (see below, and Supplementary Information). Simultaneously monitoring the optical and electrical properties during approach reveals in detail the plasmon evolution through the subnanometre regime (Fig. 2a–c). As the applied force increases and d is reduced, all three modes redshift; modes A and B weaken while mode C intensifies. The line widths of modes A and B decrease in concert with this reduced scattering strength, while mode C broadens owing to increased scattering loss. These spectral changes are well reproduced by simulations that include quantum tunnelling (Fig. 2d). The calculations employ the quantum-corrected model (QCM), a new approach derived from time-dependent density-functional theory that allows incorporation of quantum coherent electron tunnelling into a classical electromagnetic framework to treat large plasmonic systems (Supplementary Information) 15 . For d ≳ 0.4 nm, plasmon interactions are consistent with the classical picture (Fig. 2e), accounting for rapidly increasing redshifts as d decreases and a transfer of oscillator strength from modes A and B to mode C, as observed. Although these higher-order coupled modes have been predicted theoretically 7 , 8 , they are clearly revealed here dynamically on approach. Figure 2: Onset of quantum tunnelling in sub-nm plasmonic cavities. a , b , Simultaneously measured electrical conductance ( a ) and dark-field optical back-scattering ( b ) with increasing force applied to the inter-tip cavity after snap-to-contact. Conductive contact (CC) indicates d = 0, with onset of the quantum regime at d QR . Lines track peak positions. c , Selected experimental spectra from the last 1 nm to contact in b , shown vertically shifted. d , Theoretical total scattering intensity from a tip–tip system incorporating quantum mechanical tunnelling. The threshold (at d QR ) indicates where quantum-tunnelling-induced charge screening overcomes the near-field capacitive interaction between plasmons. e , Theoretical scattering intensity as in d but using purely classical calculations. At an applied force greater than 8 nN, a new regime is seen that deviates strongly from the classical predictions; modes A and B now shift back to shorter wavelengths instead of redshifting divergently. This crossover is clearly seen in the QCM simulations at d ≈ 0.31 nm (Fig. 2d). The quantum and classical predictions diverge at this crossover point ( d QR ) because the plasmon interactions enter the quantum regime (QR) when d is sufficiently small to support a critical electron tunnelling rate between the surfaces. Although electron confinement within each tip is minimally affected at this separation, the quantum tunnelling here dramatically modifies the correlations between electronic fluxes 15 . The net result is that quantum-tunnelling charge transfer screens the localized plasmon surface charge, decreasing the enhanced fields and reducing plasmonic coupling. For d < d QR , this quantum tunnelling increases exponentially and quickly dominates, creating charge-transfer plasmon modes that blueshift towards d = 0.
The redshift-to-blueshift crossover corresponds to the threshold at which quantum-tunnelling charge reduction starts balancing the near-field capacitive coupling. We can estimate d QR roughly by considering charge tunnelling between the surfaces over an optical half-cycle. In the simplest model of a rectangular barrier, the critical gap marking the onset of the quantum regime is reached when half the plasmon charge is transferred across the junction within that half-cycle; the resulting estimate depends only on q , the semiclassical electron tunnelling wavenumber, λ , the optical plasmon wavelength, and the fine structure constant α ≈ 1/137 (the explicit expression is given in the Supplementary Information). At λ = 850 nm, this zeroth-order calculation gives a subnanometre critical gap. Full calculations show that at d QR ≈ 0.31 nm sufficient screening has already developed via the quantum transport to overcome the increasing charge build-up, consistent with this estimate. Realistically including the coherent quantum transport strongly enhances the tunnelling rate compared to the square-barrier estimate used above, increasing the distance at which tunnelling plasmonics takes over and dictating the emergence of quantum-tunnelling charge-transfer plasmons. After conductive contact ( d = 0), when the conductance G first jumps above G₀ = 2e²/h (here e is the charge on the electron), two scattering peaks are observed: modes D (800 nm) and E (640 nm). Tracking modes A, B and C through the quantum regime as d → 0 shows the gradual nature of this contact transition, in marked contrast to the singular transition predicted classically (Fig. 2e), where a dense continuum of modes builds up in the touching limit 7 , 8 . At d QR , mode A weakens sharply before a new peak appears, which blueshifts and intensifies towards conductive contact. While mode B weakens at d QR , mode C strengthens as predicted by the QCM. In both theory and experiment, modes B and C are replaced by mode E at contact. Spectra and conductance change minimally with increasing contact force after conductive contact, indicating a stable final contact. Experiments on a variety of tips show repeatable crossover behaviour at d < d QR (see Supplementary Information), which forms an optical fingerprint of the new tunnelling plasmonics regime. To understand plasmon evolution through the quantum regime, the cavity field distribution is calculated within the QCM accounting for quantum effects (Fig. 3). This allows us to construct a model of tunnelling plasmonics (Fig. 3a). For d > d QR (I in Fig. 3a), spectra are dominated by the near-field interaction of the cavity-localized surface charges, and plasmons couple according to classical models. Once d ≈ d QR , the system enters the quantum regime (II in Fig. 3a) and tunnelling opens a conductance channel between the surfaces, modifying the plasmon charge distribution, screening the electric field and reducing the interaction strength. The tunnelling (which is strongly concentrated across the thinnest barrier region) pinches off the field distribution in the centre of the gap, systematically separating the single field lobe into two lobes in the crevices on either side of the neck (Fig. 3b). As d is reduced further (III in Fig. 3a, b), these quantum-tunnelling charge-transfer modes concentrated around the contact crevices blueshift as their apices become blunter.
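To put rough numbers on the tunnelling scale invoked above, here is a back-of-envelope estimate of ours (not taken from the paper), assuming the barrier height is set by the work function of gold, Φ ≈ 5 eV:

\[ q = \frac{\sqrt{2 m_e \Phi}}{\hbar} \approx 1.1 \times 10^{10}\ \mathrm{m^{-1}} \approx 11\ \mathrm{nm^{-1}}, \]

so the transmission through a rectangular barrier, which scales as \( e^{-2qd} \), changes by

\[ e^{2q \times 0.1\,\mathrm{nm}} \approx e^{2.2} \approx 9 \]

for every ångström change in gap width. This exponential sensitivity is one way to see why the crossover at d QR appears so sharp in both the experiment and the QCM simulations.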
Around d QR , the mode strengths of A and B reach their weakest values because the tunnelling increases sufficiently to screen the charges across the gap, but cannot provide sufficient current to drive the charge-transfer modes into the crevices. Hence at this point the total separated charge localized to the contact region decreases, reducing the optical cross-section. Figure 3: Evolution of the plasmonic modes through the quantum regime and the quantum limit of plasmonic confinement. a , Plasmonic interactions within the three regimes accessed in experiment. b , Near-field distributions for modes B → E from the QCM theory in each regime. Images are of a 40 nm by 5 nm region, same intensity scale. c , The lateral confinement width w of each mode, extracted from the simulated near-field distribution, as the cavity width d is reduced. The dashed line marks the classical approximation w ≈ √( Rd ). The onset of quantum tunnelling effects at d = d QR = 0.31 nm sets a quantum limit ( w QL ) on mode confinement in subnanometre plasmonic cavities. The onset of quantum tunnelling fundamentally limits optical field confinement in plasmonic nanocavities. The plasmonic surface charge between two spherical surfaces of curvature R is confined laterally to a width w ≈ √( Rd ) (ref. 7), as confirmed by our simulations (Fig. 3c). However, quantum tunnelling limits w to a minimum width w QL ≈ √( Rd QR ). Further reducing d rapidly increases the mode spatial width, as the surfaces are quantum-mechanically blurred at this microscopic scale. The tunnelling plasmonics regime thus represents the quantum limit of compression of light that is plasmonically squeezed into a nanogap, as verified in our experiments and QCM calculations. The quantum-limited mode volume can be approximated as V min ∼ w QL ² d QR , which is estimated to be V min ≈ 1.7 × 10⁻⁸ λ³ in our experiments at λ = 850 nm (mode A at d QR ); a back-of-envelope check of these numbers follows the Methods Summary below. As this limit for plasmonics is six orders of magnitude smaller than the tightest field confinement observed in photonic crystal cavities, it still offers unprecedented opportunities for directly visualizing atomic-scale and molecular processes with electronvolt-scale photons. These experimental and theoretical investigations of plasmonic interactions in subnanometre metal cavities demonstrate that quantum mechanics is already important at the 0.3-nm scale at which tunnelling plasmonics starts to dominate. This understanding is crucial for describing light–matter interactions down to the atomic scale. Stabilizing single-atom contacts or wires will give direct plasmonic access to the quantum transport regime around 1 G₀ (ref. 30). Our work opens up new prospects, such as directing and controlling the chemistry of single molecules within nanogaps (for example, for enhanced photocatalysis), exploiting single-molecule plasmon-assisted transport across nanogaps (for example, for single-molecule electronics), investigating extreme nonlinear interactions and accessing photo-electrochemistry on the subzeptolitre scale. Methods Summary Experimental Au-coated, ball-type AFM tips (150 nm radius of curvature) were obtained from Nanotools and used as received. Tips were mounted on separate three-axis piezoelectric actuation stages, axially aligned tip-to-tip at long range ( d > 50 nm) using a nonlinear electrostatic-force technique 28 , then brought together in 1-nm steps. Plasmons in the nanoscale cavity formed between the tip surfaces were excited with a supercontinuum laser tightly focused by a 0.9 NA, 100× objective in a dark-field configuration.
Scattered light was collected with the same objective and spatially filtered with a confocal pinhole to suppress background scattering. Spectral content of the scattered light was measured using a spectrometer with 3-ms integration time. Simultaneous electrical conductance measurements were taken by applying a submillivolt d.c. bias between the tips and measuring the resulting current. Theoretical Far-field scattering spectra and local-field distributions were calculated while reducing the inter-tip separation to identify the optical signature of tunnelling plasmonics and understand plasmon mode evolution through the quantum regime. Electron tunnelling effects were incorporated into a classical electrodynamics simulation using the QCM approach 15 to account for the quantum-mechanical tunnelling between particles. The tunnelling gap is described by a quantum-corrected dielectric function constructed from the full quantum-mechanical jellium calculation of the static gap conductivity. The optical response of the structure was then solved using a boundary-element method. Corresponding purely classical simulations were also performed.
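As a quick consistency check of the confinement limit quoted above (our arithmetic, using the √( Rd ) scaling reconstructed in the text together with the quoted values R = 150 nm, d QR = 0.31 nm and λ = 850 nm):

\[ w_{\mathrm{QL}} \approx \sqrt{R\, d_{\mathrm{QR}}} = \sqrt{150 \times 0.31}\ \mathrm{nm} \approx 6.8\ \mathrm{nm}, \]

\[ V_{\min} \sim w_{\mathrm{QL}}^{2}\, d_{\mathrm{QR}} \approx 14\ \mathrm{nm^{3}} \approx 2 \times 10^{-8}\,\lambda^{3}, \]

which reproduces the quoted V min ≈ 1.7 × 10⁻⁸ λ³ to within a geometric factor of order unity.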
Even empty gaps have a colour. Now scientists have shown that quantum jumps of electrons can change the colour of gaps between nano-sized balls of gold. The new results, published today in the journal Nature, set a fundamental quantum limit on how tightly light can be trapped. The team from the Universities of Cambridge, the Basque Country and Paris have combined tour de force experiments with advanced theories to show how light interacts with matter at nanometre sizes. The work shows how they can literally see quantum mechanics in action in air at room temperature. Because electrons in a metal move easily, shining light onto a tiny crack pushes electric charges onto and off each crack face in turn, at optical frequencies. The oscillating charge across the gap produces a 'plasmonic' colour for the ghostly region in between, but only when the gap is small enough. Team leader Professor Jeremy Baumberg from the University of Cambridge Cavendish Laboratory suggests we think of this like the tension building between a flirtatious couple staring into each other's eyes. As their faces get closer the tension mounts, and only a kiss discharges this energy. In the new experiments, the gap is shrunk below 1 nm (one billionth of a metre), which strongly reddens the gap colour as the charge builds up. However, because electrons can jump across the gap by quantum tunnelling, the charge can drain away when the gap is below 0.35 nm, seen as a blue-shifting of the colour. As Baumberg says, "It is as if you can kiss without quite touching lips." Matt Hawkeye, from the experimental team at Cambridge, said: "Lining up the two nano-balls of gold is like closing your eyes and touching together two needles strapped to the end of your fingers. It has taken years of practice to get good at it." Prof Javier Aizpurua, leader of the theoretical team from San Sebastián, complains: "Trying to model so many electrons oscillating inside the gold just cannot be done with existing theories." He has had to fuse classical and quantum views of the world to even predict the colour shifts seen in experiment. The new insights from this work suggest ways to measure the world down to the scale of single atoms and molecules, and strategies to make useful tiny devices.
doi:10.1038/nature11653
Biology
New tool for understanding cells in health and disease
Luyi Tian et al. Benchmarking single cell RNA-sequencing analysis pipelines using mixture control experiments, Nature Methods (2019). DOI: 10.1038/s41592-019-0425-8 Journal information: Nature Methods
http://dx.doi.org/10.1038/s41592-019-0425-8
https://phys.org/news/2019-05-tool-cells-health-disease.html
Abstract Single cell RNA-sequencing (scRNA-seq) technology has undergone rapid development in recent years, leading to an explosion in the number of tailored data analysis methods. However, the current lack of gold-standard benchmark datasets makes it difficult for researchers to systematically compare the performance of the many methods available. Here, we generated a realistic benchmark experiment that included single cells and admixtures of cells or RNA to create ‘pseudo cells’ from up to five distinct cancer cell lines. In total, 14 datasets were generated using both droplet and plate-based scRNA-seq protocols. We compared 3,913 combinations of data analysis methods for tasks ranging from normalization and imputation to clustering, trajectory analysis and data integration. Evaluation revealed pipelines suited to different types of data for different tasks. Our data and analysis provide a comprehensive framework for benchmarking most common scRNA-seq analysis steps. Main The rapid development of computational methods for single cell gene expression analysis has created a need for systematic benchmarking to understand the strengths and weaknesses of different algorithms. The performances of particular single cell RNA-seq analysis methods have been evaluated for tasks including normalization 1 , feature selection 2 , differential gene expression analysis 3 , clustering 4 , 5 and trajectory analysis 6 . These studies compare methods using either experimental data where cell type labels are available or simulated datasets. Such ground truth is, however, imperfect, with simulations relying on assumptions that may not reflect the true nature of scRNA-seq data, and, because they focus on specific tasks, these studies lack a complete picture of performance at the pipeline level. Considering the heterogeneity between scRNA-seq datasets in terms of the number of clusters (cell types or states) and the presence of various technical artifacts, we set out to design a realistic gold-standard scRNA-seq control experiment that combines ground truth with varying levels of biological complexity. Two strategies are commonly employed to create gold-standard gene expression datasets. The first, which has been widely adopted in scRNA-seq studies 7 , uses small collections of exogenous spike-in controls such as External RNA Control Consortium spike-ins (ERCCs) 8 that vary in expression in a predictable way. The second involves either the dilution of RNA from a reference sample or mixing of RNA or cells from two or more samples to induce systematic genome-wide changes. An early example from Brennecke et al. 9 involved a dilution series to explore the sensitivity of the Smart-seq protocol. Grün et al. 10 generated a benchmark dataset using single mouse embryonic stem cells together with bulk RNA extracted from the same population, diluted to single cell equivalent amounts to quantify biological and technical variability. A limitation of these experiments is their lack of biological heterogeneity, which makes them less useful for comparing analysis methods. Mixture designs, in which RNA or cells are mixed in different proportions to generate biological heterogeneity with in-built truth, have been successfully used to benchmark microarray 11 , RNA-seq 12 and scRNA-seq 13 data. 
To combine the strengths of these approaches, we designed a series of experiments using mixtures of either cells or RNA from up to five cancer cell lines, including a dilution series to simulate variations in the RNA content of different cells, as well as ERCC spike-in controls where possible. Data were generated across four single cell technology platforms. Our scRNA-seq mixology design simulates varying levels of biological noise, with sample sizes varying from around 200 cells to 4,000 cells, and has a known population structure that allows benchmarking of different analysis tools. We specifically evaluated combinations of methods for normalization and imputation, clustering, trajectory analysis and data integration. The methods chosen for each task use different algorithms and are mostly implemented in R 14 and Bioconductor 15 for convenience. Results scRNA-seq mixology provides ground truth for benchmarking Our scRNA-seq benchmarking experiment spanned two plate-based (CEL-seq2 and SORT-seq) and two droplet-based (10× Chromium and Drop-seq) protocols and involved three different experimental designs with replicates, yielding 14 datasets in total (Supplementary Table 1). The experimental design involved single cells, mixtures of single cells and mixtures of RNA from up to five human lung adenocarcinoma cell lines ( Methods , Fig. 1a,b and Supplementary Fig. 1). Fig. 1: Overview of the scRNA-seq mixology experimental design and benchmark analysis. a , b , The benchmark experimental design involving single cells ( a ) and ‘pseudo cells’ ( b ). c , PCA plots from representative datasets for each design (normalized using scran) highlight the structure present in each experiment. The percentage of variation explained by each principal component (PC) is included in the respective axis labels, and sample sizes are indicated by n . d , Workflow for benchmarking different analysis tasks using the CellBench R package. The three designs incorporate ground truth in various ways. For the single cell datasets (Fig. 1a), the ground truth is the cell line identity, which can be determined for each cell on the basis of known genetic variation ( Methods ). For the ‘pseudo cell’ mixture datasets, the known composition of cells and RNA serves as ground truth. The ‘RNA mixture’ and ‘cell mixture’ datasets (Fig. 1b and Supplementary Fig. 1) contain 7 and 34 groups, respectively, which give a continuous structure. Moreover, the RNA mixture dataset contains technical replication and a dilution series (Fig. 1b), which is ideal for benchmarking normalization and imputation methods intended to deal with such technical variability. The data characteristics and analysis tasks that each design is suited to benchmark are summarized in Supplementary Table 2. By comparing a range of quality control metrics collected across datasets using scPipe 16 , we observed that data from all platforms were of consistently high quality in terms of their exon mapping rates and the total unique molecular identifier (UMI) counts per cell (Supplementary Fig. 2). We found substantial differences in the percentage of reads mapping to intron regions in datasets generated from different protocols and experimental designs (Supplementary Fig. 2a). After normalization by scran 17 , principal component analysis (PCA) plots from four representative datasets show that our single cell and mixture designs successfully recapitulate the expected population structure (Fig. 1c;
t-SNE and UMAP visualizations are provided in Supplementary Fig. 3a). Doublet rates were observed to vary between datasets (Supplementary Fig. 3b). A summary of the benchmarking workflow is shown in Fig. 1d. After preprocessing and quality control with scPipe, each dataset was analyzed using different combinations of normalization (eight methods) and imputation (three methods). Each normalized and imputed gene expression matrix was used as input to various downstream steps, including clustering (seven methods), trajectory analysis (five methods) and data integration (six methods). The CellBench R package ( Methods ) was developed to manage the 3,913 dataset × method combinations obtained through this analysis, with performance evaluated using metrics tailored to the ground truth available. Comparisons of normalization and imputation methods We evaluated eight popular normalization methods, including techniques developed primarily for bulk RNA-seq, such as trimmed mean of M-values (TMM) 18 , count-per-million 19 and DESeq2 20 , and others tailored for scRNA-seq, including scone 1 , BASiCS 21 , SCnorm 22 , Linnorm 23 and scran 17 . Three scRNA-seq imputation methods, kNN-smoothing 24 , DrImpute 25 and SAVER 26 , were also evaluated using input data normalized by the different methods. Altogether, 438 analyses representing combinations of datasets, normalization and imputation methods were used for benchmarking. Performance was evaluated using two metrics: the silhouette width of clusters for all datasets, and the Pearson correlation coefficient of normalized gene expression within each group for the RNA mixture data ( Methods ). Although a wide range of silhouette widths was observed among different methods in different datasets, most normalization methods, apart from TMM, markedly increased silhouette width relative to the unnormalized data (Fig. 2a and Supplementary Fig. 4a,b). Methods tailored to scRNA-seq data tended to perform better than methods designed for bulk RNA-seq analysis, with the exception of DESeq2, which generated good results. Across all the methods compared, Linnorm was the top performer on average, followed by scran and scone. Fig. 2: Comparisons of normalization and imputation methods using multiple mixture datasets. a , Silhouette widths calculated using the known cell–mixture groups after different normalization methods, summarized across all datasets and normalized against the silhouette widths obtained without normalization (‘none’). b – e , Example PCA plots after normalization or with imputation by different methods using the RNAmix_CEL-seq2 dataset ( n = 340). Percentage variation explained by each principal component is included in the respective axis labels. f , Average Pearson correlation coefficients for ‘pseudo cells’ within the same groups in the RNAmix_CEL-seq2 and RNAmix_Sort-seq datasets ( n = 2) for different combinations of normalization and imputation methods. g – j , Heat map of Pearson correlation coefficients of samples in the RNAmix_CEL-seq2 dataset that have pure H2228 ( n = 45) or HCC827 ( n = 44) RNA, obtained from different imputation methods. The RNA mixture experiment included pseudo cells with varying amounts of input RNA to simulate dropout events, making it an ideal dataset for evaluating the performance of imputation methods. Example PCA plots for an RNA mixture dataset after different normalization combinations are shown in Fig. 2b–e.
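The 438 normalization × imputation analyses described above were orchestrated with CellBench; the sketch below shows roughly how such a combinatorial pipeline is expressed with its apply_methods interface, as we understand it. The wrapper functions (scran_norm, drimpute_impute and so on) are hypothetical stand-ins for the methods named above, not functions shipped with the package.

library(CellBench)

# Each wrapper takes and returns a SingleCellExperiment; names are invented.
norm_methods    <- list(none = identity, scran = scran_norm, linnorm = linnorm_norm)
impute_methods  <- list(none = identity, drimpute = drimpute_impute, saver = saver_impute)
cluster_methods <- list(SC3 = sc3_cluster, seurat = seurat_cluster)

res <- list(RNAmix_CELseq2 = rnamix_sce) %>%   # one or more benchmark datasets
  apply_methods(norm_methods) %>%
  apply_methods(impute_methods) %>%
  apply_methods(cluster_methods)
# 'res' is a tibble with one row per dataset x method combination,
# against which the ground-truth metrics can then be computed.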
In general, imputation induces higher intra-group correlation, although considerable differences are observed depending on the normalization method chosen (Fig. 2f). The kNN-smoothing and DrImpute methods show similar results with different normalization methods. SAVER shows the greatest variation in performance, producing either the best or the worst results depending on the normalization method applied to its input. We also investigated whether spurious cell states could be introduced during imputation. We chose cells from two distinct groups (pure H2228 and pure HCC827) in the RNA mixture dataset and examined the correlations among samples. Overall, the sample correlation within the same group decreases as the messenger RNA amount decreases (Fig. 2g) and increases after imputation (Fig. 2h–j). All three methods clearly separate the two pure groups; however, kNN-smoothing introduces a spurious intra-group correlation structure (Fig. 2h) that is also visible in the PCA (Fig. 2c), consistent with observations from another recent study 27 . Moreover, we found that the extra clusters were related to the input RNA amount, which implies that kNN-smoothing is sensitive to dropout events. Comparisons of clustering methods The performance of five representative clustering methods, RaceID3 28 , RCA 29 , Seurat 30 , clusterExperiment 31 and SC3 32 , was evaluated across all datasets. As there is no function to choose the optimal number of clusters in Seurat, two resolutions, 0.6 and 1.6, were used. The resolution parameter controls the number of clusters, with higher values producing more clusters. In addition to the various normalized gene expression matrices, we also evaluated Seurat using its own default pipeline (denoted Seurat_pipe), which starts from the raw gene count matrix. PCA and t-SNE plots showing clustering results from three methods across three different datasets are shown in Supplementary Fig. 5a. We measured the performance of clustering methods by calculating both the entropy of cluster accuracy (ECA) and the entropy of cluster purity (ECP) ( Methods ). We consider these two metrics together to account for both under- and over-clustering, with methods that have both low ECP and low ECA having optimal cluster assignments. Good correlation was observed between these two metrics and the adjusted Rand index (ARI) 33 (data not shown), which is commonly used to evaluate clustering performance by computing similarity to the annotated clusters. Fig. 3: Comparisons of scRNA-seq clustering methods. a – d , ECP and ECA for the top performing combinations of each method for different datasets. Colors denote different clustering methods and labels indicate the combination of normalization and imputation methods used as input to the clustering algorithms. As shown in Fig. 3a–d, which presents the two best results for each method on different datasets, no particular algorithm consistently outperformed the others across all experimental designs under default settings. In general, Seurat achieved a good balance between under-clustering and over-clustering across all datasets, and performed best when there was clear separation between cell types, as was the case in the single cell datasets (Fig. 3c,d). RaceID3 outperformed all other methods on the more complex cell mixture datasets. The accuracy of all methods was lowest on the cell mixture datasets (Fig. 3a), owing to the continuous population structure that gave less separation between clusters compared to the other experiments.
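The exact entropy metrics are defined in the paper's Methods, which are not reproduced here; the sketch below encodes one plausible reading (our interpretation, with hypothetical vectors truth and cluster): ECA averages the entropy of cluster assignments within each true group, and ECP averages the entropy of true labels within each assigned cluster, so that splitting groups inflates one metric and merging groups inflates the other.

entropy <- function(labels) {
  p <- table(labels) / length(labels)   # empirical label frequencies
  -sum(p * log2(p))                     # Shannon entropy in bits
}

# truth: known mixture group per cell; cluster: assigned cluster label
eca <- mean(tapply(cluster, truth, entropy))   # entropy of cluster accuracy
ecp <- mean(tapply(truth, cluster, entropy))   # entropy of cluster purity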
For the single cell datasets, we note that the true cluster number given by genotype information is likely to be an underestimate, with subtle sub-clusters present within each cell line, as highlighted in the t-SNE and UMAP visualizations (Supplementary Fig. 3a). Methods that over-cluster the single cell data may well capture true biological signal. In addition to showing the best results for each method, a linear model was fitted to 2,323 analysis combinations (different normalization, imputation and clustering methods applied across different datasets) using either ARI or the true number of clusters as dependent variables and the different methods as covariates (Supplementary Fig. 5b,c) to further investigate the contribution of each method to the results. Consistent with Fig. 3a,b,d, the clusterExperiment method frequently failed to recover the expected population structure and SC3 under-clustered most datasets (Fig. 3a,b). We found that kNN-smoothing was associated with lower ARI and higher cluster numbers (Supplementary Fig. 5b,c), which is consistent with our previous results (Fig. 2c,h) and serves as a further indication that this method can introduce spurious clusters. Comparisons of trajectory analysis methods Five methods, Slingshot 34 , Monocle2 35 , SLICER 36 , TSCAN 37 and DPT 38 , were evaluated using the RNA mixture and cell mixture datasets. These datasets were chosen as they both contain clear ‘pseudo trajectory’ paths from one pure cell line to another that are driven by variations in the composition of cells or RNA. For simplicity, we chose H2228 as the root state of the trajectory (Fig. 4a). We evaluated the correlation between the pseudotime generated by each method and the rank order of the path from H2228 to the other cell lines, based on the cell–RNA mixture information (Fig. 4b), to examine whether each method can position cells in the correct order. In addition, we calculated the coverage of the trajectory path (Fig. 4c), the proportion of cells assigned to the correct path, which assesses the sensitivity of the method. We used data generated from combinations of normalization and imputation methods as input to each trajectory analysis method to assess their impact on performance. In total, 683 analysis combinations of normalization, imputation and trajectory analysis methods applied to different datasets were evaluated. Fig. 4: Comparisons of scRNA-seq trajectory analysis methods. a , The trajectory path chosen for the RNA mixture dataset (top) ( n = 340) and cell mixture dataset (bottom) ( n = 169), along with visualizations of the output from Slingshot, Monocle-DDRTree and SLICER. Cells are colored by the proportion of H2228 RNA present, which was chosen as the root of the trajectory. b , Violin plot showing the Pearson correlation coefficient between the calculated pseudotime and the ground truth, for the best performing combination of each method on each dataset. Stars indicate the mean values ( n = 8 for cellmix, n = 4 for RNAmix). c , The proportion of cells that are correctly assigned to the trajectory. Stars indicate the mean values ( n = 8 for cellmix, n = 4 for RNAmix). For each method on each dataset, we selected the best results from all combinations on the basis of the performance metrics (Fig. 4b,c). In addition, a linear model was used to characterize the average performance of each method (Supplementary Fig. 6a,b).
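To show how one of these trajectory methods is run and scored against the mixture ground truth, here is a hedged Slingshot sketch; the SingleCellExperiment object sce, its group column and the rank_truth vector (the H2228-to-endpoint rank order described above) are hypothetical placeholders.

library(slingshot)

# sce: normalized SingleCellExperiment with a PCA in reducedDims and a
# colData column 'group' giving the known mixture identity (hypothetical).
sce <- slingshot(sce, clusterLabels = sce$group,
                 reducedDim = "PCA", start.clus = "H2228")

pt <- slingPseudotime(SlingshotDataSet(sce))[, 1]  # pseudotime, first lineage
ok <- !is.na(pt)                                   # cells assigned to this path
cor(pt[ok], rank_truth[ok])   # agreement with the designed ordering
mean(ok)                      # coverage: proportion of cells on the path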
Slingshot and Monocle2 showed robust results according to both metrics and generated meaningful representations of the trajectory, although Slingshot sometimes gave an extra trajectory path (Fig. 4a and Supplementary Fig. 7). In contrast, SLICER placed all cells on the correct path but was unable to order them correctly or recover the expected structure induced by the mixture designs. Despite the similar performance of Slingshot and Monocle2, their results differ in the way they position cells. Slingshot does not perform dimensionality reduction itself and presents the result as is, whereas Monocle2 uses DDRTree for dimensionality reduction by default and tends to place cells at the nodes of the tree rather than in transition between two nodes (Fig. 4a). For example, the RNA mixture dataset has seven clusters by design that are equally distributed along the path between one pure cell line and another. Monocle2 assigns most of the cells to the three terminal states, leaving only a few in between, which does not reflect the designed structure. Indeed, this behaviour might fit real situations in cell differentiation, where most cells are in defined cell states with only a small proportion in transition between different groups. However, such an assumption may not always hold, and care is therefore needed when interpreting the results. Comparisons of data integration methods State-of-the-art methods including MNNs 39 , Scanorama 40 , scMerge 41 , Seurat 42 and MINT 43 were compared using the single cell (sc_CEL-seq2, sc_10X, sc_Drop-seq and sc_10x_5cl) and RNA mixture (RNAmix_CEL-seq2 and RNAmix_Sort-seq) datasets. The five cell line CEL-seq2 datasets (sc_CELseq2_5cl_p1, sc_CELseq2_5cl_p2 and sc_CELseq2_5cl_p3) were excluded because of their high doublet rates (Supplementary Fig. 3b). The input to each data integration analysis came from a particular combination of normalization and imputation applied across the datasets to be combined. This gave rise to 469 different analyses across the two experimental designs. As expected, when the independent datasets were naively combined, clear separations related to the different single cell protocols were observed in the PCA plots, with data integration methods removing these effects to varying degrees (Fig. 5a,b and Supplementary Fig. 8c,d). Fig. 5: Comparisons of data integration methods for batch effect correction for the RNA mixture and the four single cell experiments. a , b , Examples of dimension reduction visualizations for some data integration methods. Seurat results were visualized with t-SNE and other methods’ results with PCA. c , d , Silhouette width calculated on either batch information or known sample group. Input data were based on a combination of different normalization and imputation methods, with the top performing combination indicated for each method. scMerge_s: supervised scMerge; scMerge_us: unsupervised scMerge ( n = 636 for RNA mixture and n = 5,319 for single cell). MNNs, Scanorama and scMerge generate a batch-corrected gene expression matrix that can then be analyzed using other downstream analysis tools, while Seurat (diagonal canonical correlation analysis combined with dynamic time warping) and MINT output a low-dimensional representation of the data. MINT includes an embedded gene selection procedure while projecting the data into a lower dimensional space.
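To make the integrate-then-evaluate loop concrete, below is a hedged sketch using the MNN approach; mnnCorrect originated in scran and now lives in the batchelor package, and the input matrices and group_truth vector are hypothetical placeholders, so treat this as a sketch under those assumptions rather than the authors' pipeline.

library(batchelor)   # provides mnnCorrect; attaches SingleCellExperiment
library(cluster)

# logcounts matrices (genes x cells) from two protocols (names invented)
corrected <- mnnCorrect(celseq2_mat, sortseq_mat)
mat   <- assay(corrected, "corrected")   # batch-corrected expression
batch <- corrected$batch                 # batch of origin per cell

# Silhouette widths on the first two PCs: near zero by batch after good
# integration, large and positive by the known mixture group.
pcs <- prcomp(t(mat))$x[, 1:2]
d   <- dist(pcs)
mean(silhouette(as.integer(factor(batch)), d)[, 3])        # batch silhouette
mean(silhouette(as.integer(factor(group_truth)), d)[, 3])  # group silhouette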
We assessed each method’s performance with the silhouette width according to protocol (batch) and known cell line or mixture group information (Fig. 5c,d). Because Seurat uses t-SNE for dimension reduction, in which distances are not preserved, we also used kBET 44 to quantify the remaining batch effect variation (Supplementary Fig. 8a,b). When considering the best normalization and imputation methods as input data, most methods performed similarly for the single cell design, with the exception of Seurat, which had a low kBET acceptance rate (Supplementary Fig. 8b). We observed differences in performance between methods for the RNA mixture design, which included a larger number of groups (seven) and a smaller number of cells per group than the single cell design. MNNs gave the best performance according to silhouette width, while Seurat had the highest kBET acceptance rate in this analysis (Supplementary Fig. 8a), indicating a homogeneous mix of batches consistent with the t-SNE visualization. In addition to the silhouette width and kBET acceptance rate, we also performed clustering on the RNA mixture data after integration to assess whether the expected mixture groups were retained. Most methods, except Seurat, had high ARI and gave the correct number of clusters (Supplementary Fig. 9) after appropriate normalization and imputation, indicating that they preserved the biological signal. Discussion By incorporating many different combinations of normalization and imputation in downstream analyses, we were able to assess the robustness and variability of the final outputs in light of their inputs. The performance of methods varied across different datasets, with no clear winners in all situations (Fig. 6). However, consistently satisfactory results were observed for scran, Linnorm, DrImpute and SAVER for normalization and imputation; Seurat for clustering; Monocle2 and Slingshot for trajectory analysis; and MNNs for data integration. Some normalization and imputation method combinations, such as Linnorm and SAVER, were also found to give good results in most downstream analysis tasks. Variation was also observed in the ability of methods to handle different inputs. Methods such as Linnorm and SC3 produced relatively consistent results regardless of the input dataset, while others such as SAVER were more sensitive to these inputs. By evaluating the results from method combinations across different tasks, we observed a number of interesting trends related to the suitability of different preprocessing methods for downstream analysis. For example, we found that although imputation generally improved the results of clustering and trajectory analyses, it can lead to poor mixing of data from different batches in data integration analyses (Supplementary Fig. 6d). Fig. 6: Summary of results from methods comparisons using scRNA-mixology datasets. Methods are ranked by the best performance in each category. The average performance and variability were calculated based on the results from different input datasets, processed by different methods, that were then analyzed using the stated downstream method. For normalization and imputation, the impact each method has on downstream analysis is also shown. Results have been scaled and standardized to have the same color scale. The various ensemble methods, which combine results from multiple algorithms in a bid to improve performance, did not always outperform individual methods.
For instance, the ensemble methods SC3 and clusterExperiment did not outperform other clustering methods, and scone normalization also gave mixed results on different datasets. Our comparison is subject to a number of limitations, such as the linear mixture settings, which may not be a realistic model for developmental trajectories where regulatory gene expression may be non-linear and non-systemic. Also, methods are mostly compared under default settings, which may not give optimal performance across all datasets. The number of methods for each task can be easily expanded using our CellBench software for a more in-depth analysis of specific tasks or to explore the effect different choices of starting parameters have on the results. Our benchmarking platform will benefit future package developers as it allows new methods to be evaluated on the same standards, avoiding ambiguity caused by cherry-picking evaluation datasets. We hope that this study will reinvigorate interest in the important area of benchmark data generation and analysis, providing new insights into current best practice and guiding the development of better scRNA-seq algorithms in the future to ensure the biological insights derived from single cell technology stand the test of time. Methods Study design Five human lung adenocarcinoma cell lines (HCC827, H1975, A549, H838 and H2228) were cultured separately and the same batch was processed in three different ways (Fig. 1a,b ). First, single cells from each cell line were mixed in equal proportions, with libraries generated using three different protocols: CEL-seq2, Drop-seq 45 with Dolomite equipment and 10× Chromium 46 . Second, RNA was extracted in bulk from three cell lines (HCC827, H1975 and H2228), mixed in seven different proportions and diluted to single cell equivalent amounts (Supplementary Fig. 1a ). In total, there are eight mixtures in the plate layout with 49 replicates of each mixture. The mix1 and mix2 samples have the same proportions of the three cell lines (one-third/one-third/one-third) but were prepared separately to assess the variation introduced during the RNA dilution and mixture step. In addition to the RNA mixtures, we also designed a dilution series in the same plate to create variations in the amount of RNA added. The amounts ranged from 3.75 to 30 pg (Supplementary Fig. 1a ) and were intended to simulate differences in cell size. In total, each mixture had four different RNA starting amounts with replicates per mixture of 6, 14, 14 and 14 for the 3.75, 7.5, 15 and 30 pg dilutions, respectively. Third, single cells from three cell lines (HCC827, H1975 and H2228) were sorted into 384-well plates, with an equal number of cells per well in different combinations. For most of the wells, we sorted nine cells in total, with different combinations of three cell lines distributed in ‘pseudo trajectory’ paths (Supplementary Fig. 1b ), where the major trajectory is similar to the RNA mixture design while the minor trajectory is the combination that only contains cells from two cell lines instead of three, which is similar in design to our previous study 47 . For the major trajectory, we also included a population control for each combination, which includes 90 cells in total (that is, a large sample) instead of nine. Apart from the trajectory design, we also varied the cell numbers and qualities to study the data characteristics in these configurations. 
We included nine replicates with three cells in total (one cell from each cell line) to simulate ‘small cells’. The nine-cell wells were subsampled after pooling to obtain single cell equivalents of RNA, with three replicates at one ninth and one at one third of the pooled volume. We applied different clean-up ratios to the three replicates after library generation to induce batch effects of a purely technical nature and study how clean-up affects the data. Cell culture and mRNA extraction The cell lines were retrieved from the ATCC and cultured in Roswell Park Memorial Institute (RPMI) 1640 medium with 10% fetal calf serum and 1% penicillin-streptomycin. Three cell lines (H2228, H1975 and HCC827) were cultured for the cell mixture, RNA mixture and single cell experiments and, later, five cell lines (H2228, H1975, HCC827, A549 and H838) were grown for another single cell experiment. The cells were grown independently at 37 °C with 5% carbon dioxide until near 100% confluence. For the three cell lines (HCC827, H1975 and H2228), cells were dissociated into single cell suspensions in FACS buffer and sorted for the cell mixture and single cell experiments (see below for the sorting strategy). The remaining cells were centrifuged and frozen at −80 °C, and RNA was subsequently extracted using a Qiagen RNA miniprep kit. The amount of RNA was quantified using both Invitrogen Qubit fluorometric quantitation and an Agilent 4200 bioanalyzer to obtain an accurate estimate. The extracted RNA was diluted to 60 ng μl⁻¹ and mixed in different proportions according to the study design. The different mixtures were further diluted to create an RNA series that ranged from 3.75 to 30 pg, each of which was dispensed into CEL-seq2 and SORT-seq primer plates using a Nanodrop II dispenser. Prepared RNA mixture plates were sealed and immediately frozen upside down at −80 °C. Cell sorting and single cell RNA-sequencing For CEL-seq2, single cells were flow sorted into chilled 384-well PCR plates containing 1.2 μl of primer and ERCC mix A using a BD FACSAria III flow cytometer. Sorted plates were sealed and immediately frozen upside down at −80 °C. These plates, together with the RNA mixture plates, were taken from −80 °C and processed using an adapted CEL-seq2 48 protocol with the following variations. The second strand synthesis was performed using the NEBNext Second Strand Synthesis module in a final reaction volume of 8 μl, and NucleoMag NGS Clean-up and Size Select magnetic beads were used for all DNA purification and size selection steps. For the nine-cell mixture plates, clean-up of the PCR product was performed twice with a 0.7–0.9 bead/DNA ratio. For the single cell and RNA mixture plates, two different clean-up ratios for the PCR product were used (0.8 followed by 0.9). The choice of clean-up ratio was optimized from the quality control results of the nine-cell mixture data and the SORT-seq protocols. For the five-cell-line single cell mixture experiment, the protocol was further optimized by pooling the sample after first strand complementary DNA synthesis. The cell mixture plates were sorted with most wells containing nine cells in total in different combinations, and were processed using our adapted CEL-seq2 protocol described above with variations in the pooling step. After the second strand synthesis, materials from the nine-cell mixtures and 90-cell population controls were pooled separately into different tubes and the volumes were measured.
Then, for the nine-cell mixture sample, 3 × 1/9 and 1 × 1/3 of the total volume of pooled material were subsampled, and these four samples were processed separately in the following steps. At the PCR product clean-up stage, the clean-up ratios for the 3 × 1/9 samples were 0.7, 0.8 and 0.9, respectively, and 0.7 for the one-third nine-cell mixture sample and the 90-cell population controls. The SORT-seq 49 protocol is similar to CEL-seq2 but uses oil to prevent evaporation. This allows reductions in the reaction volume that can be dispensed using the Nanodrop II liquid handling platform (GC Biotech). In summary, 2.0 μl vapor-lock oil was added to each well of the plate, followed by 0.1 μl of primer and ERCC mix. The reaction volumes for the room temperature and first strand synthesis steps were 0.075 and 0.568 μl, respectively. The composition of the various mixes was the same as for CEL-seq2. Sample pooling was achieved by centrifuging the plates upside down into a container covered with Parafilm and carefully separating the oil from the other materials. The PCR clean-up ratio used for SORT-seq was 0.8 followed by 0.9. We experienced notable sample loss during sample pooling, such that only ∼ 60% of the total volume was recovered, which is lower than for the CEL-seq2 protocol (90%). For the 10× and Drop-seq protocols, cells were stained with propidium iodide and 120,000 live cells were sorted for each cell line by FACS to acquire an accurate equal mixture of live cells from the three cell lines. This mixture was split equally into three parts: one part was processed on the Chromium 10× single cell platform using the manufacturer’s (10x Genomics) protocol; the second part was processed by Dolomite Drop-seq with standard Drop-seq protocols 50 ; and the third part was sorted into a 384-well plate and processed using the standard CEL-seq2 protocol, with a PCR clean-up ratio of 0.8 followed by 0.9. For the five-cell-line experiment, cells were counted using chamber slides and roughly 2 million cells from each cell line were mixed and processed by 10x. Three CEL-seq2 plates were sorted from the same sample. All samples, including Drop-seq, 10x and CEL/SORT-seq, were sequenced on an Illumina NextSeq 500. Data preprocessing and quality control scPipe (v.1.3.0) was used for data preprocessing and quality control to generate a UMI-deduplicated gene count matrix per dataset. All data were aligned using Rsubread (v.1.28.1) 51 to the GRCh38 human genome and its associated annotations, with ERCC spike-in sequences added as extra chromosomes. For 10x, we processed the 5,000 most enriched cell barcodes, with comp = 3 used in the function scPipe::detect_outliers for quality control to remove poor quality cells. For CEL-seq2 and SORT-seq, we used the known cell barcode sequences for cell barcode demultiplexing, and comp = 2 was used in the function scPipe::detect_outliers for quality control. The background contamination was high for Drop-seq, so we first ran scPipe::detect_outliers with comp = 3 to remove outlier cells and ran it again with comp = 2 to remove background noise caused by droplets without beads. The quality control metrics, including intron reads for each cell, were generated during cell barcode demultiplexing by the function scPipe::calculate_QC_metrics. Intron reads are defined as any reads that map to the gene body but do not overlap an annotated exon. The PCA and t-SNE results were generated using runPCA and runTSNE in the scater (v.1.8.2) package with default parameters and perplexity set to 30.
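A minimal sketch of this quality-control and visualization step, assuming a SingleCellExperiment object `sce` has already been built from the scPipe output (the object name and the `cell_line` annotation are illustrative, not from the study's scripts):

```r
library(scPipe)
library(scater)

# Per-cell quality control metrics (including intron reads) computed from
# the demultiplexing statistics stored in the SingleCellExperiment object.
sce <- calculate_QC_metrics(sce)

# Flag poor-quality cells by fitting a mixture model to the QC metrics;
# comp is the number of mixture components (2 for plate-based data,
# 3 for 10x in this study), then drop the flagged cells.
sce <- detect_outliers(sce, comp = 2)
sce <- remove_outliers(sce)

# Dimensionality reduction for visualization, as described above.
sce <- runPCA(sce)
sce <- runTSNE(sce, perplexity = 30)
plotTSNE(sce, colour_by = "cell_line")  # "cell_line" is a hypothetical annotation
```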
The UMAP results 52 were calculated using the umap function from the umap (v.0.2.0) R package 53 . Data normalization and imputation The raw counts were used as input to each normalization algorithm and all methods were blind to the mixture groups. For a fair comparison, the normalized counts from algorithms such as BASiCS (v.1.4.0) and SCnorm (v.1.4.2) that do not generate values on a log-scale were log2 transformed after an offset of 1 was added to the counts. The raw counts were also log2 transformed before calculating the Pearson correlations and silhouette widths. We used edgeR (v.3.24.2) to calculate counts-per-million and TMM normalized values. The BASiCS method requires spike-in genes, so we did not apply it to our datasets generated by 10× or Drop-seq, which both lacked ERCC spike-ins. For scone (v.1.6.0), we set the maximum number of unwanted variation components to zero for the removal of unwanted variation method and ignored quality control metrics. Data analysis using scran (v.1.8.2), Linnorm (v.2.6.0), DrImpute (v.1.0), DESeq2 (v.1.20.0) and SCnorm (v.1.4.2) was performed with default settings. For the RNA mixture data, kNN-smoothing (v.2.0) was run with k = 16. The size.factor parameter was set to 1 in SAVER (v.1.1.1) to override its internal normalization procedure. BASiCS (v.1.4.0) was run with 5,000 Markov chain Monte Carlo iterations, 500 warm up iterations and a thinning parameter of 10. The Pearson correlation coefficient was calculated using gene expression after normalization or imputation for samples with the same RNA mixture proportion, as these samples are replicates and any differences in gene expression should be attributable to variation in RNA amount and technical noise. We performed PCA using normalized counts and calculated the silhouette width on the first two principal components to assess whether normalization was able to preserve the known structure. For any clustering of n samples (here a cluster refers to a particular mixture or a cell line), the silhouette width of sample i is defined as $$\mathrm{sil}(i) \equiv \frac{b(i) - a(i)}{\max\left(a(i), b(i)\right)} \in [-1, 1]$$ where a(i) denotes the average distance (Euclidean distance over the first two principal components of the expression measures) between the ith sample and all other samples in the cluster to which i belongs, and b(i) is the minimum, over all clusters C to which i does not belong, of the average distance from i to that cluster: $$b(i) = \min_{C} d(i, C)$$ where d(i, C) denotes the average distance of i to all observations in cluster C. Methods with better performance have higher silhouette widths. The silhouette function from the cluster (v.2.0.7) package 54 was used to calculate the silhouette width. Clustering Our comparison of clustering methods used all mixture datasets except cellmix5, which is the population control data. To obtain the ground truth for the single cell datasets (sc_CEL-seq2, sc_10x, sc_Drop-seq, sc_10x_5cl, sc_CEL-seq2_5cl_p1, sc_CEL-seq2_5cl_p2 and sc_CEL-seq2_5cl_p3) we used Demuxlet 55 , which exploits the genetic differences between the cell lines to determine the most probable identity of each cell. The predicted cell identities in each dataset corresponded largely to the clusters seen when visualizing the data. Five methods were compared: clusterExperiment (v.2.2.0), RaceID3 (v.0.1.3), RCA (v.1.0), SC3 (v.1.10.0) and Seurat (v.2.3.4). Each method was used as specified by the authors in its accompanying documentation.
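To make the silhouette-based evaluation described above concrete, a minimal sketch (the object names are hypothetical; `norm_counts` is a genes-by-samples matrix of normalized log-scale values and `group` holds the known mixture or cell line labels):

```r
library(cluster)

# PCA on the normalized expression matrix; samples are columns.
pcs <- prcomp(t(norm_counts))$x[, 1:2]

# Pearson correlations between samples; replicates of the same RNA mixture
# should be highly correlated after good normalization.
rep_cor <- cor(norm_counts, method = "pearson")

# Silhouette widths on the first two principal components, using the known
# groups as the clustering.
sil <- silhouette(as.integer(factor(group)), dist(pcs))
mean(sil[, "sil_width"])  # higher values indicate better-preserved structure
```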
For each dataset, the inputs for each method were normalized and imputed by different methods and the top 1,000 highly variable genes were selected using the trendVar and decomposeVar functions in scran. The same gene selection method was also applied in other downstream analyses such as trajectory analysis and data integration. The Seurat package has its own data preprocessing pipeline that takes raw UMI counts as input and includes normalization and gene selection (referred to as Seurat_pipe in the results). Most methods besides Seurat have functions to help choose the optimal number of clusters; for Seurat, two resolutions, 1.6 (Seurat_1.6) and 0.6 (Seurat_0.6), were therefore applied to obtain more or fewer clusters, respectively. To compare the performance of the clustering methods, we looked at two measures: ECA (H_accuracy) and ECP (H_purity). With M and N representing the cluster assignments generated by the clustering methods and the annotations (ground truth), respectively, we define these measures as follows: $$H_{\mathrm{accuracy}} = -\frac{\sum_{i = 1}^{M} \sum_{j = 1}^{N_i} p(x_j)\log\left(p(x_j)\right)}{M}$$ $$H_{\mathrm{purity}} = -\frac{\sum_{i = 1}^{N} \sum_{j = 1}^{M_i} p(x_j)\log\left(p(x_j)\right)}{N}$$ where x_j are the cells in the jth true cluster and p(x_j) is the proportion of these cells relative to the total number of cells in the ith generated cluster. For H_accuracy, M denotes the number of clusters generated by a given method and N_i is the number of true clusters represented in the ith generated cluster. Similarly, for H_purity, N denotes the number of true clusters while M_i is the number of method-assigned clusters represented in the ith true cluster. The ECA measures the diversity of the true group labels within each cluster assigned by the clustering algorithm. A low value indicates that the cells in a cluster identified by a given method are homogeneous and from the same group. However, H_accuracy does not account for over-clustering, and in an extreme case, a method that assigns a unique cluster to each cell will have an H_accuracy of 0. In contrast, the ECP measures the diversity of the calculated cluster labels within each of the true groups and offers no control of under-clustering. A method that assigns all cells to one cluster will have an H_purity value of 0, which is indicative of high cluster purity despite severe under-clustering. Trajectory analysis The comparison of trajectory analysis methods used all nine-cell mixture datasets (cellmix1 to cellmix4) and the RNA mixture datasets generated by the CEL-seq2 and SORT-seq protocols. For each dataset, the gene count matrix was normalized using different normalization and imputation methods. The top 1,000 most highly variable genes were selected using the trendVar and decomposeVar functions in scran. Five methods were compared using these data: Slingshot (v.1.0.0), Monocle2 (v.2.8.0), SLICER (v.0.2.0), TSCAN (v.1.20.0) and DPT (v.0.6.0). Slingshot requires a dimensionality reduced matrix and a cluster assignment as input. Similar to the approach described by the authors, we used PCA (scater::runPCA) for dimensionality reduction and fitted a Gaussian mixture model to the first two principal components to obtain the cluster assignments, using the Mclust function from the mclust (v.5.4.2) R package 56 . Next, the first two principal components and the clustering results were used as input for Slingshot.
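As a concrete illustration of the ECA and ECP measures defined above, a minimal sketch in base R (the vectors are hypothetical: `truth` holds the Demuxlet-derived labels and `assigned` the labels produced by a clustering method):

```r
# Shannon entropy of a label vector.
entropy <- function(labels) {
  p <- table(labels) / length(labels)
  -sum(p * log(p))
}

# ECA (H_accuracy): average entropy of the true labels within each
# method-assigned cluster; 0 means every cluster is homogeneous.
eca <- mean(tapply(truth, assigned, entropy))

# ECP (H_purity): average entropy of the assigned labels within each true
# group; 0 means no true group is split across clusters.
ecp <- mean(tapply(assigned, truth, entropy))
```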
DDR-Tree, a scalable reversed graph embedding algorithm, was used in Monocle2 for dimensionality reduction and tree construction. SLICER applies locally linear inference to extract features and reduce dimensions. To simplify comparison, the pure H2228 samples were selected as the root cells or root state when generating the trajectory and computing pseudotime. Then, for the branching structure generated by each method, we searched for the best match to the two branches (H2228 to H1975 and H2228 to HCC827) and calculated the proportion of overlapping cells between the real path and the branch obtained by each method. Data integration The main characteristics of the data integration methods applied are described in Supplementary Table 3 . These analyses made use of the R packages scran (v.1.8.2) for MNNs, Seurat (v.2.3.4) for diagonal canonical correlation analysis, scMerge (v.0.1.14), mixOmics (v.6.6.1) for MINT and the Scanorama Python library (v.1.0). The input data for each analysis were the normalized and imputed results from the different methods for each dataset. The scMerge method was run in both unsupervised (scMerge_us) and supervised (scMerge_s) modes, using cell line identity or RNA mixtures as groups. We calculated the silhouette width coefficient to compare how well the different methods combined data from different protocols. In the single cell and RNA mixture datasets, the clusters are defined on the basis of either the known batch or protocol information or the cell–mixture groups. Silhouette coefficients were calculated on the first two principal components from PCA for each method that outputs a data matrix (MNNs, scMerge and Scanorama), or on the first two components from MINT. A high value for the batch silhouette width indicates that a strong protocol effect remains, while a high value for the biological group silhouette width indicates that the biological signal is retained after data integration. The kBET acceptance rate was calculated using kBET (v.0.99.5) with default parameters, with a high rate indicating homogeneous mixing of samples from different batches. The Seurat package (v.2.3.4) with default parameters was used to perform clustering on the dimensionality reduced RNA mixture datasets after integration. The clustering results were evaluated using the ARI and the estimated number of clusters, with results shown in Supplementary Fig. 9 . Pipeline analysis with CellBench CellBench (v.0.99.4) contains functions and data structures that simplify the testing of combinations of analysis methods without duplicating code. In addition to a benchmarking framework, CellBench also provides functions to access preprocessed data objects for the samples described in this study. To use CellBench, methods are wrapped in functions that ensure methods performing the same task accept a common input format and produce a common output format. Lists of wrappers for one pipeline stage are passed to the apply_methods function along with the data. Successive application of the apply_methods function with lists of wrappers for downstream pipeline steps automatically generates combinations between the methods and all upstream combinations. CellBench allows individual combinations to fail without affecting the execution of other pipelines, and the function time_methods can be used to measure running times. The scripts containing the wrappers and code used for this analysis can be found at .
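A minimal sketch of the CellBench pipeline pattern just described (the dataset objects and the two toy normalization wrappers are illustrative placeholders, not the wrappers used in the study):

```r
library(CellBench)

# Named list of input datasets (hypothetical SingleCellExperiment objects).
datasets <- list(
  rnamix_celseq2 = sce_rnamix_celseq2,
  rnamix_sortseq = sce_rnamix_sortseq
)

# One pipeline stage: a named list of wrapper functions sharing a common
# input and output format.
norm_method <- list(
  none   = function(sce) sce,
  logcpm = function(sce) {  # toy log-CPM wrapper
    counts <- SingleCellExperiment::counts(sce)
    cpm <- t(t(counts) / colSums(counts)) * 1e6
    SummarizedExperiment::assay(sce, "logcounts") <- log2(cpm + 1)
    sce
  }
)

# apply_methods() runs every dataset-by-method combination; chaining further
# stages expands the full grid, and a failing combination does not stop the rest.
res <- apply_methods(datasets, norm_method)
```

As noted above, the time_methods function can be used in the same way to record running times for each combination.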
Performance summary The results from all analysis combinations, including the performance scores, are listed in Supplementary Table 4 and are also available from GitHub. Results were plotted using the ggplot2 (v.3.1.0) 57 and pheatmap (v.1.0.10) 58 R packages. Clustering of the Pearson correlation coefficients in Fig. 2 was performed using the hclust function in base R. To summarize the results of each analysis, we fitted a linear model using the lm base R function with the performance score as the dependent variable and the type of method and experimental design as binary covariates. The coefficient of each method indicates the degree to which the method is positively or negatively associated with performance. This analysis is summarized in Supplementary Figs. 5b,c and 6 . Figure 6 summarizes the performance across all evaluated methods. For each task, we considered a specific metric: clustering performance was assessed with the ARI coefficient; trajectory analysis with the correlation between pseudotime and ground truth; and integration across protocols, normalization and imputation with the silhouette width coefficient and kBET results. The best performance is defined as the average of the best two results for each design, and the average performance is calculated across all results. Variability refers to the variation of all results. For normalization and imputation methods, the coefficients of the linear model were scaled and shown in the heat map to summarize performance and indicate which methods yield better results in the downstream analyses compared with others. The running time for each combination is given in Supplementary Table 5 . For each method, a linear model was fitted (using lm) that regressed the running time against the number of cells on the log-scale across the different datasets. The cell number coefficient indicates the scalability of the method. The scalability is classified as poor if the coefficient is larger than 2, good if the coefficient is smaller than 1 and fair if between 1 and 2. The running time for a method was regarded as poor if it was longer than 30 min, good if shorter than 5 min and fair if in between. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Raw data are available under GEO SuperSeries GSE118767 . A summary of the individual accession numbers is given in Supplementary Table 1 . The processed SingleCellExperiment R objects are available from . Code availability Code used to perform the comparative analyses and generate the figures is available from . The CellBench R package was developed for benchmarking single cell analysis methods and is available from GitHub and Bioconductor.
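A minimal sketch of the scalability classification described in the performance summary, assuming a log-log fit and a hypothetical data frame `timings` with columns `method`, `n_cells` and `seconds` (the method name used for subsetting is illustrative):

```r
# Regress log running time on log cell number for one method; the slope
# estimates how running time scales with dataset size.
fit <- lm(log(seconds) ~ log(n_cells),
          data = subset(timings, method == "scran"))
slope <- coef(fit)[["log(n_cells)"]]

# Classification thresholds as described in the text.
scalability <- if (slope > 2) "poor" else if (slope < 1) "good" else "fair"
```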
A first-of-its-kind data analysis platform is enabling researchers to select the best tool for interpreting the overwhelming amounts of data generated by single-cell research. Accurately making sense of these datasets will help to explain the vital and varied roles cells play in health and disease. The platform, CellBench, was published today in Nature Methods. The freely accessible platform, which includes software and several gold-standard datasets, compares the performance of thousands of different single-cell analysis options, enabling researchers to identify the best method for the questions they wish to answer. The project was led by Associate Professor Matthew Ritchie and Mr Luyi Tian from the Walter and Eliza Hall Institute of Medical Research, along with Dr. Kim-Anh Lê Cao and Mr Al J Abadi at the University of Melbourne. Making sense of mass data Much is still unknown about the trillions of cells in the human body. In the quest to better understand cells and the role they play in health and disease, a technique called single-cell sequencing has become a hot research field. Over the past five years there has been an explosion of new analysis tools for interpreting single-cell data. This has left researchers with hundreds of options to choose between in the challenging task of interpreting large and complex biological datasets. Associate Professor Ritchie said the ability to identify and define each cell and its activity was invaluable for preventing and treating disease. "The challenge is that a colossal amount of complex biological data is generated from single-cell studies. Our platform offers a solution to this by helping researchers accurately and efficiently make sense of the overwhelming amounts of information from their studies," he said. Ensuring gold standard analysis Dr. Kim-Anh Lê Cao said choosing the right data analysis tool is crucial for avoiding misleading results or the incorrect biological interpretation of data. "One of the biggest challenges we face in this area of research is our ability to compare the efficiency of all analysis methods currently available. This can only be done if we have good data like CellBench to do this benchmarking. "CellBench is already enabling researchers to choose the right analysis method and generate meaningful and accurate conclusions from their data," she said. Associate Professor Ritchie said it was clear there was real demand for a single-cell analysis benchmarking tool because the team's study had already been downloaded more than 3,000 times on bioRxiv, a preprint server for biology, and their data had already featured in five other research studies. "We're really pleased to see the rapid uptake and use of the data from our CellBench project," he said. The researchers hope the platform will encourage more rigorous testing, leading to better quality data analysis methods being developed in the future. Ultimately, their hope is that the platform will assist researchers in making new discoveries and developing more effective therapies for the major health challenges of our time.
10.1038/s41592-019-0425-8
Medicine
Young children with autism benefit regardless of high-quality treatment model
"Comparative Efficacy of LEAP, TEACCH and Non-Model-Specific Special Education Programs for Preschoolers with Autism Spectrum Disorders," Journal of Autism and Developmental Disorders, published online June 2013. link.springer.com/article/10.1 … 07/s10803-013-1877-9 Journal information: Journal of Autism and Developmental Disorders
http://link.springer.com/article/10.1007/s10803-013-1877-9
https://medicalxpress.com/news/2013-07-young-children-autism-benefit-high-quality.html
Abstract LEAP and TEACCH represent two comprehensive treatment models (CTMs) that have been widely used across several decades to educate young children with autism spectrum disorders. The purpose of this quasi-experimental study was to compare high fidelity LEAP (n = 22) and TEACCH (n = 25) classrooms to each other and a control condition (n = 28), in which teachers in high quality special education programs used non-model-specific practices. A total of 198 children were included in data analysis. Across conditions, children’s performances improved over time. This study raises issues of the replication of effects for CTMs, and whether having access to a high quality special education program is as beneficial as access to a specific CTM. Introduction Providing children with autism spectrum disorders (ASD) access to high quality, early intervention results in improved developmental performance (Boyd et al. 2010; Dawson et al. 2010; Kasari et al. 2006; Landa et al. 2011; Stahmer and Ingersoll 2004). However, debate persists over which treatment approach(es) to use to attain that ultimate goal. Presently, there are two overarching categories of intervention approaches from which practitioners or families can select treatments: focused interventions or comprehensive treatment models (CTMs). Focused interventions refer to treatments that are typically shorter in duration and target discrete skills (e.g., communication or challenging behavior). CTMs employ focused intervention practices that are organized around a central theoretical or conceptual framework, are typically used with children for a longer period of time, and target multiple developmental domains (Odom et al. 2010a). Most of the empirical support for autism-specific, evidence-based practices comes from the focused intervention literature (Odom et al. 2003, 2010b). For instance, Odom et al. (2010a) identified 30 CTMs for individuals with ASD, and one of the resounding and surprising findings of their literature review was the lack of empirical evidence for the majority of CTMs. The most notable exception is the research related to the Lovaas Young Autism Project, with multiple studies (Eikeseth et al. 2002; Lovaas 1987; Smith et al. 2000) as well as meta-analyses (Eldevik et al. 2009; Reichow and Wolery 2009) demonstrating evidence for this CTM. However, the research on this model has been conducted primarily in clinic- or home-based settings. Like most children, those with ASD spend a great deal of their time in schools; therefore, it is of critical importance to establish the efficacy of school-based CTMs. Increasingly, leaders in the field are calling for comparative efficacy studies to determine the relative effects of treatments when compared to each other (Dingfelder and Mandell 2011; Sox and Greenfield 2009). Two CTMs that have a long history in the field, are used frequently, and have different conceptual frameworks are LEAP (Learning Experiences and Alternative Program for Preschoolers and their Parents) and the TEACCH Autism Program. The purpose of this study was to examine the relative effects of the LEAP and TEACCH school-based CTMs when compared to each other and a control condition consisting of non-model-specific (NMS) special education programs. TEACCH and LEAP are two long-standing CTMs, their origins for individuals with ASD being traced to the 1970s and 1980s, respectively.
Yet, the philosophical tenets as well as the actual practices underlying these CTMs represent quite divergent approaches to educating children with ASD. TEACCH bases its conceptual orientation in cognitive-social learning theory and subscribes to a “culture of autism,” in which accommodations such as visual schedules and work systems (Hume and Odom 2007; Hume et al. 2012) are made to the environment rather than the individual to promote the child’s engagement and learning (Mesibov and Shea 2010; Mesibov et al. 2005). While it is not a stated principle of TEACCH, in the context of schools this has often manifested itself in children with ASD being educated together in classrooms that are separate from their typically developing peers. In contrast, LEAP bases its treatment approach on a blend of applied behavior analysis (ABA) and common tenets of early childhood education (ECE) (Strain and Hoyson 2000; Strain et al. 1985), with the goal of reducing children’s characteristics of autism that interfere with their learning opportunities. The LEAP model uses an inclusive education approach whereby children with ASD are taught alongside typically developing, same-aged peer confederates who become agents of social instruction and intervention. Thus, LEAP and TEACCH provide “naturally occurring” contrasts, with the former model using an inclusive education approach and the latter primarily implemented in classroom settings emphasizing more structured, adult-led learning opportunities. Yet the evidence base for LEAP and TEACCH, at least until recently (see Strain and Bovey 2011, for research on LEAP), has been relatively sparse, and no existing studies have directly compared the two approaches. For TEACCH, some evidence documents positive effects of this CTM when implemented in home (Ozonoff and Cathcart 1998), residential (Panerai et al. 1998, 2002), and school settings with older students (Panerai et al. 2009). However, we are aware of only one study in which TEACCH was evaluated as a school-based CTM for preschool-aged children with ASD (Tsang et al. 2007). In this quasi-experimental study, which was conducted in China, Tsang et al. assigned a total of 34 preschool-aged children to a TEACCH (N = 18) or a services-as-usual (N = 16) control condition. Tsang and colleagues found some improvements in favor of the experimental group in certain developmental skills (e.g., fine and gross motor) but no group differences in the areas of socialization or communication. Obviously, issues of culture as well as differences in the education system limit any generalization of the study’s findings to children with ASD served in the US public school system. In contrast, a recent large-scale efficacy study of the LEAP model demonstrated positive effects in the domains of socialization, cognition, language and challenging behavior for preschool-aged children with ASD (N = 177 LEAP, N = 117 control). This study used a cluster randomized trial design with a total of 56 inclusive classrooms randomized to either the treatment condition, in which teachers received 2 years of training and ongoing consultation in LEAP implementation, or a control condition, in which teachers only received access to the LEAP treatment manual and training presentation materials.
This first large-scale study of LEAP demonstrated that the program is efficacious as well as socially valid when fully implemented; however, a question remains as to whether it is more effective than other school-based CTMs designed for children with ASD. Comparative efficacy studies allow us to address these important causal-comparative or effectiveness research questions. Randomized clinical trials (RCTs) represent the gold standard in medical and education-related fields for conducting efficacy research. However, there are circumstances under which RCTs are not viable options and may even be reactive with certain groups (Shadish et al. 2002). For instance, Strain and Bovey (2011) demonstrated in their study that teachers must implement LEAP for at least 2 years to find the most robust treatment effects. This would require teachers in any comparison condition to have a similar amount of training time, to rule out the sheer amount of time teachers and children are exposed to the LEAP model (i.e., dosage) as the primary explanatory variable for any observed treatment differences. In addition, research has demonstrated that when a philosophical mismatch exists between teachers’ beliefs and the underlying tenets of a model, teachers are more inclined to experience burnout (Jennett et al. 2003). This could certainly occur in a study that randomized teachers to LEAP or TEACCH conditions, given that one approach espouses an inclusive ABA/ECE approach for children with ASD and the other involves structured learning approaches that often occur in self-contained classroom settings. Therefore, the current study used a rigorous quasi-experimental design to compare the LEAP and TEACCH treatments because of these identified issues, as well as the fact that both approaches have already been widely disseminated in public school systems throughout the US. Specifically, a quasi-experimental study design was used to compare high fidelity and quality LEAP and TEACCH classrooms to each other as well as to a control condition consisting of high quality special education classrooms in which teachers did not use practices that aligned with any particular theoretical or conceptual model. The following research questions were addressed in this study: 1. What are the effects of LEAP, TEACCH and NMS control classroom experiences on the developmental and behavioral performance of preschool-aged children with ASD? 2. What family or child factors moderate intervention effects for children in LEAP, TEACCH or NMS control classrooms? Methods This multi-site study was conducted in public school districts in the following states: North Carolina, Colorado, Florida and Minnesota. Research personnel from the Universities of North Carolina-Chapel Hill and Miami were involved in the collection of child data for 3 years of the project, whereas the Universities of Colorado-Denver and Minnesota were involved for two project years, primarily because of feasibility issues (e.g., the number of classrooms that could be recruited within a given site). The Institutional Review Boards at each of the respective study sites approved the research. Sample Classroom teachers and children in this study comprised one of three mutually exclusive groups: LEAP, TEACCH or NMS classrooms. Classrooms/Teachers A total of 25 TEACCH, 22 LEAP and 27 NMS teachers were enrolled in the study. Teachers were enrolled in the study for the duration of one school year and could not participate again in subsequent years.
Relevant demographic information on schools, classrooms and teachers can be found in Table 1. Table 1: School and teacher demographics by treatment model. Inclusion/Exclusion Criteria Teachers/classrooms were enrolled in the study if they met the following inclusion criteria: (1) all classrooms had to operate within the public school system; (2) teachers had to be certified to teach in their respective state; (3) TEACCH and LEAP teachers must have attended a formal training, either conducted by personnel directly affiliated with those programs or conducted by others who had been formally trained; (4) teachers must have been teaching in their respective classroom type for at least 2 years prior to study enrollment; and (5) teachers must have met predetermined criteria on classroom fidelity and/or quality rating scales. Specifically, all classrooms had to meet an “average” rating (score of 3 out of 5) on four subscales of a validated classroom quality measure, the PDA Program Assessment (Professional Development in Autism Center 2008), during an initial classroom visit. In addition, TEACCH and LEAP classrooms had to meet above average ratings (3.5 out of 5) on model-specific subscales and items on their respective fidelity of implementation measures. If the required average scores were not met on the initial visit, research staff conducted one additional classroom visit using the same criteria described above to make a final determination on study eligibility. Classrooms that did not meet study requirements were excluded from the study. See Fig. 1 for a CONSORT diagram describing the enrollment process. Fig. 1: CONSORT diagram. Booster Training All eligible TEACCH and LEAP teachers subsequently participated in a booster training that occurred prior to their start in the study. The booster training was a 12-h training conducted across 2 days, and the content was tailored to address any domains on the fidelity measure that received average or below average scores during the initial screening(s) (e.g., communication, behavior management). Trainers who were approved/certified by the model developers conducted this training. Booster training was not offered to NMS teachers because the focus of the study was the active TEACCH and LEAP treatment models. Children/Families Based on participant enrollment data, a total of 205 children were initially enrolled into the study. However, seven children were excluded from data analysis because they did not meet the study diagnostic criteria outlined below. Thus, 198 children (N = 85 TEACCH, 54 LEAP, and 59 NMS) were initially included in data analysis. Demographic information on children/families can be found in Table 2. Table 2: Child and family demographics by treatment model. Inclusion/Exclusion Criteria Children were enrolled in the study if they met the following criteria: (1) between 3 and 5 years of age at time of enrollment; (2) previous clinical diagnosis or educational label consistent with ASD or developmental delay; (3) met diagnostic criteria on the Autism Diagnostic Observation Schedule (ADOS; Lord et al. 1999) and/or Social Communication Questionnaire (SCQ; Rutter et al. 2003); (4) had not been previously exposed to the comparison CTM (for example, a child enrolled in a TEACCH classroom could not have been previously enrolled in a LEAP classroom); and (5) had to have a minimum of 6 months of exposure to the treatment or control condition.
Children with significant uncorrected vision or hearing impairment, uncontrolled seizure disorder or traumatic brain injury were excluded from the study. Finally, families must have been proficient enough in English to complete the parent rating scales. Recruitment/Retention In general, teachers were first recruited into the study through local school administrative contacts. Administrators were asked to identify high-quality LEAP, TEACCH and NMS programs, and the observational screening protocol described above for documenting classroom fidelity and quality was used to confirm group assignment. The model purveyors and their staff were enlisted to identify high fidelity classrooms implementing TEACCH and LEAP; however, these classrooms also had to meet the study screening inclusion criteria. Finally, all NMS classrooms were recruited from within the same school district as the TEACCH and LEAP classrooms, and these classrooms were a mixture of inclusive and self-contained classrooms to minimize potential confounds. Following teacher recruitment, teachers or project staff sent consent forms home to eligible families. Teachers were paid a total of $500 for their study participation, and families were paid a total of $200. Data Collection Research staff conducted the majority of direct child assessments at children’s schools. On some occasions, direct child assessments were conducted in clinic or home settings to collect additional measures on children. For parent report data, we mailed assessment packets to parents and conducted follow-up visits in the home. Teacher report data were collected by dropping off and picking up assessment packets at the child’s school. Data collection occurred at the beginning (N = 198) and end of the school year (N = 185), at least 6 months apart. Across the three data sources (parent, teacher and child), we collected all data on an individual child within a 6-week time window at the pre- and posttest time points. Procedures for Collection of Fidelity Data Trained and reliable research staff collected data on the fidelity of implementation of TEACCH and LEAP practices, as well as overall classroom quality (i.e., the PDA Program Assessment), four times across the school year. Training and reliability procedures as well as psychometric data on the measures are detailed in Hume et al. (2011). Two of the four visits included a reliability observer. Mean inter-rater reliability was 98 % for the TEACCH measure in TEACCH classrooms (range 88–98 %), 95 % for the LEAP measure in LEAP classrooms (range 88–100 %), and 88 % for the PDA Quality measure in NMS schools (range 87–99 %). To ensure consistency across sites, all fidelity observations occurred during the first 3 h of the classroom day. During each classroom visit and across classroom types, the two model-specific fidelity measures as well as the quality measure were completed (e.g., in a TEACCH classroom, both the TEACCH and LEAP fidelity measures were scored in addition to the PDA quality measure). This was done to assess the degree of overlap between the classroom types. All measures were scored on a 1 (no/minimal implementation) to 5 (full implementation) scale. Based on the fidelity measures, the classrooms maintained fidelity to their respective models over time, as evidenced by the small standard deviations; however, there was also overlap across models (see Table 3 for results).
Table 3: Mean fidelity scores and quality rating scores by treatment model. Supplemental Classroom Practices Teachers completed the Classroom Practice Inventory (CPI; TEACCH-LEAP project team 2007) at pretest and posttest in order to self-report the type and frequency of supplemental teaching strategies used to educate children with ASD in their classrooms. Preliminary psychometric data on the CPI indicate it is a reliable tool (ICC = 0.80; α = 0.77) that identifies commonly used classroom practices (e.g., discrete trial training, pivotal response treatment, behavioral supports). The measure is scored on a 0–4 scale, with higher scores indicating more frequent use of the practice. Descriptive data from the CPI indicated that there was similar use of some instructional strategies across model types (e.g., mean visual support scores were: NMS = 3.82, LEAP = 3.91, and TEACCH = 3.71), while other practices were more associated with specific models (e.g., mean peer-mediated instruction scores were: NMS = 1.00, LEAP = 3.18, and TEACCH = 1.17). See Table 4 for results. Table 4: Classroom Practice Inventory (CPI) scores by treatment model. Measures The following measures were collected on children as part of their study participation: Autism Diagnostic Observation Schedule (Lord et al. 1999); Childhood Autism Rating Scale (Schopler et al. 1988); Leiter International Performance Scale-Revised (Roid and Miller 1997); Mullen Scales of Early Learning (Mullen 1995); Pictorial Infant Communication Scales (Delgado et al. 2001); Preschool Language Scales, 4th Edition (Zimmerman et al. 2002); SCQ (Rutter et al. 2003); Social Responsiveness Scale (Constantino 2002); Repetitive Behavior Scales-Revised (Bodfish et al. 1999); and Vineland Adaptive Behavior Scales, Survey Edition (Sparrow et al. 1984). Study measures included as outcomes or moderators were selected, in part, based on prior research on children with ASD (e.g., Farmer et al. 2012; Reichow and Wolery 2009) and research on the CTMs under study (e.g., Strain and Bovey 2011; Tsang et al. 2007). Descriptions of the measures and their psychometric properties can be found in supplementary material Appendix A. Children’s scores at baseline and posttest on these individual measures can be found in supplementary material Appendix B. However, baseline group differences are only reported for the composite variables (described below) because these variables served as the actual outcomes for this study. Results Derivation and Empirical Validation of Composite Variables This study involved the measurement of a large number of cognitive, behavioral, psychological, and social variables, many of which could have been responsive to treatment and could therefore have served as outcome variables. However, we chose to construct composite variables for the following reasons: (a) fitting models to a large number of correlated outcome variables would have been problematic due to the difficulty of conceptualizing patterns of results across outcomes; (b) measurement error in the outcome variables reduces power; and (c) estimating a large number of parameters across the models would have inflated the family-wise error rate. In order to create the composite variables, we first collected the complete set of possible outcome variables (listed above under Measures).
Variables were placed into prospective composite variables based on the theoretical constructs measured as well as an examination of simple correlation coefficients. In the next step, we used exploratory factor analysis (EFA) to determine whether the conceptual groupings from the first stage were supported by the data. We chose an oblique rotation for this step. Based on the examination of factor loadings and scree plots, we revised the conceptual groupings. For example, we originally conceptualized separate expressive and receptive communication factors, but the EFA results strongly suggested that they should be combined into a single communication factor. In the last step, we used confirmatory factor analysis (CFA) to provide a final check of the construct validity of the composite variables and to estimate factor scores to save and use for further analysis. The following seven composite variables were derived from the CFA: (a) Autism Characteristics and Severity (ACS), (b) Communication (Comm), (c) Sensory and Repetitive Behaviors, parent report (SRB-P), (d) Sensory and Repetitive Behaviors, teacher report (SRB-T), (e) Reciprocal Social Interaction, parent report (RSI-P), (f) Reciprocal Social Interaction, teacher report (RSI-T), and (g) Fine Motor (FM). The SRB and RSI composites were originally conceptualized as single factors, but examination of the bivariate correlations between indicators provided evidence that the teacher- and parent-rated items were not highly correlated with one another. Therefore, those two constructs were divided into teacher- and parent-rated versions. Through the use of full-information maximum likelihood estimation (Finkbeiner 1979), factor scores could be estimated even in the presence of one or more missing indicator variables, which also increases power by avoiding unnecessary listwise deletion. Table 5 provides the definitions and model fit statistics for the seven composite variables. Table 5: Composite variable CFA results. The CFA models for all of the composites except the ACS and FM composites adjusted for the repeated measures structure of the data. The ACS and FM models, being based on three and two indicators, respectively, had too few degrees of freedom to enable the adjustment for clustering. However, the failure to adjust did not impact the factor scores generated by the model for later analysis. Table 6 provides descriptive statistics for the composite variables by group and time point. Table 6: Descriptive statistics for the composite variables by time point and model. One-way ANOVAs tested for baseline differences on the composites. The ACS (p = .0013), COMM (p < .001), SRB-T (p = .0417), RSI-P (p = .0241), and FM (p = .0066) composites exhibited statistically significant baseline differences. Covariates were added to the model to address baseline differences on the composite variables (described below). Data Analysis The primary data analytic approach involved the estimation of gain score models, in which the outcome was computed as the composite variable score measured at posttest (Time 2) minus the composite variable score measured at pretest (Time 1). The gain score model was fit using multilevel or mixed linear models, which adjusted for the clustering of students within classrooms. The models were fit using PROC MIXED in SAS version 9.2.
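The models were fit in SAS; purely as an illustration, an equivalent gain score multilevel model could be sketched in R with lme4 (the data frame `dat`, the composite gain score and the variable names are hypothetical):

```r
library(lme4)

# Gain score model: composite change score (posttest minus pretest) as the
# outcome, with a random intercept for classroom to adjust for the
# clustering of students within classrooms. 'teacch' and 'leap' are dummy
# indicators; NMS is the reference group.
fit <- lmer(gain_acs ~ teacch + leap + (1 | classroom), data = dat)
summary(fit)
```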
The multilevel model for parent-rated reciprocal social interaction failed to converge, most likely due to a lack of variability at the classroom level. This model was fit using ordinary least squares regression. The advantages of gain score models were as follows: first, they provide more powerful hypothesis tests than some alternative models due to the purging of between-subjects variance from the outcome; second, they provide some protection against misspecification bias by relieving the analyst of the necessity of correctly specifying the functional form of change over time (e.g., linear vs. non-linear relationships) and the main effects of covariates; and third, they are often simpler to interpret than a repeated-measures model, because the model only considers change over time. Further, in gain score models, pre-treatment differences on outcomes are removed via the computation of the gain score; the gain score model therefore completely separates baseline differences from changes over time. Of the three major alternative analytic models for pretest–posttest designs (ANCOVA, repeated-measures models, gain score models), only the latter two produce unbiased estimates when there are baseline differences. The repeated-measures model is capable of estimating relationships between covariates and the baseline outcome as well as change over time, but the tradeoff is increased model complexity. Since none of our research questions involved the baseline time point, the gain score model was a compelling alternative to the repeated-measures approach. We did rerun the analysis using the repeated-measures approach, which required three-level hierarchical linear models, as a sensitivity check to make sure that our results were not overly dependent on the gain score model specification. The results of the repeated-measures models did not differ from the results of the gain score models. A second analysis examined within-model moderators of treatment effect. These models sought to determine the most efficacious treatment model for children with differing covariate patterns. In these models, key covariates were interacted with treatment status and entered into the model, thus allowing estimated treatment effects to vary by the covariate-to-treatment model match. In both cases, missing data were addressed prior to analysis via the application of multiple imputation (MI; Little and Rubin 2002). The imputation model included all the composite variable outcomes and main effects for all predictor variables. In the gain score models, the outcomes included in the imputation model were the gain scores from T1 to T2. Some of the predictor variables were dummy-coded categorical variables, such as race, gender, and education status. Main Effect Model Results for Pretest to Posttest Change Results for the main effect models are presented in Table 7. The models were fit to the seven composite variable outcomes described earlier. To assist in addressing pre-treatment differences on the composite variables, covariates for the models were parent education, teacher education, gender, and child race (all dummy coded), as well as years of teaching experience, parent stress index total score, Mullen standard score, ADOS severity score, PLS4 score, Leiter BRIEF IQ score, and duration of the school day (half-day vs. full-day program).
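Continuing the hypothetical lme4 sketch above, the main effect model adds the listed covariates, and the moderation models interact key covariates with the treatment indicators (again, all variable names are illustrative):

```r
# Main effect model with dummy-coded and grand-mean-centered covariates.
fit_main <- lmer(gain_acs ~ teacch + leap + mullen_c + ados_c + pls4_c +
                   leiter_c + psi_c + half_day + (1 | classroom), data = dat)

# Moderation model: covariate-by-treatment interactions, e.g. Mullen,
# PLS4 and gender interacted with the treatment model indicators.
fit_mod <- lmer(gain_acs ~ (teacch + leap) * (mullen_c + pls4_c + female) +
                  (1 | classroom), data = dat)
```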
It must be reiterated that the use of gain scores also allows us to address baseline group differences, because differencing children’s pretest and posttest scores adjusts for their initial status. All continuous variables were measured at pretest and were grand-mean centered prior to analysis. Therefore, the model intercepts represent the average change from T1 (pretest) to T2 (posttest) for a “reference” child (i.e., all covariates set to zero) in an NMS classroom. The key variables of interest were dummy variables representing the TEACCH and LEAP treatment models. The NMS model served as the reference group. Post-hoc analyses compared TEACCH to LEAP, TEACCH to NMS, and LEAP to NMS, after adjusting for covariates. Because the latent variable outcomes were standardized to have standard deviations of one, the intercept coefficient as well as the estimates for the pairwise comparisons between models may be interpreted as Cohen’s d-like effect sizes. For example, an intercept of 0.20 would indicate that the latent outcome increased by approximately 1/5 of a standard deviation from the Time 1 assessment to the Time 2 assessment in NMS (reference) classrooms. A TEACCH versus NMS estimate of 0.5 would indicate that students in TEACCH classrooms experienced an additional average increase of one half of a standard deviation on the latent outcome beyond the change experienced by students in NMS classrooms. Table 7: Gain score model results (Time 1 to Time 2). Primarily, a significant within-group effect was found across models for the average change over time variable (see Table 7). The TEACCH group demonstrated significant change over time for 5 of 7 composite variables, with the exceptions of teacher- and parent-report of sensory and repetitive behavior (SRB-T and SRB-P, respectively). The LEAP group demonstrated significant change across time on 4 of 7 composites, with the exceptions being SRB-T, SRB-P and parent report of child reciprocal social interaction (RSI-P). Finally, the NMS group showed significant change for the following three composite outcome variables: autism severity (ACS), communication (COMM), and fine motor (FM). No statistically significant differences were found between models. The effects of the covariates were not interpreted because they were only included to reduce confounding bias and increase power. Moderation Model Results Results for the moderation models are presented in Table 8. The models included the same covariates from the main effect models, except that five covariates were specified as within-treatment model moderators. These covariates were gender, Mullen pretest score, ADOS severity score at pretest, PLS4 score at pretest and parent stress index score measured at pretest. Three significant moderators were discovered. First, a statistically significant Mullen by TEACCH interaction was identified. For the ACS composite, the positive coefficient for this parameter indicated that children with higher Mullen scores at pretest showed smaller reductions in autism severity from T1 to T2.
As the Mullen is normed to have a standard deviation of 15, a child with a pretest Mullen score one standard deviation above the sample mean (a pretest Mullen score of approximately 79.10) would be expected to experience a reduction in autism severity from T1 to T2 of −0.100 standard deviations if placed in a TEACCH classroom (calculated by adding the intercept of −0.208, plus the TEACCH main effect of −0.147, plus 15 times the Mullen main effect of −0.010, plus 15 times the Mullen × TEACCH interaction of 0.027). Conversely, a child with a pretest Mullen score one standard deviation below the sample mean (a pretest Mullen score of approximately 49.10) would be expected to experience a comparatively large reduction in autism severity of −0.610 standard deviations. Table 8: Moderation analysis (Time 1 to Time 2) model results. A PLS pretest by TEACCH interaction was identified as well. The coefficient of −0.019 indicates that students with high PLS scores would be expected to experience a larger reduction in autism symptoms (ACS composite) from T1 to T2 than would be expected for students with low PLS scores. A PLS by TEACCH interaction was also discovered for the Communication composite, whose positive coefficient of 0.014 indicates that children with high PLS scores in TEACCH classrooms would be expected to exhibit larger gains in communication than would be expected for children with low PLS scores. Finally, a female by LEAP interaction was identified for the Communication outcome. Female students in LEAP classrooms exhibited a smaller gain in communication of 0.057 standard deviations from T1 to T2, whereas males in LEAP classrooms had a much larger gain of 0.233 standard deviations on the communication composite. However, this interaction effect is difficult to interpret because of the relatively low number of females from LEAP classrooms enrolled in the study (12 of 54 children). Discussion This is the first study of which we are aware with the express purpose of comparing school-based CTMs for children with ASD. We found that children made gains, including reductions in autism characteristics, across time irrespective of program type. We did not find change across time for the parent- or teacher-reported sensory and repetitive behavior composites for any of the models; however, this may reflect that these behaviors are by their nature resistant to change and that there are few evidence-based interventions specifically designed to target these characteristics of autism (Bodfish 2004; Boyd et al. 2012). It may be somewhat surprising that significant change across time was not found in LEAP for parent report of social interaction, given that the primary method of instruction is peer-mediated instructional strategies. This finding may reflect the higher expectations of these parents for their children’s social skills, as they observe on a day-to-day basis how the social performance of their children compares to that of typically developing peers; yet, we did find change for teacher report of children’s social skills. In addition, we found that children’s pretest Mullen and PLS scores moderated the effects of TEACCH on children’s autism severity, with children with lower Mullen but higher PLS scores at pretest having better outcomes on this composite. Higher PLS scores also moderated the effects of TEACCH on children’s communication outcomes. It is consistent with other treatment research studies that children with higher baseline scores tend to have better outcomes (Sallows and Graupner 2005).
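Restating the worked Mullen example above in equation form (this is only a restatement of the source's arithmetic, with x denoting the pretest Mullen score expressed as a deviation from the grand mean): $$\hat{\Delta}_{\mathrm{ACS}} = -0.208 - 0.147 - 0.010\,x + 0.027\,x = -0.355 + 0.017\,x$$ so that x = +15 gives a predicted change of −0.355 + 0.255 = −0.100 standard deviations, and x = −15 gives −0.355 − 0.255 = −0.610.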
However, the Mullen finding is of interest because it suggests that children enrolled in TEACCH classrooms with lower versus higher cognitive ability showed more improvement in autism severity. This finding could be attributable to children with lower cognitive abilities likely having more severe symptoms of autism and thus more room for improvement; or it may suggest that some of the environmental and behavioral supports used in TEACCH are more beneficial to children with greater cognitive impairments. Gender appeared to moderate the communication outcomes of children in LEAP classrooms, with girls showing less improvement on this composite variable; however, interpretation of this outcome must be approached with caution because of the relatively low number of females in the study. Still, the overall findings, which demonstrate change across time and no model differences, may reflect the importance of general programmatic quality in promoting the positive development of children with ASD. The early childhood literature is replete with studies demonstrating that classroom quality is an important predictor of typically developing children’s social, language and academic outcomes (Burchinal et al. 2000; Mashburn et al. 2008; Pianta et al. 2002; Rimm-Kaufman et al. 2005). This concept has not often been measured or considered in autism intervention research, possibly because there have been fewer studies focused on school-based CTMs. It stands to reason that teachers may be better able to implement evidence-based practices, including specific CTMs, when the classroom has a certain level of foundational quality. Our quality measure included such domains as classroom organization, positive instructional climate, collaborative interdisciplinary teaming, and family involvement, and these as well as other areas may be essential building blocks upon which other practices can then be successfully layered. However, we cannot draw firm conclusions without a “lower” quality comparison condition. It is also important to restate that studies have found positive effects for school-based implementation of TEACCH (Tsang et al. 2007) and LEAP (Strain and Bovey 2011). In particular, the Strain and Bovey study was a large-scale, well-designed evaluation of LEAP that demonstrated its superiority to a control condition. The difference between their study and the current one is that their comparison condition consisted of teachers having access to the LEAP treatment manual and training materials (i.e., a low-fidelity LEAP), whereas our comparison conditions comprised another active treatment (TEACCH) and high quality control classrooms (NMS). In treatment efficacy research, the control condition plays a large role in detecting between-group differences (Mohr et al. 2009). Thus, it is possible that our NMS “control” group really acted as another active treatment because all the classrooms were high quality. The study may not have been sufficiently powered to detect differences between three active treatment conditions. The dissimilar findings between the current study and prior studies of TEACCH and LEAP also may reflect the difficulty of sustaining intervention effects when the model developer is not directly involved in the research.
For instance, Reichow and Wolery ( 2009 ) found that the effects of the Lovaas Young Autism Project on child IQ were moderated by training personnel, with larger changes in IQ found if the study personnel had been trained in the UCLA model, which is the origin of the Lovaas approach. While not involving the model developer may help to reduce researcher bias, it could also be that there are intangible aspects of the intervention that are more easily conveyed when the purveyor is directly involved. However, our fidelity measures indicated that TEACCH and LEAP teachers strongly adhered to the components of the model that could be observed, and did so consistently across the school year (Coman et al. 2013 ). This concept of model developers not being involved in the research also moves the current study into the area of effectiveness versus efficacy research. Within the RCT literature, the term "pragmatic randomized trial" is used to describe studies that combine elements of both efficacy and effectiveness trials (Marchand et al. 2011 ; Zwarenstein and Treweek 2009 ). This study included elements of efficacy trials in that it purposefully enrolled high-quality classrooms to minimize pretest differences, and provided booster training to LEAP and TEACCH teachers to increase and/or maintain their fidelity of implementation. Having to combine elements of both study designs, because LEAP and TEACCH are already widely used in practice, may have affected study outcomes. However, pragmatic randomized trials also contribute to generalizability because they often involve community-based intervention agents. The study findings also could reflect limitations with the use of the fidelity measures to screen in/select classrooms (e.g., they may not have fully captured the active ingredients of the model that best reflect adherence), or the outcome measures used may not have been the most sensitive to detect treatment differences. Finally, the study findings may simply reflect that high-quality teachers are aware of and use similar practices to educate children with autism. In fact, this crossover of classroom practices was made evident through our fidelity/quality data as well as CPI data. While a likely limitation, we believe this overlap reflects the real-world heterogeneity found when studying classroom-based practices; i.e., school-based practitioners are trying out and using a number of strategies to educate children with ASD. It should be pointed out that although the CPI data indicated that teachers self-reported the use of a variety of supplemental practices (e.g., pivotal response training), we did not collect observational data to confirm those self-reports or measure the quality of implementation of those ancillary practices. There are also obvious limitations with the use of quasi-experimental designs, which could account for study outcomes. First, we cannot rule out that the findings are the result of sheer developmental maturation or selection bias issues. For example, we had to use raw versus standard scores for some measures (e.g., CARS), which may make these measures more susceptible to the effects of developmental maturation. However, the difficulty of randomization when studying existing school-based CTMs necessitated the use of a quasi-experimental study, and we put appropriate protocols in place, such as screening classrooms for quality and monitoring fidelity of implementation across time, to ensure a rigorous study design.
Further, selection bias is likely an issue with most school-based studies because one is often dependent upon school officials to nominate classrooms/teachers to participate in the study, and it could be that officials self-select higher quality programs. Second, there were pre-treatment differences between groups; however, the use of a gain score analysis approach allowed us to address these initial differences. Third, assessors were not blind to children's group assignment. It would have been difficult to maintain assessor blindness because the majority of assessments were conducted in children's schools and assessors could see the stark contrasts in the physical layout as well as the types of children in TEACCH versus LEAP classrooms (all children with ASD versus more typically developing children). We attempted to counter this bias by not analyzing any child outcome data until all data collection had been completed. Finally, the NMS classrooms in our study may not reflect the real-world heterogeneity found in actual practice. These NMS classrooms may represent the "best" of standard practice and may differ substantially from the modal level of quality that reflects "business-as-usual" classroom practices. Implications and Future Research All three programs were found to produce statistically significant changes in children's outcomes across the school year. This finding may shift the field's thinking around CTMs designed for students with ASD. Perhaps it is not the unique features of the models that most contribute to child gains; instead it is the common features of the models that most influence child growth. In other fields, such as clinical and counseling psychology, the concept of "common factors" is well used and understood when describing therapeutic interventions for clients. These are intervention components that should apply to all clients and should be in place for effective practice to occur (e.g., a trusting relationship; Deegear and Lawson 2003 ; Luborsky 1995 ). The common factor theory may well apply to interventions for children with ASD (Odom et al. 2012 ). Early analysis of the overlap of scores on the fidelity measures indicates that perhaps those components common to the intervention approaches (e.g., classroom organization, teacher interaction with students and families) account for outcomes more than components that are unique to each approach (e.g., peer-mediated instruction in LEAP classrooms, structured work systems in TEACCH classrooms). A more complex analysis of the fidelity and quality measures and outcomes will allow for further exploration of this common factor theory and its application to CTMs for young children with ASD. Examining the treatment effects of CTMs on child-level outcome variables is important; however, when selecting educational models for implementation, caregivers, teachers, and school administrators must evaluate multiple features of the model and its impact on multiple variables. These include outcomes for caregivers (e.g., mental health, stress) whose children are enrolled in specific school-based programs, their impact on teachers (e.g., teacher burnout), and the financial costs to the district to adopt and implement a specific model. For instance, by design, LEAP classrooms are operated as half-day programs, and if children are able to receive similar outcomes in half-day as in full-day programs, then the cost-benefits may factor into school administrators' decisions to select a particular CTM.
Because there were no differential effects for child outcomes, further exploration of the effects of these classroom programs on these and other variables is an important next step. With these supplemental analyses, in addition to further exploration of treatment moderators, stakeholders will be better informed when selecting and implementing CTMs in public school settings.
Researchers at the University of North Carolina at Chapel Hill have found that preschoolers with Autism Spectrum Disorder (ASD) who receive high-quality early intervention benefit developmentally regardless of the treatment model used—a surprising result that may have important implications for special-education programs and school classrooms across the country. "This is the first study designed to compare long-standing comprehensive treatment models for young children with ASD," said Brian Boyd, a fellow at UNC's Frank Porter Graham Child Development Institute (FPG) and one of the study's co-principal investigators. Boyd also is an assistant professor in occupational science and occupational therapy in UNC's School of Medicine. "We know that more children are being diagnosed with ASD each year, and that it can cost an estimated $3.2 million to treat each child over a lifetime. Understanding that a child can benefit from a high-quality program, rather than a specialized program, may help reduce those costs by decreasing the need for teachers and other school practitioners to be trained to deliver multiple specialized services," Boyd said. He stressed it remains important to ensure educators are trained to provide high-quality programs that meet the special behavioral, communication and other needs of children with ASD. Previous research has shown that when children with ASD have access to early intervention via treatment programs, they improve developmentally. Until now, however, debate has persisted over which approach to use, said Boyd. The study appeared in the June issue of the Journal of Autism and Developmental Disorders. Two frequently used comprehensive treatment models have a long history: LEAP (Learning Experiences and Alternative Program for Preschoolers and their Parents) and TEACCH (now known only by its acronym). FPG's study examined the relative effects of the LEAP and TEACCH school-based comprehensive treatment models when compared to each other and to special-education programs that do not use a specific model. The multisite study took place only in high-quality classrooms and enrolled 74 teachers and 198 3- to 5-year-olds in public school districts. The study found that children made gains over the school year regardless of the classroom's use of LEAP, TEACCH or no specific comprehensive treatment model. "Each group of children showed significant positive change in autism severity, communication and fine-motor skills," said Kara Hume, FPG scientist and co-author. "No statistically significant differences were found among models, which challenged our initial expectations—and likely the field's." "This study may shift the field's thinking about comprehensive treatment models designed for young children with ASD," said co-author Samuel L. Odom, FPG's director and the study's principal investigator. "Perhaps it's not the unique features of the models that most contribute to child gains but the common features of the models that most influence child growth."
link.springer.com/article/10.1007/s10803-013-1877-9
Physics
Transistor made from vanadium dioxide could function as smart window for blocking infrared light
Nakano, M., Shibuya, K., Okuyama, D., Hatano, T., Ono, S., Kawasaki, M., Iwasa, Y. & Tokura, Y. Collective bulk carrier delocalization driven by electrostatic surface charge accumulation. Nature 487, 459–462 (2012). http://www.nature.com/nature/journal/v487/n7408/abs/nature11296.html Journal information: Nature
http://www.nature.com/nature/journal/v487/n7408/abs/nature11296.html
https://phys.org/news/2013-02-transistor-vanadium-dioxide-function-smart.html
Abstract In the classic transistor, the number of electric charge carriers—and thus the electrical conductivity—is precisely controlled by external voltage, providing electrical switching capability. This simple but powerful feature is essential for information processing technology, and also provides a platform for fundamental physics research 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 . As the number of charges essentially determines the electronic phase of a condensed-matter system, transistor operation enables reversible and isothermal changes in the system’s state, as successfully demonstrated in electric-field-induced ferromagnetism 2 , 3 , 4 and superconductivity 5 , 6 , 7 , 8 , 9 , 10 . However, this effect of the electric field is limited to a channel thickness of nanometres or less, owing to the presence of Thomas–Fermi screening. Here we show that this conventional picture does not apply to a class of materials characterized by inherent collective interactions between electrons and the crystal lattice. We prepared metal–insulator–semiconductor field-effect transistors based on vanadium dioxide—a strongly correlated material with a thermally driven, first-order metal–insulator transition well above room temperature 17 , 18 , 19 , 20 , 21 , 22 , 23 —and found that electrostatic charging at a surface drives all the previously localized charge carriers in the bulk material into motion, leading to the emergence of a three-dimensional metallic ground state. This non-local switching of the electronic state is achieved by applying a voltage of only about one volt. In a voltage-sweep measurement, the first-order nature of the metal–insulator transition provides a non-volatile memory effect, which is operable at room temperature. Our results demonstrate a conceptually new field-effect device, extending the concept of electric-field control to macroscopic phase control. Main Metal–insulator–semiconductor field-effect transistors (MISFETs) work by using the electrostatic charging effect of a capacitor: the number of charges at the topmost surface of a channel material is linearly and reversibly tuned by an external electric field, giving rise to isothermal electrical switching functions. In contrast, a MISFET based on a strongly correlated material (a ‘Mott transistor’) could have a different operation mechanism 11 , in which a small number of electrostatically doped carriers drive all pre-existing localized electrons in the material to be mobile. This occurs by the reduction of the effective Coulomb repulsion energy, as has recently been demonstrated in an FET based on an organic Mott insulator 12 . Several attempts have been made at Mott transistor operation, using correlated oxides such as perovskite manganite 13 , 14 or nickelate 15 , 16 ; yet none has succeeded in inducing a metallic ground state by an electric field. In this study, we focus on a classical simple oxide, VO2. A striking feature of this compound arises from its first-order metal–insulator transition (MIT) 17 , 18 , 19 , 20 ; above the transition temperature (T_MI), the system behaves as a half-filled metal with a 3d^1 (S = 1/2) state, but below T_MI, the system adopts the insulating ground state, at which the resistance changes abruptly by several orders of magnitude.
The microscopic origin of this transition is the dimerization of V^4+ ions along the c-axis direction, and the resulting lattice transformation from the high-temperature tetragonal (rutile) phase to the low-temperature monoclinic phase, where 3d electrons are localized on V sites to form the spin singlet state ( Fig. 1a ). The driving mechanism of the MIT is still debated, but both strong electron correlation (Mott–Hubbard transition) and electron–lattice coupling (Peierls transition) are generally thought to be important 21 , 22 , 23 . Figure 1: The first-order metal–insulator transition in VO2. a, Schematic drawings of the thermally driven first-order metal–insulator transition (MIT) in VO2. b, Temperature dependence of the resistivity of strained 10-nm and relaxed 70-nm VO2 films grown on TiO2 substrates. c, Schematic of an electric-double-layer transistor (EDLT) based on VO2, potentially enabling electrical switching of the MIT between the metallic tetragonal phase and the insulating monoclinic phase. The T_MI of VO2, in bulk (or relaxed thin-film) form, is well above room temperature (T ∼ 340 K), and can be varied across a wide range by electron-doping 24 , or by epitaxial strain in thin films 25 . Figure 1b shows the typical temperature dependence of the resistivity (ρ_xx) of relaxed and strained VO2 films, with a clear thermal hysteresis originating from the first-order nature of the transition. The T_MI is high enough to be of practical use; accordingly, many efforts have been devoted to external control of the MIT in VO2 using, for example, photon irradiation 26 , 27 or current excitation 28 . The use of a purely electrostatic effect in an FET geometry has also been examined, owing to its great importance for practical applications, but so far, only small changes in the resistance and/or the T_MI have been achieved 28 , 29 , 30 , possibly because of the insufficient electric field available with conventional solid dielectrics. To maximize the electron density attainable by applying a static electric field, we have used a recently developed electric-double-layer transistor (EDLT) technique involving an organic ionic liquid, which enables us to tune the surface charge density up to 10^15 cm^−2 (refs 3 , 8–10 , 15 , 16 ). We fabricated micro-patterned EDLTs with c-axis-oriented VO2 epitaxial thin films 24 in a side-gate configuration (see Methods Summary and Supplementary Information section A for details). Figure 1c shows a conceptual schematic of the device operation based on a VO2-EDLT, in which electronic and structural phase transitions are expected to occur simultaneously owing to collective electron–lattice interactions, as demonstrated in femtosecond pump–probe experiments 26 , 27 . To see the effect of the electric field on the transport properties of VO2, we first examined this effect on a 10-nm-thick, strained VO2 film. Figure 2a shows the temperature dependence of the four-terminal normalized sheet resistance (R_s) at different gate voltages (V_G), which were applied in the metallic state above T_MI. Below a threshold V_G of ∼0.3 V, T_MI remains almost unchanged, whereas R_s decreases slightly with increasing V_G at all temperatures, showing normal n-type FET behaviour.
In this regime, the electric-field effect is limited to the topmost surface of VO2 by the Thomas–Fermi screening effect, as in conventional MISFETs; hence, T_MI cannot be controlled by the electric field because it is governed by the unaffected, bulk part of the film. For V_G > 0.3 V, by contrast, T_MI decreases dramatically with applied voltage, indicating that the bulk region has been induced to enter the metallic state. At V_G > 0.7 V, the MIT vanishes, with the emergence of a metallic ground state. The critical V_G necessary to induce the metallic state is less than 1 V, which is very low by comparison with other EDLT-related studies 3 , 8 , 9 , 10 , 15 , 16 . The shift of T_MI is summarized in an electronic phase diagram ( Fig. 2a inset). The observed effect is highly reversible and reproducible (see Supplementary Information section B for details). A negative V_G up to −1 V did not affect the transport properties, suggesting that the shift of T_MI and the following emergence of the metallic ground state are not driven by the electric field itself (electrostriction effect), but by electrostatic charge accumulation at the surface of the VO2 channel. Also, the temperature dependence of R_s at V_G = 0 V showed a curve exactly identical to that without an ionic liquid, indicating that any strain effect from freezing of the organic ionic liquid used in our experiments is negligible. Figure 2: Effect of electric field on the transport properties of a 10-nm-thick, strained VO2 film. a, Temperature dependence of the sheet resistance (R_s) for a 10-nm strained VO2 film with different gate voltages (V_G). Inset shows the resulting phase diagram. The transition temperature (T_MI) is defined as the average of the two inflection points (for cooling and warming, respectively) in plots of d[ln(R_s)]/d(1/T) versus temperature 24 . b, V_G dependence of R_s, measured at T = 260 K. Sweep rate, 15 mV s^−1. c, Schematic energy diagrams of VO2, showing a double-minimum potential as a function of the normal coordinate. The V_G sweep measurement shown in Fig. 2b revealed new behaviours that cannot be achieved in conventional MISFETs. Starting from an insulating (high-R) state at V_G = 0, R_s decreased nonlinearly by a few orders of magnitude above a threshold V_G of ∼1.5 V, clearly showing behaviour characteristic of a Mott transistor 11 , 12 . In addition, the electric-field-induced metallic (low-R) state survived even when V_G was reset to zero, providing a non-volatile memory effect. As V_G was swept to negative values, R_s increased again, and returned to its initial values at V_G = 0, tracing a clear hysteresis loop. Owing to the first-order nature of the MIT, the electronic phase of VO2 should have a double-minimum potential as a function of the normal coordinate, as schematically illustrated in Fig. 2c . Near T_MI, the metallic tetragonal phase and the insulating monoclinic phase are nearly degenerate and hence bistable, giving rise to a thermal hysteresis loop as seen in Fig. 1b . Scanning V_G at fixed temperature corresponds to an isothermal change in potential of one phase with respect to the other, which in principle results in the same effect as the temperature scan. The electric hysteresis loop shown in Fig. 2b thus demonstrates an intrinsic feature of electric-field control of the first-order MIT.
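The definition of T_MI used in Fig. 2a — the average of the inflection points of d[ln(R_s)]/d(1/T) on the cooling and warming branches — is straightforward to apply numerically. The following is a minimal sketch, assuming each branch is supplied as arrays of temperature and sheet resistance; taking the extremum of the derivative as the inflection criterion is our reading of the caption, not code from the authors:

import numpy as np

def branch_transition_temperature(T, Rs):
    """Estimate the transition temperature of one sweep branch as the
    temperature where d[ln(Rs)]/d(1/T) changes most sharply.
    T: temperatures in kelvin; Rs: sheet resistance; equal-length 1-D arrays."""
    x = 1.0 / np.asarray(T, dtype=float)
    y = np.log(np.asarray(Rs, dtype=float))
    dydx = np.gradient(y, x)              # numerical d[ln(Rs)]/d(1/T)
    return T[np.argmax(np.abs(dydx))]     # sharpest step marks the MIT

# T_MI is then reported as the average over the two branches:
# T_MI = 0.5 * (branch_transition_temperature(T_cool, Rs_cool)
#               + branch_transition_temperature(T_warm, Rs_warm))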
The huge resistance ratio (>100:1) between the two persistent states, and the low-voltage-switchable character, suggest excellent potential for practical applications with low power consumption. Additional signatures of electric-field control of the first-order MIT arise in the thickness dependence of the electric-field effect. Figure 3a shows R_s as a function of temperature for three samples with thicknesses of 10, 20 and 70 nm, for both initial (V_G = 0) and electrostatically gated states. Regardless of thickness, all of the films entered the metallic ground state at approximately the same critical V_G, of about 1 V. The sheet conductance (σ_s) of the films in the metallic state increases linearly with increasing thickness, as shown in Fig. 3b , yielding a constant conductivity of the electric-field-induced metallic state of 900 S cm^−1, which is of the same order as that of chemically doped metallic VO2 films 24 . In both electrostatically and chemically doped materials, the carrier density in the metallic state is confirmed to be >10^22 cm^−3 by Hall-effect measurements (see Supplementary Information section C for details). These results show that electrostatic surface charge accumulation can trigger carrier delocalization in a bulk film, irrespective of thickness, beyond the fundamental limitation of the Thomas–Fermi screening effect—distinguishing the present case from the surface-limited electric-field effect seen in conventional MISFETs. The small difference in the critical V_G originates not from the thickness, but from the lattice strain present in the initial (V_G = 0) samples; the thinner (10- and 20-nm) films are epitaxially strained, giving rise to a lower T_MI than in the relaxed (70-nm) film. The thinner films thus require a smaller V_G to induce the metallic ground state, although the difference is less than a factor of two. Figure 3: Emergence of the three-dimensional metallic ground state. a, R_s versus temperature in 10-, 20- and 70-nm films, showing both initial (V_G = 0) and electric-field-induced metallic states. b, Sheet conductance (σ_s) of the electric-field-induced metallic states at T = 50 K as a function of film thickness. Inset, schematic depiction of the strain situation in the samples. To illustrate the uniqueness of the present situation, we compare the electronic phase diagrams of electrostatically doped 10-, 20- and 70-nm VO2 films to that of chemically doped 40-nm films 24 . Figure 4 shows the relationship between T_MI and the sheet charge density (n_s) of the carriers that are anticipated to be ‘additionally’ introduced by the respective doping procedures. As mentioned above, the volume carrier density of the metallic states is of the same order (>10^22 cm^−3) in both cases. However, as shown in Fig. 4 , the values of n_s needed to induce the respective metallic states differ by more than two orders of magnitude. This suggests that, in VO2-EDLTs, a cascade of phase transitions occurs in a bulk region, indicating that a simple picture of electrostatic surface charge accumulation, based on a classical capacitor model, is no longer valid. Instead, it seems that surface charge accumulation is accompanied by a collective lattice deformation along the c-axis direction, and resultant delocalization of previously localized electrons in the bulk VO2 film, leading to a three-dimensional metallic ground state with high carrier density (‘proliferatively’ generated) throughout the film.
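The linear thickness scaling in Fig. 3b can be checked with a few lines of arithmetic: if the sheet conductance σ_s of the gated state is proportional to thickness, the slope of σ_s versus thickness is the three-dimensional conductivity. A small illustrative sketch follows; the numbers are constructed to be consistent with the reported ~900 S cm^−1, not measured data:

import numpy as np

# Thicknesses of the three films, converted to cm (10, 20, 70 nm).
t_cm = np.array([10e-7, 20e-7, 70e-7])

# Illustrative sheet conductances (S per square) consistent with a
# thickness-independent conductivity of ~900 S/cm, as reported above.
sigma_s = np.array([0.9e-3, 1.8e-3, 6.3e-3])

slope, offset = np.polyfit(t_cm, sigma_s, 1)
print(f"3D conductivity ~ {slope:.0f} S/cm")   # ~900 S/cm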
In the presence of strong electron correlation and electron–lattice coupling, it is energetically favourable for a system to remain in a single electronic and/or structural phase, at length scales less than a domain size, to minimize interface energy. In fact, a preliminary X-ray diffraction study probing a channel region of a VO2-EDLT ( Supplementary Information section D) did show the occurrence of a structural phase transition on gating, suggesting that the observed bulk phase transition is probably related to this kind of phase separation. Further detailed examinations, including thorough structural analysis, are needed to clarify the underlying microscopic mechanism. Figure 4: Electronic phase diagrams of electrostatically and chemically doped VO2 films. T_MI plotted against sheet charge density (n_s) for both VO2-EDLTs and 40-nm V1−xWxO2 films 24 . For the electrostatically doped films, n_s was calculated from the capacitor equation, Q = C_i V_G (where Q is the sheet charge and C_i is the areal capacitance), by assuming C_i = 10 μF cm^−2 (ref. 9 ). For the chemically doped films, n_s was calculated from doping level (x) and film thickness. Insets schematically illustrate the nature of the metallic state in each case, with red and yellow dots representing ‘additionally’ introduced carriers (that is, n_s) and ‘proliferatively’ generated carriers, respectively. (See text for details.) Our results thus provide a new route to controlling the state of matter beyond the fundamental limitation of the screening effect, opening the door to macroscopic phase control by the application of an external voltage. Collective carrier delocalization is one of the most distinctive features of strongly correlated materials, making them good candidates for use in such field-effect device applications. We anticipate that clarifying the mechanism and broadening the scope of relevant materials, based on the concept presented here, will lead to remarkable discoveries of novel electronic phases. For practical applications, we expect that the ability to control macroscopic electronic states by voltage will create useful new device functions—for example, remote transmission of electrical signals over macroscopic length scales or voltage-tunable optical switching. Methods Summary Thin-film growth VO2(001) epitaxial thin films with thicknesses of 10, 20 and 70 nm were grown on TiO2(001) single crystal substrates (Shinkosha Ltd) by pulsed laser deposition at 390 °C under oxygen pressure of 10 mtorr 24 . The films were then annealed at 300 °C for 30 min under 1 torr oxygen to fill oxygen vacancies introduced in the TiO2 substrate during deposition. From X-ray diffraction measurements, we confirmed that the c-axis-oriented epitaxial films were grown without any impurity phases, and that the 10- and 20-nm films were pseudomorphically strained on the substrates, whereas the 70-nm film was relaxed from epitaxial strain. Thicknesses of the films were evaluated from spacing of X-ray diffraction Laue fringes. Device fabrication All of the samples were patterned into a standard Hall-bar geometry, with a side gate fabricated next to a channel by photolithography and argon-ion etching. A device schematic and an optical micrograph of an EDLT are shown in Supplementary Fig. 1a and b, respectively. The dimensions of a channel were 30 μm (or 60 μm in some cases) in width and 520 μm in length.
Ti/Au electrodes were deposited by electron-beam evaporation, for both current/voltage probes and a gate electrode. A hard-baked photoresist was used as a separator to electrically isolate gate from channel. After these processes, the samples were annealed at 200 °C in air for several hours to fill oxygen vacancies created in the substrates during device fabrication. Both channel and gate areas were covered just before measurement by a droplet of an organic ionic liquid, N,N-diethyl-N-(2-methoxyethyl)-N-methylammonium bis(trifluoromethylsulphonyl)imide (DEME-TFSI), with a glass plate on top of the droplet.
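As a closing numerical check on the phase diagram of Fig. 4: the electrostatically added sheet density follows the capacitor estimate n_s = C_i V_G / e, which can be compared with the total carrier density actually delocalized in the film. The sketch below takes C_i = 10 μF cm^−2 (the assumption stated in the Fig. 4 caption); the rest is basic arithmetic, and the function name is ours:

E = 1.602e-19  # elementary charge, C

def electrostatic_sheet_density(C_i_uF_per_cm2, V_G):
    """Capacitor estimate n_s = C_i * V_G / e, in carriers per cm^2."""
    return C_i_uF_per_cm2 * 1e-6 * V_G / E

n_added = electrostatic_sheet_density(10.0, 1.0)   # ~6.2e13 cm^-2 at V_G = 1 V

# Carriers delocalized in a 70-nm film at n > 1e22 cm^-3:
n_bulk = 1e22 * 70e-7                               # = 7e16 cm^-2

print(f"added: {n_added:.1e} cm^-2, delocalized: {n_bulk:.1e} cm^-2")
# The electrostatically added charge is orders of magnitude smaller than the
# carriers set in motion -- the 'proliferatively' generated carriers above.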
The transistor is the ultimate on-off switch. When a voltage is applied to the surface of a semiconductor, current flows; when the voltage is reversed, current is blocked. Researchers have tried for decades to replicate these effects in transition metal oxides by using a voltage to convert the material from an insulator to a metal, but the induced change only occurs within a few atomic layers of the surface. Now, Masaki Nakano and colleagues at the RIKEN Advanced Science Institute in Wako have discovered that applying a voltage to a vanadium dioxide (VO2) film several tens of nanometers thick converts the entire film from an insulator to a metal. The findings point to the specific material properties needed to make such devices work. They may also lead to new types of 'smart' technology. The electronic properties of transition metal oxides can be tuned by changing their chemical composition or temperature. For example, VO2 is an insulator at room temperature, but heating it or replacing a small fraction of the vanadium atoms with tungsten (an electron donor) causes a phase transition in which the vanadium ions, which are paired up at low temperature, break apart into a different crystal structure in which electrons are mobile. In principle, applying a positive voltage to the surface of an insulating VO2 film can accomplish the same effect by drawing electrons to the surface, making this region metallic. Researchers have assumed that this charging effect would be limited to a few atomic layers just below the surface because the excess of electrons cancels out the applied electric field (an effect called screening). But Nakano and his colleagues found that the excess electrons were enough to 'trigger' the crystal structure change associated with metallic behavior (Fig. 1). "The surface lattice distortion propagates through the entire film, followed by an electronic phase transition inside the bulk region," he says. The voltage-induced transition decreases VO2's resistance by a factor of 100. The team is actively seeking other materials like VO2, as well as technological applications. One is a heat switch. Since temperature determines if VO2 is a metal or an insulator, it also determines the frequency of light the material absorbs. VO2-coated glass could therefore act as a 'smart window', passing or blocking infrared light depending on the temperature outside. "Normally, this switching temperature is fixed," says Nakano. "Our device adds electrical switching functionality to a smart window, which is very promising for energy-saving applications."
http://www.nature.com/nature/journal/v487/n7408/abs/nature11296.html
Biology
Aspects of Asian elephants' social lives are not related to amount of intestinal parasites
Carly L. Lynsdale et al, Investigating associations between nematode infection and three measures of sociality in Asian elephants, Behavioral Ecology and Sociobiology (2022). DOI: 10.1007/s00265-022-03192-8 Journal information: Behavioral Ecology and Sociobiology
https://dx.doi.org/10.1007/s00265-022-03192-8
https://phys.org/news/2022-06-aspects-asian-elephants-social-amount.html
Abstract Frequent social interactions, proximity to conspecifics, and group density are main drivers of infections and parasite transmissions. However, recent theoretical and empirical studies suggest that the health benefits of sociality and group living can outweigh the costs of infection and help social individuals fight infections or increase their infection-related tolerance level. Here, we combine the advantage of studying artificially created social work groups with different demographic compositions with free-range feeding and social behaviours in semi-captive Asian elephants ( Elephas maximus ), employed in timber logging in Myanmar. We examine the link between gastro-intestinal nematode load (strongyles and Strongyloides spp.), estimated by faecal egg counts, and three different aspects of an elephant’s social world: individual solitary behaviour, work group size, and work group sex ratio. Controlling for sex, age, origin, time since last deworming treatment, year, human sampler bias, and individual identity, we found that infection by nematodes ranged from 0 to 2720 eggs/g between and within 26 male and 45 female elephants over the 4-year study period. However, such variation was not linked to any investigated measures of sociality in either males or females. Our findings highlight the need for finer-scale studies, establishing how sociality is limited by, mitigates, or protects against infection in different ecological contexts, to fully understand the mechanisms underlying these pathways. Significance statement Being social involves not only benefits, such as improved health, but also costs, including increased risk of parasitism and infectious disease. We studied the relationship between three different sociality measures—solitary behaviour, group size, and the proportion of females to males within a group—and infection by gut nematodes (roundworms), using a unique study system of semi-captive working Asian elephants. Our system allows for observing how infection is linked to sociality measures across different social frameworks. We found that none of our social measures was associated with nematode infection in the studied elephants. Our results therefore suggest that here infection is not a large cost to group living, that it can be alleviated by the benefits of increased sociality, or that there are weak infection–sociality associations present which could not be captured and thus require finer-scale measures than those studied here. Overall, more studies are needed from a diverse range of systems that investigate specific aspects of social infection dynamics. Introduction In social species, group living and social behaviours can promote reproduction and survival through numerous pathways, including increased offspring survival, increased access to potential mates and resources, protection from predation, and increased health via social support (reviewed in Cantor et al. 2021 ). However, the same mechanisms that offer these benefits—frequent social interactions, close proximity to conspecifics, and group density—also present costs, such as increased competition and conflict (Alexander 1974 ; Krause and Ruxton 2002 ) and risk of disease and parasite infection (McEwen 2012 ; Hawley et al. 2021 ). Different components of sociality affect parasite load in various ways.
For instance, group size is positively related to intensities of non-mobile parasites, but negatively to intensities of mobile parasites (Patterson and Ruckstuhl 2013 ). When investigating host–parasite interactions, it is important to consider the three main components of disease, called the disease triangle: the host, the environment, and the pathogen/parasite (Scholthof 2007 ). Individual host characteristics such as age (Lynsdale et al. 2020 ) and sex (Hillegass et al. 2008 ), as well as behaviour and social status (Hawley et al. 2011 ; Keiser et al. 2016 ), can relate to transmission and infection risk. In addition, external factors, such as season or weather conditions, influence sickness behaviour and infection dynamics for environmentally transmitted parasites (Owen-Ashley and Wingfield 2006 ; Rödel and Starkloff 2014 ). Finally, parasites can influence host social behaviour to promote transmission of parasites from the infected to new hosts (Moore 2002 ; Hawley et al. 2021 ). The “classic” view of social–infection dynamics is that sociality is a main driver of infections (Rifkin et al. 2012 ; Patterson and Ruckstuhl 2013 ). This view has substantial support within the literature: Higher numbers of social contacts and frequent social interaction are generally linked to increased infection (Loehle 1995 ; Schmid-Hempel 2017 ), and the reverse for increasingly solitary behaviour. A recent review on over 200 associations between individual social network measures and parasite load has shown that, within individuals, social behaviour leads to an increased risk of parasite infection (Briard and Ezenwa 2021 ). However, a growing body of theoretical and empirical studies challenges the assumption that infection risk and social behaviour or group size always co-vary positively (Kappeler et al. 2015 ; Ezenwa et al. 2016 ). This “enhanced” view suggests that the health benefits of sociality and group living can outweigh the costs of infection and help social individuals resist or tolerate parasites and other infectious disease (Ezenwa et al. 2016 ). Several studies have demonstrated that positive social interactions can be related to lower infection risk and lower infestation with gastrointestinal parasites. Social support by known group members and strong social bonds with opposite-sex partners can reduce parasite infestation (Rödel and Starkloff 2014 ; Müller-Klein et al. 2019 ), though this effect can depend on various factors such as environmental conditions or pathogen-specific transmission routes (Balasubramaniam et al. 2016 ). The encounter-dilution effect describes the potentially positive effects of group living with regard to costs of parasite infection (Mooring and Hart 1992 ), where group members experience protection from parasites by diluting the risk of being preyed on by ectoparasites or vector species (Mooring and Hart 1992 ; Patterson and Ruckstuhl 2013 ). Additionally, social living per se can reduce the negative effects of parasite infestation and larger group size can mitigate the costs of infection with ectoparasites for group members (Almberg et al. 2015 ). Furthermore, the positive effects of group size have been proposed and described for endoparasites and other infectious diseases. Although re-infection with gastrointestinal nematodes is more likely for individuals in larger social groups, infected individuals benefit from larger intake of energy, which offsets main costs of nematode infection (Ezenwa and Worsley-Tonks 2018 ). 
Interestingly, the link between group size and parasite loads is often species-specific and related to other social measures. In African bovids, a positive correlation between group size and parasite infection was found, but this was only evident for relatively host-specific parasites and for hosts living in stable groups (Ezenwa 2004 ). The relative infection costs versus sociality benefits of group living should be investigated under various contexts. One interesting, but understudied, factor where individual host characteristics and social group properties can interact to shape infection dynamics is the sex ratio of the group. Individual sex represents a dichotomy in social behaviour and immunity for many mammal species, e.g. males exhibit solitary or nomadic behaviour more often than females (Lawson Handley and Perrin 2007 ), which could differentially affect transmission dynamics. Life-history theory dictates that differential selection pressures, prioritising reproduction for males and longevity for females, drive sex-specific differences in resource-allocation trade-offs between immunity and reproduction (Trivers 1972 ; Stearns 1992 ; Norris and Evans 1999 ). Hamilton and Zuk ( 1982 ) first proposed parasitism as a mediator for these trade-offs, which can be maintained by either the boosting or regulatory effects of oestrogen and testosterone, respectively, on individual immune function (Folstad and Karter 1992 ), alongside behavioural traits that lead to differences in transmission and exposure (Patterson and Schulte-Hostedde 2011 ). Consequently, parasite infection intensity (i.e. parasites per host) is often higher for adult males, in comparison to females, within mammal populations (Giery and Layman 2019 ). Sex effects and group size effects are well investigated, but sex ratio effects on parasite infection have rarely been studied in natural systems despite the clear potential for them to influence infection dynamics within group-living species. Asian elephants ( Elephas maximus ) are an interesting species in which to address those questions because they are a long-lived and highly social species, with life-history traits similar to several primate and cetacean species such as humans and killer whales ( Orcinus orca ). Studies on elephants can therefore help enable generalizations across larger highly social mammals in understanding the link between sociality and parasite infection. In the wild, Asian elephants form complex social organisations, predominantly existing in matriarchal herds of matrilineal female relatives and juveniles, with males leaving the herd and becoming nomadic upon sexual maturity (Sukumar 2003 ). Furthermore, elephant society provides benefits such as predator defence, the transfer of social knowledge, and alloparental care of offspring (Wittemyer et al. 2005 ). However, elephants’ high sociality also imposes costs, such as increasing costs of philopatry for older individuals and increased resource competition (Wittemyer et al. 2005 ), as well as facilitating the spread and persistence of parasite infections (Hawley et al. 2021 ). In conditions where the social setting of large long-lived mammals is artificially modulated by humans, the link between sociality and infection is less well understood. Gastro-intestinal nematodes are among the most abundant internal parasites found in Asian elephants (Fowler and Mikota 2006 ; Lynsdale et al. 2020 ), are an important driver of elephant mortality (Lynsdale et al.
2017 ), and are linked to reduced elephant health and immunity (Santos et al. 2020 ). However, the results regarding the link between parasite load and sociality in elephants are inconsistent (Vanitha et al. 2011 ; Abhijith et al. 2018 ) and warrant further investigation. Both individual host characteristics and social group properties are important predictors of parasite infection; however, the outcome of these predictions can vary depending on the classic or expanded view of the parasite-related costs and benefits of sociality (Ezenwa et al. 2016 ). There is a need for more empirical studies to expand the range of investigated species and systems, as this will ultimately help us improve our understanding of how individuals balance the cost of parasite exposure on the one hand and the benefits of increased parasite tolerance on the other hand in wild, social-living animals (Ezenwa et al. 2016 ). Here, we take advantage of a unique dataset on semi-captive timber Asian elephants from Myanmar to investigate the link between sociality and parasite infection in a long-lived and highly social mammal. This population is ideal for studying this relationship because their age-specific survival rates and social behaviours resemble those of wild elephants more closely than those of fully captive individuals (Seltmann et al. 2018 ; Clubb et al. 2008 ; Hayward et al. 2014 ; Lahdenperä et al. 2016 ; Chapman et al. 2019 ; Lynch et al. 2019 ). In addition, the Myanma Timber Enterprise (MTE) has maintained extensive logbooks on each individual, which allow tracking individual elephants’ life events, such as illness and health treatments, and provide detailed data on group compositions and friendship networks. In addition, we capitalize on the longitudinal data on parasite infection already existing in this system (Lynsdale et al. 2017 , 2020 ), which is highly important for gaining a reliable quantification of infection dynamics as opposed to cross-sectional studies and opportunistic sampling. In these semi-captive elephants, nematode infections happen via faecal–oral horizontal transmission, which is the same route found for wild elephant populations. Adult worms live and reproduce in the gut and gastro-intestinal tract, with eggs expelled with elephant faeces (Fowler and Mikota 2006 ). Hence, routes of transmission of, and exposure to, local pathogens are potentially similar to those experienced in wild systems rather than fully captive systems, given that the studied semi-captive individuals live in their natural habitat and express nocturnal free-roaming behaviours. Myanmar timber elephants are grouped together in mixed-sex units of approx. 4–12 individuals that work within the same forest area, overall spending at least ~ 4–8 h/day together in their working groups, for over 9 months of the year. Individuals therefore spend more time during diurnal hours within close proximity of other group members than non-group conspecifics, occupying shared physical environments where all group members can forage, defecate, and interact. This is important considering the faecal–oral environmental transmission of strongyle and Strongyloides spp. nematodes between elephant hosts (Fowler and Mikota 2006 ), and that Asian elephants display trunk touches around and inside other conspecifics’ mouths as a form of reassurance behaviour (Plotnik and de Waal 2014 ).
Therefore, our study system offers a unique opportunity to study the relationship between sociality and later parasite infection in a semi-experimental way, as elephants live in mixed-sex and age groups with different demographic compositions. Our work offers data on how different measures of sociality are related to later infection in known individuals of a large, long-lived mammal, which usually roams over long distances in the wild and is therefore challenging to investigate in such detail under fully wild conditions. We use three measures of sociality and investigate their links to subsequent infection by nematode parasites. More specifically, we (1) investigate if engaging in regular social interactions with conspecifics or being solitary (individual solitary behaviour) is linked to later nematode load, measured as faecal egg counts. As wild Asian elephants exist either in strongly associated female family units or, in the case of males, as nomadic individuals or in loosely associated bachelor groups, we expect differences in group mitigation of infection to arise from these natural sex-specific social frameworks. Specifically, we expect females to gain social benefits which protect against infection, such as elevated health and condition from increased social contact and group living, and for males to minimise infection through increased distance from, and less frequent contact with, other potential infective hosts. Transmission-related costs for adult females should be offset by benefits of higher social interaction, but not for adult males that would otherwise incur lower transmission costs from predominantly solitary lifestyles. Therefore, we predict that solitary females and social males in our study sample exhibit higher nematode loads than social females and solitary males. In addition, we (2) investigate how group size is related to nematode load. In our system, larger group size represents potential for increased nematode transmission due to higher densities of (potential) hosts feeding and defecating within the same habitat patches, alongside more frequent close, physical interactions, as well as improved individual health, linked to increased social interaction and social support, which may help mitigate or offset the costs of infection. We thus expected a weak but overall positive effect of group size, with individuals in larger groups yielding higher nematode loads. Furthermore, we (3) studied the link between the sex ratio of the work group and nematode load. We predicted that elephants in groups that have more males than females, hence in groups with a male-biased sex ratio, show higher levels of parasite infection. In many mammals, males show higher levels of parasite infection than females (Wilson et al. 2002 ) and there is a need to investigate potential sex effects in the social context in which we find these elephants. Generally, it is important to expand our understanding of the link between different social measures and parasite infection by expanding the available empirical evidence for those relationships for different species living under different conditions. This can help disentangle the complex associations of sociality and infection and the contradictory results found in previous studies, and generate a more holistic view of a very topical problem. Methods Study population The working timber elephants of Myanmar ( n ~ 3000) make up the largest remaining semi-captive population of this species (Mar 2007 ; Hedges et al. 2018 ).
The elephants work as draught animals in logging camps during the day alongside an elephant caretaker or ‘mahout’, but freely roam, forage, and interact with wild and other semi-captive conspecifics at night in surrounding forest habitat (Gale 1974 ). The current abundance and distribution of wild elephants in Myanmar are not well studied (Leimgruber et al. 2011 ; Hedges et al. 2018 ). Myanmar’s wild elephant population is estimated to be fewer than 2000 individuals (Leimgruber et al. 2011 ), and the chances for encounters between wild and semi-captive elephants are probably low. Though semi-captive elephants roam freely at night, they usually do not leave the wider vicinity of their timber camps. The semi-captive population is centrally managed by MTE, which marks each animal with a unique identification (ID) number on its haunches, allowing for reliable recognition of different individuals. MTE staff also keep detailed records in individual logbooks on, e.g., elephant dates of birth (if captive born) or capture (if wild caught), location, maternal lineage, disease history, and treatment history, throughout an elephant’s lifetime. Consequently, MTE maintains longstanding records on Asian elephant life history and health which are now digitised into an electronic database, allowing for accurate sampling of individuals of known age. Trained MTE veterinarians are responsible for the basic upkeep of the elephants, and predominantly treat wounds and other working injuries. Vets are also charged with administering anthelmintic drugs (ivermectin and albendazole) approximately twice a year in accordance with state regulations as a blanket treatment rolled out across all elephants within treated camps, irrespective of their level of infection. Treatment is administered either subcutaneously (1 ml/100 kg elephant body weight), or orally (10 mg/100 kg body weight for ivermectin and 750 mg/100 kg body weight for albendazole), in line with equine guidelines. Exact dates of anthelmintic treatment are recorded onsite on the day of deworming in each animal’s logbook. The entire population is distributed across Myanmar, grouped into mixed-sex working units comprising individuals of mixed ages. Adults enter the workforce at approximately 17 years old and remain until retirement (usually around 55 years of age), with workload set by regulations on haulage ability and elephant age (Mar 2007 ). Elephants work only during the cold (November–February) and monsoon (June–October) seasons, and are rested in the hottest, driest months. Pregnant mothers are rested from halfway through their pregnancy (11 months), and for approximately 1–2 years following parturition, during which they are used for light baggage work, although calves remain nearby ‘at heel’ until they are weaned and can suckle as needed (Gale 1974 ). Following weaning and taming (at approx. age 5 years), young elephants either return to their natal group or are relocated away from their mothers. Overall, the elephants spend approx. 4–8 h/day, during diurnal hours, working and interacting with the other members of their designated groups, throughout their ~ 40-year working life. Sample In total, we sampled 71 focal individuals (total no. of samples = 130 including repeated measures, no. of measures per individual = 1–6, mean = 2), all working within the Kawlin logging agency in the Sagaing Division.
Our study population included 45 females (91 samples) and 26 males (39 samples), ranging from 10 to 62 years of age (mean = 26 years, median = 16 years) at the time of sampling, of which 58 were captive born and 13 were wild caught. It was not possible to record data blind because our study involved focal animals in the field. Sociality data collection We investigated how infection was associated with three specific aspects of elephant sociality: individual solitary behaviour, group size, and group sex ratio. First, in order to assess an individual’s direct social interactions with conspecifics (individual solitary behaviour), we extracted information from social questionnaires given to elephant handlers (mahouts) regarding whether each mahout classed their working animal as solitary (does not interact with other elephants) or social (interacts with other elephants). Mahouts can spend as long as 16 years with the same individual within this sample (Crawley et al. 2019 ), and thus develop an excellent knowledge of their animal and its behaviour. Questionnaires were carried out locally at field sites during the hot season (March–May) between 2014 and 2018. Next, using the same questionnaire data and recordings by veterinarians, we recorded the overall size of the working group of focal individuals at the time of sampling, only considering the number of adults present. Finally, we determined the sex ratio of the focal individual’s work group by calculating the proportion of females in a group, excluding calves. Faecal sampling and nematode quantification We collected 4.5 g of fresh faecal samples ( n = 130) from the 71 elephants following a standardized sampling method for our sample population (Lynsdale et al. 2015 ). Samples were collected at most 66 days after social data collection, and still within the hot season of March–May. The majority of FECs were collected on the same day as the social data collection was conducted (102/130 measures, ~ 78% of the total sample) and 1 measure was collected the following day. A further 16 FECs were collected within approximately 4–5 weeks after social data collection (25–37 days, ~ 12% of the total sample). Finally, 11 FECs were collected over 5 weeks after social data collection (at exactly 66 days, ~ 8% of the total sample). For each sample, we carried out a faecal egg count (FEC) following a special modification of the McMaster method (MAFF 1986 ), as in Lynsdale et al. ( 2020 ), using compound microscopes with ×10 optical zoom and ×10 magnification. We identified ova microscopically to the lowest taxonomic unit via identification of size, morphology, and developmental stage (Taylor et al. 2007 ; Bowman 2014 ). We obtained a quantified estimate of nematode load by multiplying FECs by the dilution factor (10) to convert counts into measures in eggs per gram (epg) of faeces. While FECs are a widely recognised measure of observable parasite load in veterinary and ecological studies, no study has yet provided empirical data on how FECs vary with ultimate measures of infection, e.g. intestinal worm counts, in elephants, and FECs may not account for, e.g., immature larvae, variation in shedding rates of female worms, prepatent periods, and non-reproductive individuals. As such, FECs should be regarded as a reliable estimate of the extent of infection (i.e. approximate load), rather than an exact sum of the total infective agents within a host; see Lynsdale et al. ( 2020 ) for further detail.
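Two of the quantities above are simple derived measures and can be stated precisely in code. The following is a minimal sketch (function names are ours): the McMaster conversion from a raw chamber count to eggs per gram using the dilution factor of 10, and the work-group sex ratio as the proportion of adult females:

def eggs_per_gram(raw_count, dilution_factor=10):
    """Convert a raw McMaster egg count to eggs per gram (epg) of faeces,
    following the modified McMaster protocol described above."""
    return raw_count * dilution_factor

def group_sex_ratio(n_adult_females, n_adult_males):
    """Proportion of females among the adults of a work group (calves
    excluded); 1.0 corresponds to an all-female group."""
    return n_adult_females / (n_adult_females + n_adult_males)

print(eggs_per_gram(27))        # 270 epg
print(group_sex_ratio(4, 3))    # ~0.57, close to the sample median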
Statistical analysis We analysed the association between the social landscape and subsequent infection by nematode parasites, as estimated via FECs, in our study population using three separate generalised linear mixed-effects models. All analyses were carried out in R 4.0.3 (R Core Team 2020 ) using glmmTMB (Anderson and Winter 2020 ), with untransformed FECs as the response term, and fitted to a negative binomial error structure ( nbinom2 ), to account for the overdispersed distribution of FECs based on the mean–variance relationship of the data (Lynsdale et al. 2020 ). Each of our three models contained one separate univariate predictor pertaining to our sociality measures: sociality (binary, social/solitary), working group size (continuous), and working group female:male sex ratio (continuous). All models started with the same fixed covariates accounting for elephant age (continuous, years)—included to the highest significant polynomial level, sex (two-level factor, male/female), origin (two-level factor, captive born/wild caught), sample year (five-level factor, one level for each year 2014–2018), human sampler bias (three-level factor, one for each sampler who collected data), and time since last deworming treatment prior to sampling (continuous, days). However, models including year and sampler bias did not converge. Hence, we excluded sampler bias from further analyses because in models including only year and sampler bias, year was significant, whereas sampler bias was not. We also included two random factors to account for repeated measures from the same individuals (elephant ID), and from individuals located within the same working group (group ID). We tested all fixed covariate and random terms using likelihood ratio tests (LRTs), comparing starting models to replicates without each single term in turn. Finally, as Asian elephants display clear sexual dimorphism in social structure and behaviours in the wild, we tested whether sociality–infection dynamics differed between males and females by including an interaction between sex and the social measure included (solitary behaviour, group size, group sex ratio) after excluding other non-significant covariates. Final models retained only significant confounding covariates, as well as our social terms of interest (sociality, working group size, and sex ratio). We checked models for goodness of fit using residual diagnostic checks with the DHARMa package (Hartig 2020 ). Results We found strongyle (Nematoda; Strongylidae) and Strongyloides (Nematoda; Strongyloididae) type eggs within faecal samples, observed in different developmental stages. Nematode loads varied widely between individual hosts within our population; FECs were highly skewed (aggregation parameter κ = 0.272, variance:mean ratio ≥ 1), and ranged from 0 to 2720 epg (mean ± SE = 156 ± 26 epg, median = 75 epg). Additionally, seven elephants (~ 10% of hosts, 10 faecal samples corresponding to ~ 8% of FEC measures) had relatively high observable levels of egg shedding with FECs over 500 epg (Nielsen et al. 2010 ), with only one elephant having a FEC much higher than 1000 epg. We did not discard this value since this is a common pattern of nematode loads found in many species. Individual time since deworming ranged from 12 to 419 days before sampling (mean = 131 days), although ~ 80% of elephants had not been dewormed within 30 days prior to sampling ( n = 105/130 samples), and 65% of elephants had not received treatment for ~ 90 days before sampling ( n = 84/130).
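The analysis itself was fitted in R with glmmTMB, as described above. For readers who want the gist of the fixed-effect structure and of the likelihood ratio tests, here is a rough Python analogue with statsmodels on hypothetical toy data; it fixes the negative binomial dispersion (glmmTMB estimates it) and omits the random intercepts for elephant ID and group ID, so it is an illustration of the model form, not a reimplementation:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical data: one row per faecal sample (all values invented).
df = pd.DataFrame({
    "fec":      [0, 120, 75, 540, 30, 260, 90, 10],   # eggs per gram
    "solitary": [0, 0, 1, 0, 1, 0, 0, 1],             # 1 = classed solitary
    "age":      [12, 34, 26, 51, 16, 45, 22, 30],
    "days_since_deworm": [45, 200, 12, 131, 90, 300, 60, 25],
})

nb = sm.families.NegativeBinomial(alpha=1.0)  # dispersion fixed for simplicity
full = smf.glm("fec ~ solitary + age + days_since_deworm",
               data=df, family=nb).fit()
reduced = smf.glm("fec ~ age + days_since_deworm",
                  data=df, family=nb).fit()

# Likelihood ratio test for the sociality term, mirroring the LRTs above.
stat = 2 * (full.llf - reduced.llf)
print(f"chi2 = {stat:.3f}, p = {chi2.sf(stat, df=1):.3f}")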
We also found variation between individuals in their social measures; however, after accounting for variance from confounding factors, the differences in FECs were not associated with those in our tested social measures (Fig. 1, Table 1). We first tested for an influence of solitary behaviour on infection rates. Overall, elephants were mostly classed as social (118/130 answers relating to 62 different elephants; 84 answers from females, 34 from males), and we recorded only 12/130 classifications of solitary for 9 different elephants (7 answers from females and 5 from males) from the social questionnaires. Males were over twice as likely to be classified as solitary, with solitary behaviour recorded for 19% of all males studied (n = 5/26) in comparison to 9% of the total number of females (n = 4/45). However, while mean raw FECs were more than twice as high for social elephants as for solitary individuals (mean raw FEC ± SE = 164 ± 29 epg for social elephants vs. 76 ± 22 epg for solitary conspecifics, model estimate ± SE = 0.178 ± 0.456), these differences were not statistically significant (χ2 = 0.151, p = 0.658). Moreover, we found no evidence for any sex-specific differences in solitary behaviour influencing infection rates when including a sex*solitary behaviour interaction (Table 2, solitary behaviour: χ2 = 3.117, p = 0.077; 164 ± 37 mean raw epg for social females vs. 164 ± 38 epg for social males, 101 ± 30 epg for solitary females vs. 40 ± 25 epg for solitary males). Fig. 1 The social landscape of infection in Asian elephants, highlighting no significant variation in infection, as estimated by faecal egg counts (FEC, in eggs per gram of faeces, epg), with differences in (a) solitary behaviour, (b) working group size, and (c) working group sex ratio. In total, 130 measures were collected from 71 individual elephants. Red points correspond to raw FECs, black points and error bars to mean and standard error FEC values, and black diamonds correspond to median FEC values. For (c), lines show predicted FECs, calculated in R using ggpredict (Lüdecke 2018), and shaded areas correspond to 95% confidence intervals. Plotted data are limited to FECs of up to 1000 epg, excluding one individual data point (2720 epg). Table 1 Effect estimates from final models for predictors of faecal egg counts for each sociality measure, fitted with a negative binomial error structure and log link function. Working group ID number was included as a random effect. The intercept corresponds to FECs from elephants with 0 days since treatment, and that (1) displayed social rather than solitary behaviour, (2) lived in small working groups, and (3) lived in groups with a female:male sex ratio of 0. All models were fitted to 130 observations from 71 elephants. Significant effects (p < 0.05) are in bold. Table 2 LRT and P values for comparisons of models (as described in Table 1) that additionally include a fixed term for sex and a social measure*sex interaction term, against replicate models consisting only of main effect terms. All models were fitted to 130 observations from 71 elephants. We next investigated associations between FECs and the size of working groups. In our population, elephants lived in groups of varying size (range = 4–10, mean = 6.5, median = 7). After accounting for variance from treatment and from repeated measures, our results indicate that infection was lower for elephants in larger groups (model estimate ± SE = − 0.202 ± 0.196).
As with solitary behaviour, this difference was not significant (χ2 = 1.044, p = 0.307) when controlling for other contributing factors. Furthermore, we found no evidence that egg shedding in males and females differs in response to more or fewer group members when testing for an interaction between sex and group size (Table 2, χ2 = 2.475, p = 0.116). Finally, we determined the effect of group female:male sex ratio on infection dynamics. Sex ratios of the different working groups ranged from 0.2 to 1 (mean = 0.58, median = 0.57), where an increasing ratio corresponds to an increasing proportion of females within a group (1 = all-female group). FECs increased with increasing sex ratio, i.e. in groups with proportionally more females (model estimate ± SE = 0.519 ± 1.904). However, as with our other sociality measures, we found no significant association overall between working group sex ratio and later infection status in our host population (χ2 = 0.221, p = 0.639). Again, we found no evidence of sex-specific effects of group sex ratio on later infection rates when including an interaction term with sex (Table 2, χ2 = 0.078, p = 0.780). All of these results were robust when the data were limited to FECs and social data collected within one day of each other. Discussion Here, we investigated associations between three specific aspects of host sociality (individual solitary behaviour, group size, and group sex ratio) and infection by gastro-intestinal nematodes (strongyles and Strongyloides spp.), in a long-lived mammal with a complex social structure, the Asian elephant. Our study population consisted of 71 semi-captive Asian elephants of mixed sexes and ages, grouped into working units of various sizes (4–10 individuals), which spent diurnal hours together but were able to display natural nocturnal roaming, socialising, and mating behaviours with other semi-captive and local wild conspecifics. We argue that this difference in social structure from their natural matriarchal herds allows for a ‘natural experiment’ to observe how infection is linked to the studied measures of sociality across different social frameworks. Controlling for other known confounding factors, we show that infection was not associated with any investigated measure of sociality, and that this finding was conserved across both males and females. Generally, while our results contradict our expectations, they support the argument that the parasite-related costs of sociality may vary in magnitude, are not linear, and do not operate solely in one direction. Recent studies highlight a more complex picture: the extent of parasite-related costs, or the severity at which they are felt, may hinge on other aspects of host ecology, for example individual life history (Ezenwa et al. 2016), differences in dominance hierarchies within a social unit (Smyth and Drea 2016), or the degree of modularity or subgrouping within a population (Nunn et al. 2015). As such, there is increasing support for an ‘expanded view’ that infection, or the fitness costs thereof, may in fact be minimised through socially promoted resistance and/or tolerance pathways (Ezenwa et al. 2016). While we found that males were over twice as likely to display solitary behaviour, neither individual solitary nor social behaviour influenced infection by strongyle or Strongyloides nematodes for any individual in our sample. When compared across systems, infection measures tend to be higher for social animals than for solitary ones (Ezenwa et al. 2016).
However, crucially, studies often compare the effects of infection, and selection for solitary versus gregarious behaviour, across species, rather than observing intraspecific variation within groups. Consequently, much less is known of how infection costs relate to variation in individual solitary behaviour within populations, which is an oversight considering that sociality is not homogeneous within species. For example, in a number of ‘social’ species, individuals may realistically exhibit behaviours over a spectrum from more solitary to more social, with behavioural tendencies varying with other traits such as age and sex, as found in e.g. elephants, hamadryas baboons ( Papio hamadryas hamadryas : Schreier and Swedell 2012), and Western lowland gorillas ( Gorilla gorilla gorilla : Racevska and Hill 2017). Finer-scale analysis has found that elephant societies show multilevel organisation and fission–fusion dynamics, with populations varying in their degree of modularity, hierarchical levels, and the extent to which these are nested (de Silva and Wittemyer 2012; Nandini et al. 2018). There is also evidence that individuals maintain long-term affiliate relationships alongside ephemeral associations with conspecifics (de Silva and Wittemyer 2012), suggesting that the interactions of both males and females with conspecifics within and outside social units may not be temporally stable, and that tendencies towards social versus solitary behaviour may change over time. Therefore, while broader classifications of sociality (i.e. as with our binary measure of solitary behaviour) are still highly valuable, especially from lesser-studied taxa, such methods may not capture the detail needed to elucidate how parasitism constrains sociality in finer structural contexts, for example, establishing how infection changes with increasing social contacts, or with the frequency or quality of interactions with other group members. While recent studies offer initial insights as to how infection operates with varying modularity in animal populations (Sah et al. 2017), there is still great scope for future studies to investigate how infection costs are incurred, and alleviated, in multi-level societies with complex coalitions, such as those seen in elephants and primates. Group size remains one of the most widely studied predictors of parasite risk as a disease cost of sociality. Increases in group size are coupled with higher spatial–temporal concentration of potential hosts and more frequent conspecific interactions, which facilitate increased transmission risk and exposure to infective agents (Altizer et al. 2003; Rifkin et al. 2012). However, we found no evidence to support an association between nematode infection and group size. In fact, after accounting for confounding covariates, infection was overall lower for hosts in larger groups. Our results are surprising considering that other studies on contagious and environmentally transmitted parasites, like gastro-intestinal nematodes, mostly show positive associations, although the size of this effect is smaller in mammalian hosts than in birds (Rifkin et al. 2012). Instead, our results were more comparable to those on searching parasites, which are mobile enough to move between host aggregations, e.g. ectoparasites such as lice, ticks, and fleas (Rifkin et al. 2012). However, nematode motility is unlikely to substantially influence sociality–infection dynamics in our system.
While elephant nematodes exhibit host-seeking behaviour in their infective stage after their third larval moult (Fowler and Mikota 2006), the distances travelled are minuscule in comparison to the roaming distances and range sizes of their elephant hosts (see Gang and Hallem 2016). While previous studies have noted a lack of association between infection and group size (Côté and Poulin 1995), it remains a relatively rare observation, possibly explained by publication bias (Rifkin et al. 2012), with group size weakly predicting parasite risk across most taxa (Côté and Poulin 1995; Rifkin et al. 2012). Interestingly, group size does not predict parasite intensity in a range of other mobile hosts (Patterson and Ruckstuhl 2013), including other herbivore–strongyle systems, such as Grant’s gazelle ( Gazella granti ), buffalo ( Syncerus caffer ), impala ( Aepyceros melampus ), and eland ( Taurotragus oryx ) (Ezenwa 2004). One possible explanation is that mobile hosts gain resistance benefits from living in larger groups, as individuals that travel over larger ranges spend less time overall within any given area and thus reduce exposure to parasite-contaminated areas (Côté and Poulin 1995). While our study elephants work within designated forest areas during the day, they are able to roam, unsupervised, over larger distances at night, a behaviour that may mitigate the parasite infection risk incurred through their diurnal work grouping. The focus on group size as a primary predictor of infection costs is a relatively simple view of the linkage between sociality and disease, as group-living species display huge variation in both the size and structure of social landscapes. Group sex ratio, and how this factors into disease costs, is a relatively overlooked aspect of group living, which is an oversight considering how widely sex biases in infection are observed in wild systems (Wilson et al. 2002). In mammals, parasite infection intensity is often higher for adult males than for females (Giery and Layman 2019), a consequence of hormone-mediated differences in resource allocation trade-offs between immunity and reproduction (Hamilton and Zuk 1982; Folstad and Karter 1992; Stearns 1992). Curiously, previous work has highlighted a lack of sex-biased parasitism within the Myanmar timber elephant population, with males and females harbouring similar nematode loads across a longitudinal study period (Lynsdale et al. 2020), despite males incurring a higher mortality cost of parasitism. The close proximity between individuals in mixed-sex working groups may increase transmission between males and females, possibly concealing inherent sex-specific differences in susceptibility. Yet, as with group size, infection was not associated with variation in the sex ratio of working groups, suggesting that the social framework does not mask differences in nematode loads between males and females in this system. Despite this, group sex ratio should be regarded as an important potential driver of associations between sociality and infection in other systems, particularly where sex biases in infection rates are observed. The reasons underlying the absence of associations between infection and solitary behaviour therefore remain largely obscure. A possible explanation is that for reproductive-age adult elephants, nematodes are less pathogenic in comparison to, e.g.,
bacterial and viral infections that severely impact host survival (Fowler and Mikota 2006), or that loads do not reach critical thresholds, exacting low costs to individual morbidity and exerting relatively weak selection pressures on sociality. However, this seems unlikely. Preliminary work has shown that observed nematode infection significantly reduces individual health and condition, as measured by white blood cell counts and liver function (Franco dos Santos D., unpublished). Moreover, our host population displays high heterogeneity in infection; nematode loads can reach exceedingly high burdens (> 4000 epg), beyond ‘high’ shedding veterinary thresholds for other non-ruminant hosts (Nielsen et al. 2013), but only for specific demographic groups (Lynsdale et al. 2020). In particular, juveniles show elevated loads estimated via FECs (Lynsdale et al. 2020) and, historically, along with adult males and non-reproductive females, have suffered increased mortality as a result of parasitism (Lynsdale et al. 2017). This, coupled with the fact that timber elephants live in mobile working units without strong competition or dominance hierarchies, may instead mean that the strongyle and Strongyloides nematodes either do not present large sociality costs in this system or that these are mitigated by the social health benefits of group living more than in other host taxa. The known variation in FECs observed across different elephant ages could provide an explanation for the lack of a link between our social measures and FECs; as acquired immune function varies across vertebrate lifespans, strong age-specific susceptibility effects may override effects of sociality in hosts that have been previously exposed to nematodes. However, in this study, we account for age-specific variation in FECs by including host age in our analysis, allowing us to reliably detect any strong associations with sociality measures. Nevertheless, it should still be noted that other individual differences in infection and exposure profiles may potentially mask weaker associations between the sociality measures investigated and FECs. Another unexplored avenue of interest is self-medicating behaviour, as observed in numerous primates (Neco et al. 2019) and in Asian elephants (Greene et al. 2020). For example, red colobus monkeys ( Procolobus rufomitratus tephrosceles ) increase their consumption of fodder with known anthelminthic properties, such as certain barks and Albizia spp. plants, during periods of increased shedding of whipworm ( Trichuris spp.) eggs (Ghai et al. 2015). In elephants, consumption of specific plants is thought to relate to self-medication for certain medical ailments, including parasitism, according to local mahouts and knowledge holders (Greene et al. 2020). Mahouts also note behavioural switches by Asian elephants to sole consumption of clay rather than vegetative matter during monsoon months, which is thought to aid in expelling established gastro-intestinal parasite infections (Greene et al. 2020). As foraging decisions can be transferred through cultural transmission in primates (Horner et al. 2006), Ezenwa et al. (2016) propose that social living and large group sizes may promote self-medicating selective foraging as a behavioural mechanism for parasite resistance. Studies have also suggested that this strategy may particularly benefit larger, longer-lived species (Neco et al.
2019), such as Asian elephants, which are also generalist browsers and graze feeders and have been known to vary their diet in response to environmental change (Sukumar 2003). Our results provide a reliable insight into whether strong social–infection associations exist by utilizing a centralized keeping system in a rarely studied host system: semi-captive timber elephants in Myanmar. The elephant mahouts have a detailed knowledge of their elephants’ behaviours and collect the elephants from the forest in the morning, meaning that they are aware of whether elephants are exhibiting solitary behaviour, and whether they are grouped with the same group members or other working individuals, during part of the unsupervised period. However, it should be stressed that we cannot account for variance from any nocturnal social interactions or from individual differences in foraging activity (e.g. Parker et al. 2020). While data on the nocturnal activity of elephants are limited, and largely focused on fully captive systems, there is some evidence that elephants may be stationary for large periods of the night (Wilson et al. 2006; Lukacs et al. 2016), and that activity depends on age and access to outside areas (Evison et al. 2020), suggesting that most social activity takes place during diurnal hours. The measures of elephant sociality used in our study might have been too broad to capture any weak infection–sociality associations present in our study population, or might not have captured specific social–infection mechanisms at all. Therefore, we cannot exclude the possibility that finer-scale measures than those investigated here might show a different picture. Data on social network dynamics and characteristics might provide the needed fine-scale measures. Specific network components, such as connectivity or centrality within the social network, can relate to transmission dynamics (Rimbach et al. 2015). Unfortunately, the qualitative nature of our questionnaire data does not allow us to assess those network characteristics in detail. Other confounding factors, such as the distribution of high-shedding individuals sharing the work areas with focal individuals and the effects of season on infection dynamics, should also be noted. While faecal egg counts are moderately repeatable within hosts of our study population (Lynsdale et al. 2020), we do not know how genetic components contribute to high-shedding behaviour and hence cannot directly control for this factor. However, including individual identity as a random factor in our analyses should help mitigate this bias to some extent. Regarding seasonal effects, this study used data collected only during one season (the dry season), and hence we can exclude seasonal biases from our results; however, to complete our understanding of infection–sociality associations, they should also be investigated in other seasons (monsoon and cold season). Finally, although our study population shares more characteristics with wild elephant populations than fully captive populations do, we suggest that our results should be treated with care when comparing them to truly wild populations. The potential impact of human handling on the social behaviours and group composition of our study elephants, and the strong effect of regular anthelmintic treatments, should be kept in mind when interpreting our results.
However, some of these confounding factors constitute general challenges to studies investigating infection–sociality associations in the wild, and we were able to control for several other confounding factors of susceptibility, such as age. Thus, we suggest that our findings are still a valuable addition to the literature, with very few other studies using adequate sample sizes and providing insights into the sociality–infection dynamics of extremely long-lived terrestrial mammals. In conclusion, our results further highlight the need for a general push towards placing sociality–infection dynamics clearly in specific contexts, and the necessity for more studies investigating different facets of sociality in a diverse range of host–parasite systems, to inform broader meta-analyses. It is becoming increasingly clear that the relative costs of disease are determined by a number of social traits, and their organisation across different social landscapes, acting in synergy; in essence, it is ‘more than just a numbers game’ (Nunn et al. 2015). Consequently, there is a growing emphasis on establishing how the sociality–disease nexus varies across and within a range of taxa, with elephants presenting a much-needed comparison to other long-lived, complex mammal societies. Here, we highlight the need for finer-scale studies, establishing how sociality is limited by, mitigates, or protects against infection in different ecological contexts, to fully understand the mechanisms underlying these pathways. Data and code availability Data and code are available as electronic supplementary material.
An international team of scientists found that sociality is not linked to intestinal nematode infection in Asian elephants. The researchers looked at loneliness and characteristics of the elephants' social groups and found no differences in infection levels. Social behaviors are common in group-living mammals, and it is often thought that sociality and group density are main drivers of infections and parasite transmission. Recent research, however, suggests that the health benefits of sociality and group living can outweigh the costs of sociality and help social individuals fight infections. "Asian elephants are one of the world's largest and longest-living terrestrial mammals, and have a very complex social life. However, studying sociality and parasite infection in Asian elephants in the wild is very difficult, if not impossible, due to the dense forests they live in and their large roaming distances," says Postdoctoral Researcher Martin Seltmann from the University of Turku, one of the two lead authors of the study. The researchers investigated three different aspects of Asian elephants' social world: socialness versus loneliness, the size of the group, and living in single-sex versus mixed-sex groups. "We studied these characteristics in 71 Asian timber elephants living in their natural habitat in Myanmar. These elephants work in the timber industry, where they pull and push logs out of the forest. This is a unique research environment and population that allows us to study many elephants living in their natural environment, while at the same time having detailed information about their social lives and infection dynamics," says postdoctoral researcher Carly Lynsdale from the University of Helsinki, the other lead author of the study. These elephants are semi-captive, and their access to social partners is somewhat limited by the working conditions. Their social behaviors do not correspond exactly to those of truly wild elephants. However, spending much of their time free in their natural habitat allows the timber elephants to express many of their natural behaviors, which is often not the case in fully captive systems such as zoos. Elephants and their handlers on their way to work. Credit: Carly Lynsdale Parasite infection is strongly affected by anti-parasitic treatment Every elephant works together with an elephant handler (mahout), and this relationship can last a lifetime. A handler therefore knows his elephant's behaviors very well and is able to give detailed information on its social interactions with other elephants. From 2014 to 2018, the scientists asked the handlers whether their elephants have friends or prefer to stay alone. Furthermore, the team assessed the size of the elephants' work groups as a measure of social group size, as well as the number of males and females within each work group. In addition, the researchers counted the eggs of intestinal parasites (nematodes, or roundworms) in fresh fecal samples. These egg counts are a reliable estimate of the extent of parasite infection. "We found that veterinary treatment against parasite infection affects levels of parasite infection, meaning that the more recently the treatment took place, the lower the parasite infection. This is of course not surprising and is confirmed by past research, and it is good for the health of these elephants to see that this treatment works. When controlling for this effect, we did not find any relationship between our social measures and parasite infection," says Seltmann.
An egg of an intestinal parasite commonly found in elephant feces. Credit: Carly Lynsdale Though this study did not find evidence to support an association between nematode infection and sociality, there is increasing support for the view that costs of infection may in fact be minimized through socially promoted resistance and/or tolerance pathways. The reasons underlying the absence of associations between infection and social behavior in elephants remain largely obscure. However, according to the scientists, the research adds to a bigger picture of how social relationships can help prevent or increase the risk of infection in group-living animals. "Timber elephants live in mobile working units without strong competition or dominance hierarchies, and this may mean that intestinal parasites either do not present large sociality costs for these elephants or that these costs are mitigated by the social health benefits of group living more than in other host taxa. Additionally, there might have been weak infection–sociality associations present in this system that could not be captured by the study's design," says Lynsdale. The findings of this new study highlight the need for finer-scale studies, establishing how sociality is limited by, mitigates, or protects against infection in different ecological contexts, to fully understand the mechanisms underlying these pathways. The article was published in the topical collection "Sociality and Disease" in the journal Behavioral Ecology and Sociobiology.
10.1007/s00265-022-03192-8
Other
Using a mobile while browsing the shelves may make shoppers buy more
Smart phones, bad calls? The influence of consumer mobile phone use, distraction, and phone dependence on adherence to shopping plans, Journal of the Academy of Marketing Science, doi.org/10.1007/s11747-019-00647-9 Journal information: Journal of the Academy of Marketing Science
https://doi.org/10.1007/s11747-019-00647-9
https://phys.org/news/2019-05-mobile-browsing-shelves-shoppers.html
Abstract As mobile phones continue to rapidly expand around the world, marketers are seeking to better understand the impact these devices have on consumer outcomes. One common but understudied area is how mobile phones may influence in-store behaviors. Although prior research has investigated the many shopping-related activities consumers undertake on their phones, it is still estimated that nearly half of all in-store mobile phone use is unrelated to the shopping task. Therefore, this paper examines the impact of shopping-unrelated mobile phone use, a frequent but understudied phenomenon, on consumers’ ability to accurately manage in-store shopping plans. Using both field and experimental data, we demonstrate that shopping-unrelated mobile phone use negatively affects consumers’ ability to accurately carry out in-store shopping plans and is associated with an increase in unplanned purchasing. Furthermore, we find that consumers who are highly dependent upon mobile phones tend to be the most at risk of deviating from a shopping plan while engaging in shopping-unrelated mobile phone use. The proliferation of mobile phones has significantly altered the world in which we live. With their continual growth, mobile phones have found their way into almost all aspects of everyday life, from our sleep hygiene (e.g., the Sleepbot app) and dietary habits (e.g., the MyFitnessPal app) to our entertainment choices (e.g., mobile gaming, videos, and social media) and work practices (e.g., email, calendar apps). Today, it is estimated that 46% of all Americans check their phones at least 25 times per day (Deloitte 2017), with the average user spending approximately 3 h and 45 min daily on mobile phone use (Brustein 2014). While the rapid growth of mobile phones has changed our lives, marketers are struggling to fully understand the impact of mobile phones on consumer outcomes (Oddenino 2015). An understudied area is the role that mobile phones play in impacting in-store behaviors. It has been well documented that consumers use their mobile phones in retail environments, with 93% of consumers admitting to using their phones while out shopping (Deloitte 2017). While prior work focuses on the many shopping-related activities consumers complete on their phones (Google 2013; Nielsen 2015; Skrovan 2017), recent research notes that almost half of all in-store mobile phone use is unrelated to the shopping task (Martin 2016). Despite the prevalence of mobile phone use in stores, there is a dearth of knowledge on how these devices impact consumers. Therefore, this paper examines the impact of shopping-unrelated mobile phone use, a frequent but understudied phenomenon, on consumers’ ability to accurately carry out in-store shopping plans. Shopping-unrelated mobile phone use occurs when a consumer uses her mobile phone in a manner that is not directly related to the focal shopping trip. This includes engaging in private conversations, sending personal text messages, checking emails, surfing the Internet, listening to music, and playing games. Additionally, we are interested in adherence to shopping plans, as this is an important consideration for marketers’ and retailers’ success. It is imperative for marketers to have a better understanding of factors associated with consumers’ successful implementation of, and deviations from, their planned shopping behavior (Iyer and Ahlawat 1987).
While consumers have always had to contend with some forms of distraction (e.g. listening to in-store music or shopping with others), it is critical to single out in-store phone use for at least two reasons. First, as we have noted before, mobile phones have become inextricably immersed in modern life, with some arguing that these devices have become necessities rather than luxury goods (Dreyfuss 2017). Given their ubiquity, mobile phones are quickly becoming the principal distractor for many consumers (Poppick 2016). Second, work in other fields suggests that mobile phones offer a unique form of interruption. For example, research on distracted driving notes that mobile phone conversations impact individuals differently in comparison to in-person conversations (Drews et al. 2008). Therefore, we investigate shopping-unrelated mobile phone use as a factor that can dramatically impact adherence to shopping plans. More specifically, in this paper we are interested in the following research questions related to shopping-unrelated mobile phone use: (1) Does shopping-unrelated mobile phone use impact consumers’ ability to accurately carry out in-store shopping plans? (2) Are some consumers more susceptible to shopping errors while engaging in shopping-unrelated mobile phone use? (3) Does intermittent as opposed to continuous shopping-unrelated mobile phone use impact consumers’ ability to carry out shopping plans? Before beginning our focal investigation, we conducted a preliminary investigation to examine shoppers’ perceptions of positive and negative outcomes tied to shopping-unrelated mobile phone use. We utilized a critical incident technique (Flanagan 1954; Gremler 2004; Keaveney 1995) in which direct recollections and stories were collected from consumers regarding their in-store mobile phone use. We recruited fifty-four participants and asked them to think about a time they used their mobile phone in a shopping-unrelated manner in a retail setting. After recalling the situation in detail, we asked respondents to compare the outcomes of their described shopping trip with a similar trip in which they did not use their phones. Results showed that participants disagreed that they forgot more items than they normally would have if not using their phone, disagreed that they made more unplanned purchases than they normally would have had they not been using their phone, and disagreed that they lacked sufficient mental resources to focus on their shopping. Finally, participants agreed that there were no significant drawbacks to using their phones during the shopping trip. These results suggest that consumers perceive little impact of shopping-unrelated mobile phone use on their ability to accurately complete their shopping trip. Building on consumers’ general beliefs about in-store mobile phone use, the goal of this paper is to investigate shopping-unrelated phone use and examine whether these devices are really as innocuous as consumers believe. We study the consequences of shopping-unrelated mobile phone use on shoppers’ ability to accurately manage their shopping trip. The remainder of this article is organized as follows. In the next section, we review the relevant literature and discuss our conceptual framework outlining the use of mobile phones in store environments. Following this, we present the results of a field study investigating consumers’ use of mobile phones in a real-world retail setting.
We then present the results of two experiments assessing the impact of mobile phone use on consumers’ ability to accurately carry out in-store shopping plans. Finally, we close with a discussion of the implications for research and practice. Theoretical background Mobile devices, mobile phones, and mobile marketing Using mobile devices, consumers are now able to access information in almost any location, including homes, offices, and shopping centers. “Mobile device” is an all-purpose term used to describe any handheld computer, including technologies such as tablets, e-readers, mobile phones, PDAs, and personal music players (Viswanathan 2017). The most ubiquitous mobile devices are mobile phones, which are owned by 83% of consumers in the United States and 66% of consumers worldwide (Statista 2018). Given the rapid adoption of mobile phones, managers have become increasingly interested in integrating mobile phones into their marketing strategy. Consequently, research on mobile marketing has been steadily increasing (Lamberton and Stephen 2016; Shankar et al. 2016). Prior work in this area has focused on service delivery via mobile phones (Kleijnen et al. 2007), the prediction of mobile app demand (Garg and Telang 2013), mobile phone advertising and promotions (Andrews et al. 2015; Bart et al. 2014; Danaher et al. 2015), and mobile shopping (Wang et al. 2015b). Despite growth in mobile phone use and increasing research on mobile marketing, little research has assessed consumer use of mobile devices in retail environments (Shankar et al. 2016), with a couple of noteworthy exceptions. First, Hui et al. (2013) investigate the impact of in-store travel distance on unplanned purchasing and demonstrate that mobile promotions focusing on increasing distance traveled can be an effective tool to increase unplanned spending. Second, van Ittersum et al. (2013) show that “smart” shopping carts (carts equipped with technology which allows consumers to track real-time spending during the shopping trip) differentially impact budget and non-budget shoppers. These authors find that real-time spending feedback increases spending for budget shoppers due to an increase in national brand purchases, while non-budget shoppers tend to spend less. While these studies begin to bridge mobile devices with in-store environments, the scope of the mobile device use is relatively limited. For example, though the work of Hui et al. (2013) demonstrates the immense potential of mobile promotion technology as a means of influencing travel distance and unplanned purchases, the authors do not actually deliver promotions via mobile phone. Instead, they simulate the delivery of mobile coupons by intercepting shoppers and providing them with the promotion before shopping. In the case of van Ittersum et al. (2013), the focus on shopping cart technology is important, but does not specifically consider mobile phones. Therefore, we build upon this work to address how shopping-unrelated mobile phone use may impact a critical outcome for marketers: consumers’ ability to accurately manage a shopping trip. Shopping plan implementation Marketing scholars and practitioners have long been interested in factors which impact consumers’ ability to accurately carry out in-store shopping plans (Iyer and Ahlawat 1987). Prior research has focused on both planned and unplanned purchases to better understand consumers’ implementation of their shopping plans (Bucklin and Lattin 1991; Park et al. 1989a).
Planned items are products that have been predetermined by the shopper prior to entering the store environment and can be premeditated to the brand level or category level (Inman et al. 2009). Conversely, unplanned items are products that the shopper did not plan to purchase prior to entering the store (Park et al. 1989a). In investigating accurate shopping plan implementation, prior research has considered both in-store and out-of-store elements. For example, factors such as shopper household size, store familiarity, product category hedonicity, shopping time, travel distance, shopping goal abstractness, and store selection motives have all been positively associated with unplanned buying behavior (Bell et al. 2011; Hui et al. 2013; Inman et al. 2009; Kollat and Willett 1967; Park et al. 1989a). Similarly, characteristics such as the use of product coupons, increased shopping frequency, and formalized list generation all help consumers carry out their shopping as planned (Block and Morwitz 1999; Inman et al. 2009; Thomas and Garland 1993). Taking into account the importance of in-store decisions for retailers’ success or failure, as well as explicit calls to better understand the impact of mobile phone use on consumer outcomes (e.g. Lamberton and Stephen 2016; Shankar et al. 2011; Shankar et al. 2016), we consider the effect of in-store shopping-unrelated phone use on consumers’ ability to accurately adhere to their shopping plans. Specifically, we investigate the impact of shopping-unrelated mobile phone use on unplanned purchasing and missed planned items. Mobile phone use, mobile phone dependence, and shopping plan implementation When using a mobile phone in a shopping-unrelated manner during a shopping task, consumers engage in a form of concurrent multitasking in which they are completing two significant tasks at the same time. Therefore, in the following sections, we build upon resource theories of information processing to predict the impact of using a mobile phone during a shopping trip on consumers’ ability to accurately carry out their shopping plans. In addition, we consider mobile phone dependence, an increasingly worrying phenomenon tied to mobile phone use, as a potential moderating variable. Figure 1 outlines our conceptual framework discussed in the following sections. Fig. 1 Shopping-unrelated mobile phone use conceptual framework Resource theories of information processing and shopping-unrelated mobile phone use The driving premise behind resource theories of information processing is that individuals have a limited amount of cognitive resources to process information (e.g., Kahneman 1973; Lang 2000; Norman and Bobrow 1975; Navon and Gopher 1980; Wickens 1984). Therefore, when engaging in multiple tasks, individuals must allocate processing resources among concurrent tasks. For example, Lang’s (2000) limited capacity model of information processing argues that information processing consists of simultaneously occurring sub-processes (encoding, storage, and retrieval) that individuals enact on stimuli. Importantly, when processing demands exceed available cognitive resources, multitasking performance often deteriorates (Lang 2000; Mayer and Moreno 2003). Unlike the limited capacity model, Wickens’ (1984) multiple resource theory contends that there are multiple pools of processing resources for individuals to tap.
However, while the multiple resource theory asserts that processing resources may be conceptually distinct, this theory does not imply that truly frictionless resource sharing is possible. As the number or complexity of tasks increases, it becomes highly likely that distraction will result in task interference (Wickens 2008). When consumers use a mobile phone during their shopping, they engage in multiple tasks that may ultimately impact their ability to accurately manage their shopping trip. Consistent with resource theories of information processing, we argue that using a phone in a shopping-unrelated manner during a shopping trip will lead to significant distraction and subsequently impact consumers’ ability to accurately carry out their shopping plan. This prediction stems from the fact that both shopping decision making (Inman and Winer 1998; Inman et al. 2009) and the use of mobile phones (Drews et al. 2008; Strayer et al. 2003; Strayer and Johnston 2001; Hyman et al. 2010) place significant demands on mental resources. Additionally, both tasks require the use of visual and verbal processing resources and are therefore likely to compete for the same pool of cognitive resources. Finally, the concurrent tasks being performed are unlikely to share a common goal, which makes the processing of these tasks less effective (Wang et al. 2015a). Given that shopping-unrelated mobile phone use can alter shoppers’ levels of distraction, it is important to consider its impact on two critical components of accurate shopping plan implementation: unplanned purchases and missed planned items. We first consider the impact of in-store mobile device use on the number of unplanned purchases. Previous research on self-regulation and resource depletion illustrates that acts of volition draw on a common inner resource similar to strength or energy (Baumeister et al. 1998). Evidence suggests that cognitive overload can interfere with individuals’ self-regulatory behaviors, as demonstrated by people who deviate from diets while experiencing periods of high stress (Herman and Polivy 2003) or fail at self-control when cognitively taxed (Baumeister et al. 1998; Vohs and Faber 2007). As we have already discussed, when using a mobile phone in a shopping-unrelated manner, consumers are often engaging in cognitively demanding tasks that require divided attention and resource allocation. Therefore, we argue that the cognitive and attentional requirements of in-store multitasking will tax consumers’ self-regulatory resources and lead to deviations from the shopping plan in the form of increased unplanned purchases. This expectation is consistent with the research of Shiv and Fedorikhin (1999), who find that under conditions of low processing capabilities, individuals’ choices are driven by affective reactions to choice options as opposed to cognitions. When relying on affective reactions to products, consumers are likely to make more impulse decisions (Shiv and Fedorikhin 1999). Turning next to the purchase of planned items, we contend that shopping-unrelated phone use will also affect accurate shopping plan implementation by influencing the number of missed planned items. When not using a shopping list, consumers must actively recall all of the planned items they wish to purchase. Prior research has found that divided attention during recall significantly limits individuals’ ability to retrieve information (Craik et al. 1996; Park et al. 1989b). For example, Craik et al.
(1996) presented participants with a set of common nouns and asked individuals to recall these words while either participating or not participating in a demanding secondary task. The authors found that engaging in a secondary task significantly impaired individuals’ ability to recall information, with recall in the divided attention condition approximately 11% lower than in the full attention condition. Conversely, when using a shopping list, consumers must accurately identify and process all of the information on the list. Gardiner and Parkin (1990) found that divided attention while reading words impaired individuals’ processing of the information and resulted in subsequent failure to recollect seeing words. Moreover, both Craik et al. (1996) and Park et al. (1989b) found a significant impact of divided attention on encoding and processing of word lists. Taken together, these results suggest that shopping-unrelated phone use should impact consumers’ ability to recall products to be purchased and their ability to process and manage shopping lists. Therefore, we predict that distraction from mobile devices will result in consumers failing to purchase planned items. Mobile phone dependence One increasingly important construct that may impact consumers’ ability to accurately manage a shopping task while using a mobile phone is mobile phone dependence. Mobile phone dependence is a form of psychological reliance on a mobile phone and is often characterized by excessive use of, and reliance on, the device (Baker 2017). Phone dependence is becoming a progressively worrying phenomenon, with some estimating that over 175 million individuals worldwide are dependent on mobile phones (Feeney 2014). Prior work has found that extraversion, low self-esteem, materialism, emotional instability, and approval motivation are all associated with mobile phone dependence (Bianchi and Phillips 2005; Hong et al. 2012; Takao et al. 2009; Roberts et al. 2015). Thus far, we have argued that shopping-unrelated mobile phone use leads to increased cognitive distraction, which negatively impacts consumers’ ability to accurately carry out their shopping (both in the form of increased unplanned purchasing and missed planned items). Furthermore, we argue that mobile phone dependence will moderate this process, with those higher in mobile phone dependence showing greater deviations from their shopping plans. Figure 1 provides a visual presentation of our conceptual framework. As shown in Fig. 1, we believe that mobile phone dependence will moderate one of two paths: (A) the link between shopping-unrelated mobile phone use and distraction, or (B) the link between distraction and shopping plan accuracy. Focusing first on the path between shopping-unrelated phone use and distraction, prior work has demonstrated that the mere presence of a mobile phone can inhibit cognitive resources available for tasks, with those who are highly dependent on their phones showing the most detriment (Ward et al. 2017). Therefore, mobile phones may generate greater levels of distraction for consumers who are highly dependent on mobile phones, which can result in difficulties accurately managing shopping plans. Alternatively, mobile phone dependence may impact the path between distraction and shopping plan accuracy.
In this case, consumers high in mobile phone dependence may find it more difficult to manage multiple tasks, despite experiencing levels of distraction comparable to those of other consumers. This argument is consistent with prior research that has tied mobile phone dependence to general attentional impulsivity, suggesting that these individuals have trouble focusing on any task, regardless of the presence of an additional distractor (Roberts et al. 2015). This perspective is also supported by research highlighting frequent media multitaskers’ difficulty filtering irrelevant environmental stimuli (Ophir et al. 2009). Due to the increasing prevalence of, and interest in, mobile phone dependence, it is important to investigate how this factor impacts consumers’ ability to adhere to a shopping plan while using a mobile phone. Moving forward, we report the results of an in-store field study and two experiments designed to test our predictions about the impact of shopping-unrelated mobile phone use on consumer shopping plan adherence. In Study 1, we investigate consumer plan adherence in a real in-store environment using shoppers’ unplanned items as our focal dependent variable. In Study 2, we further explore the impact of shopping-unrelated mobile phone use, distraction, and mobile phone dependence on consumers’ ability to accurately carry out in-store shopping plans by investigating missed planned items. Finally, in Study 3 we examine intermittent shopping-unrelated phone use, in which consumers use their phones periodically throughout the shopping trip rather than for its entire duration. Study 1 Interestingly, the results of our preliminary investigation revealed that consumers do not appear to recognize any drawbacks to using a mobile phone in store environments. Might consumers be correct in believing that in-store mobile phone use plays little role in affecting in-store outcomes? In Study 1, we use a novel data set to examine the impact of shopping-unrelated mobile phone use on actual shopping outcomes. We assess how mobile phone use impacts deviations from planned shopping behavior by considering unplanned purchasing. This focus is consistent with prior research in shopper marketing (Inman et al. 2009; Shankar et al. 2011). Procedure Study 1 employs data from the 2013 Point of Purchase Advertising International (POPAI) Shopper Engagement study. POPAI is a global non-profit trade association that conducts research and offers educational opportunities related to shopper marketing. Working with POPAI, we added a question in the exit interview asking shoppers about their smartphone or cellular phone use during the shopping trip. As part of this research, over 2600 shoppers across four broad US geographic census regions were intercepted before entering mass merchandisers. Shoppers completed a ten-minute entry interview that gathered information on their shopping plans and preliminary shopping information. After completing the shopping trip, interviewers collected information from shoppers on items purchased, store perceptions, and demographics. Previous research has shown that the pre- and post-shopping interview technique applied in the POPAI study does not affect consumer spending (e.g., Kollat and Willett 1967; Stilley et al. 2010a). Due to missing or incomplete responses, the usable sample of respondents was 2520 (79.1% female). Focal measures Our central focus in this study is the impact of in-store mobile device use on shoppers’ unplanned purchasing.
We focus on unplanned purchases as this measure provides insight into deviations from the shoppers’ plan and is therefore critical to retailers’ success (Inman et al. 2009; Iyer 1989; Kollat and Willett 1967; Stilley et al. 2010a). The number of unplanned purchases was operationalized as the total number of items that were purchased by the shopper but were not planned prior to beginning the shopping trip. We use three variables to capture shopper mobile phone use (Related, Unrelated, and Both). Mobile phone use was classified as shopping-related if the respondent indicated they used their phone to compare prices of products, to compare different retailers for the best price, to look at a manufacturer’s website, to look at a retailer’s website, to access a retailer’s shopping or loyalty app, to create, store, or access a shopping list, to scan a QR code on a package, to use their device as a calculator, and/or to call someone for help with a decision (302 shoppers, or 11.98% of the sample, were classified as Related). Mobile phone use was classified as shopping-unrelated if the respondent indicated they used their phone to engage in a private conversation with another individual, check or send emails, look at websites unrelated to the shopping trip, send personal text messages, listen to music, and/or play games (280 shoppers, or 11.11% of the sample, were classified as Unrelated). Finally, shoppers who indicated that they used their mobile phone in at least one shopping-related and one shopping-unrelated manner during the trip fell into the Both category (157 shoppers, or 6.23% of the sample, were classified as Both). For all of our analyses, we compare these groups to shoppers not using a mobile phone during their shopping trip (1781 shoppers, or 70.67% of the sample, were classified as No Phone Use). See Table 1 for the mobile usage categories collected and the number of shoppers within each category. Table 1 Mobile phone usage type and frequency: Study 1 Along with these three mobile phone use categories, we included a number of important shopping variables and demographics in our models as controls, in line with prior research (Hui et al. 2013; Inman et al. 2009; Stilley et al. 2010a). This included variables such as consumer impulsiveness, trip time, use of a handwritten shopping list, basket size, aisles shopped, shopping with others, gender, age, income, and household size. Descriptions of these measures are reported in Appendix Table 7. Results In modeling the number of unplanned items purchased by shoppers, we estimate a Poisson model with the number of unplanned items as the dependent variable. To account for overdispersion in the model, we include a dispersion parameter that provides a correction term when estimating the model (McCullagh and Nelder 1989). This approach allows for proper inference when overdispersion is modest (Cox 1983) and is the conventional approach when running a Poisson analysis (Pedan 2001). Overdispersion is the occurrence of greater variability than would be expected under the assumed model and occurs frequently in applied analysis of count data (Barron 1992). Specifically, we include our mobile phone use categories as a single class variable with four levels (Related, Unrelated, Both, and No Use) and our control variables in the model. In our analyses, the comparative reference for our three mobile categories is those shoppers not using a mobile phone (no use). Table 2 shows all of the variables and the results.
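As an illustration of this kind of estimation, a quasi-Poisson specification in R, which estimates exactly such a dispersion correction, is sketched below; the data frame and variable names are hypothetical stand-ins rather than the authors' code.

# Poisson mean model with an estimated dispersion parameter that rescales
# standard errors to account for overdispersion (quasi-Poisson).
fit <- glm(
  unplanned_items ~ phone_use +   # factor: No Use / Related / Unrelated / Both
    impulsiveness + trip_time + list_use + basket_size + aisles_shopped +
    shopped_with_others + gender + age + income + household_size,
  family = quasipoisson(link = "log"),
  data = shoppers
)
summary(fit)

# Pearson chi-square divided by residual degrees of freedom; values near
# one indicate an adequate fit, values well above one indicate overdispersion.
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)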
The ratio of the Pearson χ2 to its degrees of freedom provides a means of assessing the adequacy of the model (Ramsey and Schafer 1997). Ratios that are close to a value of one indicate a good-fitting model, whereas ratios significantly above one indicate overdispersion and ratios significantly below one indicate underdispersion. The ratio for our Poisson model is close to one, indicating a good-fitting model. In addition, Table 2 illustrates that the full model significantly outperforms the null model (χ2 (15) = 4495.35, p < .001). Table 2 Mobile phone use and unplanned purchases: Study 1 Previously we argued that shopping-unrelated mobile phone use would negatively impact consumers’ ability to manage their shopping. Consistent with this prediction, when compared to those not using a mobile phone, shoppers using their phones in a shopping-unrelated manner purchased significantly more unplanned items (β Unrelated = 0.0906, p < .05). Using a mobile phone in a manner unrelated to the shopping task increased unplanned items by an average of 9%. Furthermore, though not the primary focus of this research, we found that shoppers using their mobile phones in a shopping-related manner purchased fewer unplanned items (β Related = −0.1316, p < .01). Using a mobile phone in a shopping-related manner was associated with a decrease in unplanned items by an average of 13%. This result aligns with prior findings that shopping-related mobile phone use may make consumers better shoppers (Google 2013; Nurun 2013). We note that whether someone used their mobile phone while shopping, and specifically the nature of that use (Related, Unrelated, or Both), could be endogenous due to selection. In other words, consumers self-selected into mobile phone use and how they used it; therefore, it is possible that some consumers might be systematically more or less prone to use a mobile phone in various ways when shopping. As a robustness check, we addressed this through a control function approach (Petrin and Train 2010). This is a two-stage estimation procedure. In the first stage, the potentially endogenous variable (in this case, type of mobile phone use) is modeled as a function of exogenous covariates (demographic characteristics of the shopper). Since type of use is a categorical variable, the control function was estimated as a multinomial probit model. In the second stage, the response model is estimated with residuals from the first-stage model added as additional covariates. The same type of Poisson regression model was estimated for this purpose. After accounting for type of mobile use, the main results were substantively unchanged; shoppers using their phones in a shopping-unrelated manner purchased significantly more unplanned items, and shoppers using their phones in a shopping-related manner purchased significantly fewer unplanned items (see Table 3 for control function results). Since this robustness check indicates that selection of type of mobile use does not change our findings, all subsequent analyses are based on the simpler models (i.e., without control functions). Table 3 Control and response function results: Study 1 Category analysis: Types of unplanned items While we find results consistent with our main predictions, we are also interested in the nature of the unplanned items shoppers purchased. In our conceptualization we argued that shopping-unrelated use would make it more difficult to accurately manage a shopping task due to distraction.
Therefore, we would expect that the nature of the unplanned items would differ depending upon mobile phone use. In particular, we assess the degree of hedonicity of unplanned items, as prior research has shown that hedonic products are preferred when consumers are distracted (Shiv and Fedorikhin 1999; Vohs and Faber 2007). Hedonic products are often considered to be vices that are decadent, excessive, or impractical (Dhar and Wertenbroch 2000; O'Curry and Strahilevitz 2001). If significant distraction is playing an important role in unplanned items for those using phones in a shopping-unrelated manner, we expect the unplanned items to be more hedonic in nature when compared to shoppers not using a phone. To investigate the nature of the unplanned items purchased by shoppers, we consider the nature of the category. In the POPAI field data, there were a total of 244 unique categories from which shoppers made purchases. This includes an array of categories such as adhesives, boys' apparel, dairy, soup, yogurt, etc. To evaluate the nature of these categories, we tasked ten human judges with appraising and coding every product category on four specific dimensions: excessiveness of the product category, extravagance of the product category, indulgence of the product category, and the degree to which the product category was a vice. All dimensions were measured on a seven-point scale, with 1 corresponding to a low level of the dimension and 7 corresponding to a high level. The four items were averaged for analysis (α = 0.98). While completing the evaluation task, judges saw the name of the product category (e.g., "adhesives") and three pictures of example products that would be part of this category (e.g., clear tape, a hard glue stick, and a bottle of liquid glue). To focus attention on the overall category, judges were told to think about the product category as a whole and not about specific brands that might be within the category. This evaluation task resulted in overall average category hedonicity measures for each of the 244 unique product categories. We next used these overall evaluations for each product category to assess the degree of hedonicity of the basket of unplanned items purchased by each shopper in the dataset. For example, if a shopper purchased five unplanned items during the trip (e.g., Dannon yogurt, Elmer's glue, a Hershey's chocolate bar, a women's blouse, and Green Giant frozen peas), we averaged the category hedonicity values for these five purchases (i.e., the category ratings for yogurt, adhesives, chocolate candy, women's apparel, and frozen vegetables) to generate an estimate of how hedonic the shopper's basket of unplanned items was. Using this information, we could directly assess the association between differing types of mobile device use and the average hedonicity of shoppers' unplanned purchases. In modeling the hedonic nature of unplanned purchases, we estimate a standard OLS regression model with the average category hedonicity for unplanned items as the dependent variable. Consistent with prior models, we include our mobile device usage categories and all control variables in the model. The results of this analysis are reported in Appendix Table 8. As expected, we find a significant relationship between consumer impulsiveness and average category hedonicity for unplanned items (β Impulsiveness = 0.0562, p < .05).
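The construction of this dependent variable reduces to two averaging steps, sketched below with pandas; the data frames and column names are hypothetical stand-ins for the judge ratings and the POPAI purchase records.

```python
# Sketch: from judge ratings to a basket-level hedonicity score per shopper.
import pandas as pd

# One row per (judge, category): the four 7-point items, averaged (alpha = .98).
ratings = pd.read_csv("judge_ratings.csv")  # hypothetical
ratings["hedonicity"] = ratings[["excessive", "extravagant",
                                 "indulgent", "vice"]].mean(axis=1)

# Average over the ten judges: one score per product category (244 total).
category_scores = ratings.groupby("category")["hedonicity"].mean()

# One row per unplanned item purchased; map each item to its category score
# and average within shopper to get the basket-level dependent variable.
unplanned = pd.read_csv("unplanned_items.csv")  # hypothetical
unplanned["hedonicity"] = unplanned["category"].map(category_scores)
basket_hedonicity = unplanned.groupby("shopper_id")["hedonicity"].mean()
print(basket_hedonicity.describe())
```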
Importantly, and consistent with our conceptualization, we find that the unplanned items purchased by shoppers using their device in a shopping-unrelated manner are significantly more hedonic than those purchased by shoppers not using mobile devices (β Unrelated = 0.0774, p < .05). This finding supports our argument that cognitive distraction associated with shopping-unrelated device use influences the nature of shoppers' decisions. Additionally, when compared to shoppers not using mobile devices, we detect no difference in category hedonicity of unplanned items for those using their mobile devices in a shopping-related manner (p > .40). Discussion In contrast to consumer lay beliefs, Study 1 demonstrates the considerable impact that shopping-unrelated phone use has on consumers' ability to accurately manage a shopping trip. Consistent with prior research, our results suggest that shoppers using their phones in a shopping-related manner will be better equipped to stay on track during the shopping trip. More critically, however, when phones are used in a shopping-unrelated manner, our results indicate that shoppers may have a difficult time fulfilling their shopping trip as planned. Furthermore, the increased hedonicity of unplanned purchases made by shoppers using their mobile phone in a shopping-unrelated manner illustrates that this type of in-store mobile phone use is significantly more distracting than consumers believe. Study 2 Study 1 demonstrated the significant real-world effects of in-store mobile phone use on consumers' ability to accurately complete a shopping trip. In particular, Study 1 analyzed deviations from shopping plans in the form of unplanned purchasing. As discussed in our conceptual development, another important part of shopping plan adherence is the purchase of planned items. Therefore, in Study 2 we utilize a simulated shopping task to further assess consumers' ability to accurately manage a list of planned items while engaging in shopping-unrelated mobile phone use. In particular, we assess the number of planned items that are missed by consumers. Moreover, in this study we directly investigate the mediating role of distraction in driving the results. Finally, we also evaluate the influence that mobile phone dependence has on consumers' ability to accurately manage a shopping task while using a mobile phone. Specifically, we test the proposed moderated mediation process whereby the mediating effect of distraction is moderated by consumers' mobile phone dependence. Procedure One hundred sixteen participants (48% female; average age = 35.5 years, range 20 to 64 years) recruited using Amazon Mechanical Turk participated in Study 2 in exchange for a small monetary incentive. Study 2 employed a between-subjects design with two levels of mobile phone use: control (no phone use) and shopping-unrelated phone use. All participants completed a grocery shopping task in which they watched an approximately nine-minute first-person-perspective shopping video. This video was created using a pair of video recording glasses which captured a real-life first-person-perspective grocery shopping trip in high definition (1080p). In the shopping video, an individual pushed a cart through a grocery store and placed items in the cart to be purchased. The individual in the video also picked up and inspected some items without putting them in the grocery cart. Each participant was asked to imagine that they were the person shopping in the video.
Prior to beginning the task, participants saw a list of fifteen grocery items that they intended to purchase during the trip. While watching the video, it was the participants' job to keep track of these items and identify which products were placed in the cart and which products were picked up by the shopper but not put in the cart. To do this, participants selected each item from a drop-down list (see Appendix Fig. 2). In total, nine of the items were put in the shopping cart and six of the items were picked up but not put in the shopping cart. After reading about the task, participants viewed a layout screen that provided specific directions on the shopping task and demonstrated the arrangement of all parts of the task so that participants could familiarize themselves with the design prior to beginning. To manipulate mobile phone use, participants were randomly assigned to one of the two conditions (no phone use or shopping-unrelated phone use). Participants in the shopping-unrelated use condition engaged in a simulated phone conversation in which they listened to a phone exchange between two individuals while they completed the shopping task. Simulated conversations have been used in prior research and provide a suitable proxy for real-world mobile phone conversations (Drews et al. 2008). In this condition, the conversation was not applicable to the shopping video being watched and lasted for the entire duration of the shopping trip. The conversation focused on the individuals' professional lives (e.g., how's your job going?), past experiences (e.g., discussing the past weekend), and upcoming plans (e.g., discussing a future vacation). Conversely, participants in the control group completed the focal shopping task without listening to a phone conversation. These participants therefore served as a no-phone-use control. After completing the focal shopping task, participants completed measures of distraction, mobile phone dependence, and demographics. We used five Likert items (measured on a scale from 1 "strongly disagree" to 7 "strongly agree") to capture distraction during the shopping task ("I was distracted while keeping track of the shopping items"; "I was totally focused on the shopping trip" [reverse scored]; "I had a hard time focusing on the shopping task"; "I had difficulty maintaining focus on the shopping task"; "It was easy to focus on the shopping task" [reverse scored]), which were averaged for analysis and mean-centered (α = .90, M = 3.41, s.d. = 1.78). To measure mobile phone dependence we used Bianchi and Phillips' (2005) 20-item mobile phone use scale (measured on a scale from 1 "not at all true" to 10 "extremely true"), which was also averaged for analysis and mean-centered (α = .96, M = 3.48, s.d. = 2.01; sample items: "I can never spend enough time on my mobile phone," "I find it difficult to switch off my mobile phone," "I find myself engaged on the mobile phone for longer periods of time than intended"); see the scoring sketch below. Results Mobile phone use and accurate adherence to shopping plans To measure adherence to a shopping plan, we assessed the total number of planned grocery items that participants missed during the shopping task. This focal variable was the sum of the number of items that participants failed to correctly identify as being placed in the shopping cart and the number of items that participants failed to correctly identify as being picked up but not placed in the shopping cart.
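The scale scoring just described (reverse-code, average, mean-center) can be sketched as follows; the item column names are hypothetical.

```python
# Sketch: scoring the distraction and dependence scales.
import pandas as pd

df = pd.read_csv("study2.csv")  # hypothetical

# Reverse-scored items on a 7-point Likert scale: recode as 8 - response.
for col in ["dist2_rev", "dist5_rev"]:
    df[col] = 8 - df[col]

dist_items = ["dist1", "dist2_rev", "dist3", "dist4", "dist5_rev"]
df["distraction"] = df[dist_items].mean(axis=1)
df["distraction_c"] = df["distraction"] - df["distraction"].mean()

# 20-item dependence scale on a 10-point response format (no reversed items
# assumed here), averaged and mean-centered the same way.
dep_items = [f"dep{i}" for i in range(1, 21)]
df["dependence"] = df[dep_items].mean(axis=1)
df["dependence_c"] = df["dependence"] - df["dependence"].mean()
```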
An ANOVA revealed a significant effect of mobile phone use on adherence to a shopping plan, F(1, 114) = 7.96, p < .01, such that participants in the shopping-unrelated condition (M = 3.30) missed more planned items when compared to participants in the control condition (M = 2.18). Moderated mediation analysis: Distraction and mobile phone dependence Previously we argued that shopping-unrelated mobile phone use would differentially impact consumers' levels of distraction due to the divergent relationships between the mobile phone use and the shopping task. In Study 1, we observed evidence that shopping-unrelated phone use may be sufficiently distracting to impact shoppers' unplanned items. In addition to the mediating role of distraction, our theoretical development proposed that consumers' mobile phone dependence would moderate this mediating relationship, impacting either the unrelated mobile phone use to distraction path or the distraction to shopping plan adherence path. To directly test this moderated mediation model, we applied PROCESS Model 58 (Hayes 2013; Preacher and Hayes 2008); the logic of this test is sketched in the code below. Table 4 shows the full results of the moderated mediation analysis. In this analysis, we used a shopping-unrelated effects-coded variable (overall mean vs. shopping-unrelated use) as the independent variable, number of missed planned items as the dependent variable, mean-centered distraction as the mediator, and mean-centered mobile phone dependence as the moderator. Table 4 Moderated mediation of the effect of mobile phone use on number of missed planned items: Study 2. Overall, the results revealed that mobile phone dependence did not moderate the shopping-unrelated phone use to distraction path, as the index of moderated mediation for this path was nonsignificant (β = −0.03; 95% CI = −0.10, 0.03). However, the index of moderated mediation for the distraction to number of missed planned items path was significant (β = 0.25, 95% CI = 0.08, 0.37), indicating that mobile phone dependence does in fact moderate the distraction to number of missed planned items relationship. In this case there was a positive indirect effect of shopping-unrelated mobile use on missed planned items through distraction for participants with average levels of mobile phone dependence (β = 0.52, 95% CI = 0.21, 0.94) and for participants with high levels of mobile phone dependence (β = 0.92, 95% CI = 0.34, 1.51). However, there was not a significant indirect effect of shopping-unrelated mobile use on missed planned items through distraction for participants with low levels of mobile phone dependence (β = 0.01, 95% CI = −0.32, 0.41). Floodlight analysis The results of our moderated mediation analysis demonstrate that consumers with moderate to high mobile phone dependence tend to deviate more from their shopping plans. To better understand the proportion of consumers who may be at risk of this deviation, we conducted a floodlight analysis. A floodlight analysis highlights the range of significance and insignificance for a simple effect (Hayes and Matthes 2009; Spiller et al. 2013).
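The Model 58 logic, in which the moderator enters both the X→M and M→Y paths and conditional indirect effects are bootstrapped, can be sketched as below. This is an illustrative re-implementation under assumed column names (x effects-coded, m and w mean-centered), not the PROCESS macro itself.

```python
# Sketch: bootstrapped conditional indirect effects (PROCESS Model 58 logic).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")  # hypothetical: x, m, w, y columns

def conditional_indirect(data, w):
    a = smf.ols("m ~ x * w", data=data).fit().params      # mediator model
    b = smf.ols("y ~ x + m * w", data=data).fit().params  # outcome model
    # Indirect effect at moderator value w: (a1 + a3*w) * (b1 + b3*w).
    return (a["x"] + a["x:w"] * w) * (b["m"] + b["m:w"] * w)

rng = np.random.default_rng(0)
sd = df["w"].std()
for label, w in [("low", -sd), ("average", 0.0), ("high", sd)]:
    boot = [conditional_indirect(df.sample(len(df), replace=True,
                                           random_state=rng), w)
            for _ in range(5000)]  # PROCESS-style bootstrap resamples
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"{label} dependence: indirect = {conditional_indirect(df, w):.2f}, "
          f"95% CI = [{lo:.2f}, {hi:.2f}]")
```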
In the context of this study, the floodlight analysis identified the range of mobile dependence values for which there is a significant difference in missed planned items between the shopping-unrelated phone use condition and the control condition, and the range of mobile dependence values for which there is not a significant difference between these conditions. The results of the floodlight procedure revealed that participants scoring above an average value of 2.78 on the mobile dependence scale missed more planned items when engaging in shopping-unrelated mobile phone use when compared to those in the control condition (all p's for values over 2.78 < .05). Furthermore, 54% of the sample had values on the mobile phone dependence scale above 2.78. Discussion The results of Study 2 replicate and extend the findings from Study 1 and offer a number of important insights. First, we again find evidence that shopping-unrelated mobile phone use interferes with consumers' ability to accurately manage a shopping task. In particular, shopping-unrelated mobile phone use resulted in significantly more missed planned items. Second, our mediation analysis supports our argument that shopping-unrelated phone use is significantly more distracting than consumers realize and negatively impacts consumers' ability to accurately manage the shopping task. Third, our results establish that mobile phone dependence plays an important role in influencing consumers' ability to accurately complete their shopping. Specifically, the results of our moderated mediation analysis demonstrate that consumers who are higher in mobile phone dependence have a tougher time managing their shopping. Importantly, we did not find evidence that consumers who are average to high in mobile phone dependence find multitasking more distracting than other consumers (i.e., mobile phone dependence did not moderate the shopping-unrelated phone use to distraction path); rather, our results suggest that these consumers perform worse with their shopping when dealing with comparable levels of distraction (i.e., mobile phone dependence moderated the distraction to number of missed items path). Thus, when dealing with comparable levels of distraction, those higher in mobile phone dependence exhibit significant decrements in their ability to accurately manage their shopping trips. This finding is consistent with prior work tying mobile phone dependence to attentional impulsivity and with research on dual-task performance showing that frequent media multitaskers tend to perform worse when dealing with multiple tasks (Ophir et al. 2009; Roberts et al. 2015). Finally, the results of our floodlight analysis suggest that a sizeable proportion of consumers demonstrate phone dependence levels that put them at risk of deviating from their shopping plans. Study 3 The results of Study 2 indicate that shopping-unrelated mobile phone use significantly impacts consumers' ability to accurately manage planned shopping items. However, in that study, participants used their phones for the entire shopping trip. What about situations in which shoppers do not use their phones for their entire shopping trip? Might shoppers still struggle to manage their shopping trip if they only use their mobile phones intermittently throughout a trip? In Study 1, it is likely that some shoppers used their mobile phones for the entire duration of the shopping trip while other shoppers only used their phones sparingly.
Therefore, the purpose of Study 3 is to investigate situations in which consumers do not use their mobile phones continuously throughout the shopping task. Procedure One hundred fifteen participants (55% female; average age = 34.4 years, range 19 to 66 years) recruited using Amazon Mechanical Turk participated in Study 3 in exchange for a small monetary incentive. This study used a between-subjects design with two levels of mobile phone use (no phone use and shopping-unrelated phone use) in which participants completed the same grocery shopping task used in Study 2. Once again participants watched a first-person grocery store shopping video and tracked items being placed in the shopping cart and items picked up but not put in the cart. Participants were randomly assigned to one of the two mobile phone use conditions (no phone use or shopping-unrelated use). In the control condition, participants watched the standard shopping video without the use of a mobile phone. Conversely, participants in the shopping-unrelated phone use condition periodically received a series of push notifications from a mobile news app, which were displayed in the upper left-hand corner of the shopping video (see Appendix Fig. 3). A push notification is a message that pops up on a user's mobile phone and is used by mobile apps to bring information to individuals' attention (Nations 2018). These notifications appear as a short message and often play a sound to alert the user of the new information. In total, four news push notifications were sent to participants throughout the shopping trip. For each push notification, participants heard a "ding" to indicate reception of a new mobile phone push notification. Each time a push notification was received, the shopping video froze for a period of 10 s, allowing participants time to read the message. Finally, to maintain realism, participants in both conditions were free to pause the shopping video whenever they wished. After completing the shopping task, participants completed the same measures of distraction, mobile phone dependence, and demographics used in Study 2. We averaged and mean-centered the measures of distraction (α = .88, M = 3.41, s.d. = 1.78) and Bianchi and Phillips' (2005) mobile phone dependence scale (α = .96, M = 3.30, s.d. = 2.00) for our analysis. Results Mobile phone use and accurate adherence to shopping plans To measure consumers' ability to accurately carry out shopping plans, we again evaluated the total number of missed planned grocery items for participants during the shopping task. An ANOVA revealed a significant effect of mobile phone use on accurate adherence to shopping plans, such that participants in the shopping-unrelated condition (M = 3.09) had more missed planned items compared to participants in the control condition (M = 1.68), F(1, 113) = 7.67, p < .01. Moderated mediation analysis: Distraction and mobile phone dependence To investigate the impact of mobile phone dependence on the relationship between shopping-unrelated mobile device use and accurate shopping plan adherence through distraction, we used PROCESS Model 58 to test for moderated mediation (Hayes 2013; Preacher and Hayes 2008). Table 5 shows the full results of the moderated mediation analysis. In accordance with Study 2, we used a shopping-unrelated effects-coded variable (overall mean vs.
shopping-unrelated use) as the independent variable, number of missed planned items as the dependent variable, mean-centered distraction as the mediator, and mean-centered mobile phone dependence as the moderator. Table 5 Moderated mediation of the effect of mobile phone use on number of missed planned items: Study 3. Consistent with Study 2, results revealed that mobile phone dependence did not moderate the shopping-unrelated mobile phone use to distraction path, as the index of moderated mediation for this path was nonsignificant (β = 0.00; 95% CI = −0.10, 0.14). However, for the distraction to number of missed planned items path, results revealed a significant index of moderated mediation (β = 0.27; 95% CI = 0.09, 0.45), indicating that mobile phone dependence moderates this relationship. More specifically, we again found a positive significant indirect effect of shopping-unrelated mobile phone use on the number of missed planned items through distraction for participants with average levels of mobile phone dependence (β = 0.43; 95% CI = 0.08, 0.90) and participants with high levels of mobile phone dependence (β = 0.90; 95% CI = 0.36, 1.65). Conversely, there was not a significant indirect effect of shopping-unrelated mobile phone use on number of missed planned items through distraction for participants with low levels of mobile phone dependence (β = −0.06; 95% CI = −0.61, 0.48). Floodlight analysis Once again, we ran a floodlight analysis (sketched below) to further understand the proportion of consumers who may be at risk of deviating from their shopping plans. Results of the floodlight procedure revealed that participants scoring above an average value of 3.48 on the mobile dependence scale missed more planned items when engaging in shopping-unrelated mobile phone use compared to the control condition (all p's for values over 3.48 < .05). Additionally, 38% of the sample showed mobile phone dependence values above 3.48. Consistent with the results of Study 2, this result suggests that a significant number of consumers show levels of phone dependence putting them at risk of deviating from their shopping plans while using mobile phones in a shopping-unrelated manner. Discussion In line with prior results, the findings of Study 3 offer additional insight into the impact of shopping-unrelated mobile phone use on consumers' ability to accurately manage a shopping task. We again find evidence that mobile phone use unconnected to the shopping trip can interfere with consumers' ability to accurately complete their shopping trip. Furthermore, the results of Study 3 highlight the importance of distraction in driving these results. Interestingly, the results of this study suggest shoppers do not need to use their phones for the entire duration of the trip for their plan adherence to be negatively affected. Rather, it appears that intermittent mobile phone use can interfere with consumers' shopping and lead to issues in accurately carrying out in-store shopping plans. Given that participants were allowed time to read the push notification messages before continuing the shopping trip, the results of this experiment demonstrate that in-store mobile phone use may actually consume attentional resources after use. Therefore, even after putting away a cell phone or smartphone, consumers may be expending cognitive resources thinking about the content of their phone conversations, text messages, push notifications, or emails, which may negatively impact their shopping trip (Ward et al. 2017).
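The floodlight procedure used in Studies 2 and 3 amounts to a Johnson-Neyman scan over the dependence scale: fit the interaction model, then test the simple effect of condition at each moderator value. A minimal sketch under hypothetical variable names (x: 0 = control, 1 = shopping-unrelated use; w: dependence; y: missed items):

```python
# Sketch: Johnson-Neyman style floodlight over mobile phone dependence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("study3.csv")  # hypothetical
fit = smf.ols("y ~ x * w", data=df).fit()
b, cov = fit.params, fit.cov_params()
tcrit = stats.t.ppf(0.975, fit.df_resid)

# Scan moderator values; report the first w at which the condition effect
# turns significant (the lower bound of the significance region).
for w in np.linspace(df["w"].min(), df["w"].max(), 200):
    effect = b["x"] + b["x:w"] * w                       # simple effect at w
    se = np.sqrt(cov.loc["x", "x"] + 2 * w * cov.loc["x", "x:w"]
                 + w ** 2 * cov.loc["x:w", "x:w"])       # se of a linear combo
    if abs(effect / se) > tcrit:
        print(f"significant for dependence >= {w:.2f} (effect = {effect:.2f})")
        break
```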
Importantly, the results of Study 3 replicate those of Study 2 and demonstrate that mobile phone dependence plays an important role in shoppers' ability to accurately manage their shopping while using a mobile phone. Once again, it appears that consumers high in mobile phone dependence have a more difficult time managing multiple tasks. General discussion As mobile technologies continue to grow in popularity, it is critical that consumers and marketers understand the impact these technologies have on consumer behaviors. The main objective of our research was to investigate the role that shopping-unrelated mobile device use plays in influencing consumers' ability to accurately manage their shopping. To achieve this, we integrated work on resource theories of information processing, shopper marketing, and mobile phone dependence to explore some of the implications related to mobile phone use in store environments. This work is important because mobile phones are a fast-growing communications medium and consumers are becoming increasingly reliant on these technologies in their daily lives (Ericsson 2015). Additionally, understanding the role that mobile phones play in shaping consumers' ability to accurately manage their in-store shopping plans is critical for retailers, whose ultimate success is inextricably linked to in-store consumer behaviors. Prior research recognizes the considerable gap in our understanding of consumers' use of mobile technologies (Shankar et al. 2011; Shankar et al. 2016). A major contribution of our research is to identify important limitations associated with mobile technology use when executing a shopping plan. We illuminate the dangers of shopping-unrelated mobile device use, underscoring considerable issues associated with mobile phone distraction. Across three studies, we find evidence that shopping-unrelated phone use results in considerable distraction for consumers and, subsequently, poor adherence to their shopping plan. Furthermore, we illustrate that shopping-unrelated phone use is associated with more hedonic unplanned purchases. Methodologically, our research relies on multiple research methods to focus on the phenomenon of in-store consumer phone use. The application of a multi-method approach highlights consumers' use of mobile phones by providing data from both field and experimental settings. Theoretically, this research offers numerous interesting insights. In line with multiple resource theory (Wickens 1984), we find evidence that shopping-unrelated mobile phone use leads to considerable cognitive distraction, thus negatively impacting consumers' ability to accurately manage their shopping trip. To our knowledge, this is the first paper investigating outcomes tied to in-store mobile phone use. Additionally, we show that both continuous and intermittent shopping-unrelated mobile phone use can interfere with consumers' ability to accurately manage shopping plans. This critical finding contradicts prior work suggesting that moderate mobile phone use is harmless and adds to the nascent literature on carryover effects of mobile phone distraction (Isikman et al. 2016; Ward et al. 2017). Finally, we contribute to the literature on mobile phone dependence and provide evidence that mobile dependence impacts consumers' ability to manage their shopping while using a phone. Prior work on mobile phone dependence has highlighted individual characteristics associated with the development of phone dependence (Bianchi and Phillips 2005; Hong et al.
2012; Takao et al. 2009; Roberts et al. 2015) and specific health risks associated with mobile phone reliance (Thomée et al. 2011). However, to our knowledge, we are the first to establish that mobile phone dependence might actually influence individuals' ability to multitask while using a mobile phone, specifically impacting the accurate completion of a shopping trip. Managerial implications Our research offers significant implications for firms wishing to incorporate mobile phones into their consumer-based strategy (Hamilton 2016). We focus on shoppers' ability to accurately manage a shopping task and assess two managerially relevant variables: unplanned purchasing and missed planned items. Historically, unplanned purchasing has been an important topic for marketing scholars (Iyer 1989; Kollat and Willett 1967) and a central variable in recent research on shopper marketing and retailer decision making (Bell et al. 2011; Inman et al. 2009; Park et al. 1989a; Stilley et al. 2010a; Stilley et al. 2010b). Additionally, prior research has investigated the purchase and omission of planned items as critical for retailer success (Cobb and Hoyer 1986; Heilman et al. 2002; Park et al. 1989a, b; Stilley et al. 2010b). Assessing both variables, our results underscore the importance of how consumers use mobile phones and offer actionable insights. We find that using mobile phones in a shopping-unrelated manner makes it much more difficult for consumers to actively manage their shopping task. In our first study, we find that shopping-unrelated mobile phone use is associated with additional unplanned purchases that are more hedonic in nature. Our second and third studies demonstrate that shopping-unrelated mobile phone use makes it more difficult for consumers to accurately manage their planned shopping, thus resulting in more missed planned items. These findings suggest that the decision to encourage shopping-unrelated mobile phone use in store environments is not as straightforward as it appears. Therefore, we recommend that managers first consider the overall loyalty profiles of customers. See Table 6 for a breakdown of our major findings and the general impact this may have for retail outlets. Table 6 Shopper marketing implications. We believe that firms with a highly loyal customer base will find success in encouraging shopping-unrelated mobile device use in store. Shopping-unrelated phone use will lead shoppers to buy more unplanned items and mismanage more planned items than normal. For highly store-loyal shoppers, this result benefits retailers. Not only are retailers maximizing unplanned purchasing during the focal trip; if a shopper misses or forgets an item, this may necessitate a second shopping trip. An additional trip may therefore have positive implications for retailers' bottom lines. Conversely, our results suggest that firms with less loyal customers may need to rely on a situational analysis to determine the best course of action. Managers need to determine whether spending on additional unplanned items due to phone use outweighs losses from missed planned items due to phone use. If this is not the case, retailers may suffer losses when encouraging mobile use, since shoppers are likely to go elsewhere to purchase planned items mismanaged during the initial shopping trip. Our results clearly indicate that proximity marketing is a double-edged sword.
On the one hand, when the messaging is relevant to the shopper, Study 1 shows that shopping-unrelated phone use has a beneficial effect for the retailer. However, when the messaging is not relevant to the shopper (e.g., a blanket promotion for the floral department), Studies 1–3 show the potential downside: the uptick in unplanned purchasing may be more than offset by the increase in missed planned items, particularly for more mobile-dependent shoppers. To encourage shopping-unrelated mobile device use in stores, retailers have many options. First, managers can highlight the availability of Wi-Fi throughout the store and promote the shopping environment as "technology friendly." Similarly, retailers may entice consumers to use their mobile devices via subtle advertisements or signage reminding shoppers that it is smart to multitask or catch up on conversations. Second, marketers may incorporate unrelated messages and information into mobile shopping apps, including short news updates, weather alerts, or general shopping information. This may distract shoppers from the focal shopping trip and lead to deviations from their shopping plans. Along with highlighting the general distractive nature of in-store mobile phone use, our research also identifies the duration of mobile phone use (i.e., continuous vs. intermittent) and the modality of the distraction (i.e., audio vs. visual) as critical considerations in shoppers' ability to accurately manage their shopping plans. For marketers, we show that irregular mobile device use during a shopping trip can interfere with consumers' plans. Hence, prompting short-duration uses such as text messaging, checking emails, or listening to voice mails may divert shoppers from accurately completing their shopping. Furthermore, our findings suggest that different modalities of input can distract consumers and impact their shopping. While prior research contends that auditory perception relies on different processing resources than visual perception (Wickens 1984), our studies show that both modalities of shopping-unrelated mobile phone use may affect accurate adherence to shopping plans. For managers this means that many widely used visual mobile marketing tools such as SMS, mobile push notifications, and emails can be viable strategies to distract shoppers currently in store environments. Finally, our results reveal that shoppers most dependent upon these devices may be the most likely to deviate from their shopping plans while using mobile phones in a shopping-unrelated manner. Therefore, managers can target consumers highly dependent on mobile devices. Managers may heavily promote Wi-Fi availability and unrelated shopping applications to these consumers during shopping trips. This can be accomplished by tailoring emails, store circulars, and receipt alerts to shoppers highly dependent on their mobile phones. Furthermore, variables such as age and mobile usage data collected using store apps or geofencing technology (Michael 2016) can be used to identify shoppers who heavily rely on mobile phones. Consumer implications While our results provide direction for retailers, our findings are also crucial for consumers. Our preliminary analysis suggests that consumers tend to overlook or are unaware of the limitations associated with in-store shopping-unrelated mobile phone use. Consumers tend to view their phones as beneficial and discount the attentional limitations potentially imposed by the use of these devices.
Contrary to consumer lay beliefs, our results indicate that mobile phone use can have substantial negative repercussions when used in a shopping-unrelated manner. More importantly, our results illustrate that those who are most dependent on their phones are most susceptible to the distractive nature of shopping-unrelated phone use. Therefore, we hope that our research will influence consumers' attitudes toward mobile phones and persuade them to reflect on how these devices impact our lives, both positively and negatively. Despite the public's reliance on and praise for new mobile technologies supporting a hyper-connected lifestyle, it appears that there are deleterious outcomes associated with in-store mobile distraction. Limitations and future research The current research elucidates some of the benefits and limitations associated with in-store mobile phone use. While this work begins the investigation, there is considerable opportunity for future research. First, the distractive nature of shopping-unrelated phone use may be further examined. For example, future research can analyze consumers' behaviors using eye-tracking technology to help understand where and how long consumers focus on their mobile devices while in store environments. This may reveal how these devices impact cognitive and visual processing. Second, additional research is needed to understand the intricacies of each type of phone use. For example, are certain types of shopping-unrelated use more distracting than other types of phone use? While we demonstrated the limitations of shopping-unrelated use, further research is needed to better understand whether specific types of phone use may be sensitive to the shopping situation and consumer characteristics. Moreover, research is needed to further investigate shopping trips in which consumers use their mobile phones in both a shopping-unrelated and a shopping-related manner. The results of our first study indicate that this is a relatively common occurrence (157 shoppers engaged in both shopping-unrelated and shopping-related phone use in our dataset) and therefore merits additional study. For example, future research may evaluate whether shopping-related phone use neutralizes the limitations of shopping-unrelated phone use. Similarly, future inquiry can explore whether the order of the mobile phone use (shopping-related use followed by shopping-unrelated use, or shopping-unrelated use followed by shopping-related use) impacts consumer outcomes. Third, while we found evidence that consumers higher in mobile phone dependence have a more difficult time accurately managing their shopping plans when engaging in shopping-unrelated phone use, there is some literature that suggests alternative outcomes. For example, prior work proposes that those who frequently rely on mobile phones might find multitasking more automatic given that they regularly engage in multitasking behaviors involving their phone (Bardhi et al. 2010). Additionally, some work in cognitive psychology suggests that those who regularly engage in media multitasking behaviors perform better when managing multiple tasks (Alzahabi and Becker 2013; Lui and Wong 2012). Therefore, this warrants additional examination of situations and environments where mobile phone dependence and shopping-unrelated phone use may not negatively impact consumers' shopping. Fourth, further research may study the types of consumers who are more or less likely to use their mobile phones while making decisions.
We have identified consumers who are highly dependent upon mobile devices as one particular group. However, a comprehensive understanding of demographic and personality characteristics tied to mobile phone use can help marketers better integrate mobile technologies into their interactions with consumers. For example, evaluating when and why consumers reach for mobile phones may help marketers target consumers open to digital interactions, design more efficient shopping applications, and manipulate the manner in which consumers use mobile phones. Finally, additional research is required on the potential impact of shopping-unrelated mobile device use on in-store stimuli recall. While we have demonstrated that shopping-unrelated mobile phone use is associated with an increase in unplanned purchases, mobile phone use may also alter consumers' attention to in-store promotions and signage. Ultimately, this may limit retailers' ability to communicate with consumers. Therefore, future research should further evaluate how shopping-unrelated mobile device use impacts consumers' explicit memories of external stimuli compared to those not using mobile phones. Notes 1. All means are compared to the scale midpoint (4 out of 7). Forgot more (M = 2.85, t(53) = −5.29, p < .001); more unplanned purchases (M = 2.41, t(53) = −10.73, p < .001); lacked mental resources to focus on shopping (M = 2.03, t(53) = −13.80, p < .001); no significant drawbacks to using phone (M = 5.33, t(53) = 8.65, p < .001). 2. While we included the Related and Both phone use categories in our models to account for all of the ways shoppers used their mobile phones during the shopping trip, our main focus in this study is on those shoppers using their phones in a shopping-unrelated manner.
In-store mobile phone use that is unrelated to shopping may be associated with an increase in unplanned purchases, according to a study published in the Journal of the Academy of Marketing Science. Dr. Michael Sciandra at Fairfield University, US, and colleagues investigated the impact of mobile phone use on in-store shopping behaviour. They found that those who used mobile phones in store for purposes unrelated to shopping, such as making phone calls, sending text messages, checking emails or listening to music, were more likely to make unplanned purchases and forget items they had planned to buy. The authors observed this effect even when phones were only used for part of the shopping trip, suggesting that in-store mobile phone use may consume attentional resources even after the phone is put away. Dr. Michael Sciandra, corresponding author of the study, said: "Our finding that phone use that is unrelated to shopping negatively affects shopping behaviour was in stark contrast to beliefs held by consumers. The vast majority of shoppers we asked thought that mobile phones did not have any negative effect." The researchers asked 231 participants to complete a simulated shopping task. While the participants either refrained from using their phone, or used it for an unrelated task either constantly (a simulated phone call) or intermittently, they were shown a first-person-perspective video of someone grocery shopping. The participants were given a shopping list of items and were asked to compare the list to the products the person in the video placed in the cart, or picked up and put down. The participants' mobile phone dependence was assessed via self-report. The authors found that consumers who are highly dependent upon mobile phones, characterized by excessive use of and reliance on the device, were the most at risk of deviating from a shopping plan while engaging in shopping-unrelated mobile phone use. Dr. Sciandra said: "Mobile phones are quickly becoming the principal distractor for many consumers and they offer a unique form of interruption. Our findings may influence consumers' attitudes towards mobile phone use while shopping and persuade them to reflect on how these devices impact our lives, both positively and negatively." The authors note that part of the study is based on a simulated shopping experience only; therefore, caution should be taken when applying these conclusions to real-life settings.
doi.org/10.1007/s11747-019-00647-9
Biology
Managing antibiotics not enough to reverse resistance
Allison J. Lopatkin et al, Persistence and reversal of plasmid-mediated antibiotic resistance, Nature Communications (2017). DOI: 10.1038/s41467-017-01532-1 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-01532-1
https://phys.org/news/2017-11-antibiotics-reverse-resistance.html
Abstract In the absence of antibiotic-mediated selection, sensitive bacteria are expected to displace their resistant counterparts if resistance genes are costly. However, many resistance genes persist for long periods in the absence of antibiotics. Horizontal gene transfer (primarily conjugation) could explain this persistence, but it has been suggested that very high conjugation rates would be required. Here, we show that common conjugal plasmids, even when costly, are indeed transferred at sufficiently high rates to be maintained in the absence of antibiotics in Escherichia coli. The notion is applicable to nine plasmids from six major incompatibility groups and mixed populations carrying multiple plasmids. These results suggest that reducing antibiotic use alone is likely insufficient for reversing resistance. Therefore, combining conjugation inhibition and promoting plasmid loss would be an effective strategy to limit conjugation-assisted persistence of antibiotic resistance. Introduction Eliminating antibiotic use is an appealing strategy to promote resistance reversal, or the elimination of resistant bacteria by displacing them with their sensitive counterparts 1,2,3 (Fig. 1a). Indeed, resistance genes often carry a fitness cost, giving the sensitive strains a growth advantage 4,5,6. In the absence of selection for antibiotic resistance, competition between the two populations would presumably eliminate the resistant strain over time 5,7. However, despite its conceptual simplicity, this approach has been largely unsuccessful 8,9,10. Several factors can enable the persistence of resistance in the absence of selection. For instance, co-selection could propagate genetically linked resistance genes 11,12. Also, compensatory evolution ameliorating fitness cost can reduce plasmid burden 4,13,14. Fig. 1 Conditions for plasmid persistence and elimination. (a) The concept of resistance reversal. A population initially consists of a mixture of sensitive (blue) and resistant (orange with plasmid) cells. In the presence of antibiotics (± indicates presence or absence of antibiotic concentration [A]), resistant cells are selected for. In the absence of antibiotics, as long as the plasmid imposes a fitness cost, then over a sufficiently long time the resistant cells will presumably be outcompeted, effectively reversing resistance. (b) Modeling plasmid dynamics in a single species (S). The plasmid-free population, S0, acquires the plasmid through conjugation at a rate constant ηC, becoming S1. S1 reverts to S0 through plasmid loss at a rate constant κ. S0 grows at a rate proportional to that of S1 (μ1 = μ, μ0 = αμ). The plasmid is costly when α > 1 and beneficial when α < 1. Both populations turn over at a constant dilution rate D. (c) Simulated fraction of S1 as a function of α and ηC after 5000 time units (~200 days). Fast conjugation can compensate for plasmid loss even if the plasmid carries a cost (α > 1). A greater ηC is required to maintain the plasmid population as α increases. (d) Criterion for plasmid persistence. If ηC > ηCrit = α(κ + D) − D, the plasmid will dominate (Eq. (1)). Horizontal gene transfer (HGT) of plasmids, primarily through conjugation, has also been proposed as a mechanism for plasmid persistence 6,10,15.
Theoretical analysis suggests that a sufficiently fast transfer rate can compensate for fitness cost and plasmid loss 16 , 17 , 18 , although the extent to which conjugation-mediated maintenance of costly plasmids occurs in nature has been debated 16 , 17 , 19 , 20 , 21 , 22 . For example, it has been suggested that transfer efficiencies required to overcome reasonable estimates of fitness cost and plasmid loss are too high to be biologically realistic 19 , 20 , 21 , 22 . Also, the persistence of purely parasitic genetic elements is evolutionarily paradoxical. Overall, conjugation alone is usually not considered to be a dominant mechanism for maintaining plasmids 23 , 24 , 25 , although this is not always the case 26 , 27 . The fate of a plasmid is largely driven by the relative magnitude of its fitness cost and segregation error rate compared with that of its conjugation efficiency. Indeed, studies investigating conjugal plasmid dynamics in the absence of selection attribute plasmid persistence to fast conjugation rate, low/no fitness cost, or both 22 , 28 , 29 , 30 , 31 . Similarly, plasmid elimination is attributed to slow conjugation and/or high growth burden 22 , 32 , 33 . Different outcomes likely depend on underlying parameter differences between experimental systems due to different plasmids, conjugation machinery, and mating procedures, among others. Accurate quantification of all three processes should provide a general framework to reconcile diverse outcomes and establish the role of conjugation in promoting plasmid persistence. However, confounding measurements of conjugation and growth dynamics have prevented general conclusions 34 , 35 . For instance, a high segregation error rate could be obscured by a fast conjugation efficiency 36 . Also, parameters depend on a range of variables including the host strain 37 and growth conditions 32 , which complicates data interpretation. Past studies did not provide precise estimates for all three processes. In some cases, this is because reasonable parameter estimates from available data appear sufficient 22 , which is appropriate so long as the estimates are obtained from a relevant experimental framework. In others, experimental complexities prevent accurate quantification within the relevant context (e.g., in vitro growth estimates to evaluate bacterial dynamics in the mouse gut) 29 , 32 . Even when all parameters are measured, confounding factors were often not accounted for in the data interpretation. For example, in some cases quantification of the conjugation efficiency was carried out without eliminating the contribution from selection dynamics 32 , 33 . As a result, the extent to which conjugation contributes to plasmid maintenance remains inconclusive 6 , 23 , 24 , 25 , 38 , 39 . Lack of basic understanding is prohibitive when evaluating the generality of plasmid fate, such as in natural microbial communities. Typically, microbial consortia consist of multiple interacting populations, connected through a complex network of HGT 6 , 40 . High transfer rates in relevant environments (e.g., the gut) have implications in forming communal gene pools where local bacteria can share a wide range (and continuously growing pot) of beneficial traits 26 . Disrupting such networks has been recognized as a potential intervention strategy to restore antibiotic susceptibility 41 . 
Here, we show that common conjugal plasmids covering major incompatibility groups and a range of fitness burdens can be maintained via conjugation, and a simple stability criterion predicts this plasmid persistence. This is true for microbial communities of increasing complexity (e.g., multiple plasmids/strains). Using this framework, we show that reversal or suppression of conjugation-mediated resistance spread is possible by targeting parameters critical for plasmid maintenance. Results A criterion for plasmid maintenance via conjugation We used a simple kinetic model to investigate the extent to which HGT contributes to plasmid maintenance. The model describes one population (S) that either carries the plasmid (S1) or is plasmid free (S0) (Fig. 1b, Supplementary Methods, Supplementary Eqs. (3)–(4)). In particular, we determined the conditions where the plasmid-carrying population is dominant to provide a conservative estimate for a critical conjugation efficiency. For the limiting case where the rate of plasmid loss is relatively small (see Supplementary Methods, "Deriving a stability criterion"), we derived a critical conjugation efficiency (ηCrit, Eq. (1)), which approximates an upper bound for dominant plasmid persistence:

ηCrit = α(κ + D) − D,  (1)

where α indicates the relative cost (α > 1) or benefit (α < 1) of the plasmid, κ is the rate constant of plasmid loss, and D is the dilution rate of the two populations. We use the term "species" to differentiate between any two populations with a uniquely defined ηCrit, which minimally requires different bacterial clones with genetically distinct backgrounds (e.g., strains or taxonomically diverse species). According to Eq. (1), a plasmid will be maintained as long as the conjugation efficiency is sufficiently fast compared with the rate of plasmid loss and the fitness burden (Fig. 1c, d). A sufficiently fast conjugation efficiency is necessary for the plasmid-carrying population to be dominant (S1 > S0, Fig. 1c), even when a plasmid is slightly beneficial (α slightly < 1). Our criterion is similar to that derived by Stewart and Levin 17, but is more stringent in that it further requires dominance of the plasmid-carrying population. It also avoids experimental challenges associated with decoupling plasmid loss measurements from fitness cost 42. Experimentally, observed plasmid loss (κobs) can be determined by measuring the time constant of decay for a non-transferrable plasmid, which represents a combined effect of the true κ and α (Supplementary Fig. 1A) 43. Indeed, analysis shows that κobs ≈ κ for plasmids with minimal fitness effects (α ≈ 1). Since these two parameters are challenging to decouple, our criterion lumps the effects of α and κ together. To determine κobs, we chose to use a low-cost plasmid (α = 1.02) to minimize the confounding effects of cost. Based on our experimentally determined parameters, analysis shows the standard error associated with fitting the plasmid loss rate (≈0.0022) is greater than the difference between κobs and κ (see Supplementary Methods, "Plasmid loss calculations", for the complete derivation). Conjugation-assisted persistence in a synthetic system To test whether the conjugation efficiency for common conjugal plasmids is sufficiently fast to compensate for cost, we first adopted a synthetic conjugation system derived from the F plasmid.
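Equation (1) can be checked numerically with a minimal simulation of the Fig. 1b model. The sketch below assumes logistic growth with the plasmid-bearing growth rate normalized to μ = 1 h−1 and densities scaled to the carrying capacity; the parameter values (α = 1.02, κ = 0.001 h−1, D ≈ 0.05 h−1) anticipate the measurements reported in the next subsection, and the exact model form is our reading of the schematic, not the authors' supplementary equations.

```python
# Sketch: two-population model of Fig. 1b, testing the criterion of Eq. (1).
import numpy as np
from scipy.integrate import solve_ivp

alpha, kappa, D, mu = 1.02, 0.001, 0.05, 1.0  # cost, loss, dilution, growth

def model(t, y, eta):
    s0, s1 = y                      # plasmid-free, plasmid-bearing densities
    g = 1.0 - (s0 + s1)             # logistic saturation (scaled to capacity)
    ds0 = alpha * mu * s0 * g - eta * s0 * s1 + kappa * s1 - D * s0
    ds1 = mu * s1 * g + eta * s0 * s1 - kappa * s1 - D * s1
    return [ds0, ds1]

eta_crit = alpha * (kappa + D) - D  # Eq. (1): ~0.002 per hour
for eta in (0.0005, 0.01):          # below vs. above the critical efficiency
    sol = solve_ivp(model, (0, 5000), [0.5, 0.5], args=(eta,), max_step=1.0)
    s0, s1 = sol.y[:, -1]
    print(f"eta = {eta} (eta_crit = {eta_crit:.4f}): "
          f"final plasmid fraction = {s1 / (s0 + s1):.2f}")
```

With η below ηCrit the plasmid-bearing fraction decays toward zero, while above it the plasmid-bearing population comes to dominate, mirroring Fig. 1c, d.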
In this system, the conjugation machinery is encoded on a helper plasmid FHR, which is not self-transmissible 44. A second plasmid can be mobilized through conjugation when it carries the F origin of transfer sequence oriT. Here, we use a mobilizable plasmid denoted K, which expresses YFP under the control of the strong constitutive promoter PR, carries a kanamycin (Kan) resistance gene (kanR) 44, and carries oriT. To quantify the effects of conjugation, we implemented a plasmid identical to K except that it does not carry oriT (K−), and therefore cannot be transferred by conjugation. The synthetic system was introduced into an engineered derivative of Escherichia coli MG1655 expressing a constitutive blue fluorescent protein (BFP) chromosomally and carbenicillin (Carb) resistance (AmpR) 45, denoted B (Fig. 2a). The plasmid-carrying populations (BK, BK−) can be distinguished from the plasmid-free population (B0) by selective plating (using Carb+Kan) or flow cytometry (using YFP) (Supplementary Fig. 2A). This notation will be used to describe all species and plasmid combinations throughout the text (see Supplementary Methods, "Nomenclature"). This system provides a clean experimental configuration to elucidate the contribution of conjugation to plasmid maintenance. K enables more precise parameter estimates compared to natural self-transmissible plasmids. Native plasmids often encode additional functions that complicate measurements, such as addiction modules that can result in post-segregational killing of daughter cells 46. Importantly, without a non-transmissible control plasmid, it is difficult to decouple the effects of HGT from other processes. Instead, plasmid loss and fitness burden can be precisely quantified using K−, which eliminates the confounding influence of conjugation. Since oriT did not significantly affect the burden of K compared to K− (Supplementary Fig. 1B, P > 0.5, two-sided t-test), differences that arise in the overall dynamics can be attributed to conjugation. From our measurements of K, we expect conjugation to be fast enough to enable maintenance in the absence of antibiotic selection. In particular, given our measurements of κ = 0.001 h−1 (Supplementary Fig. 1A) and α = 1.02 (Supplementary Fig. 1C), and assuming D ≈ 0.05 h−1, we estimate the critical efficiency ηCrit = 0.002 h−1 to be well below the estimate of the conjugation efficiency from exponential-phase growth, ηC ≈ 0.01 h−1 34 (see Methods, "Estimating ηCrit"). Indeed, cell physiology can drastically change the conjugation efficiency 34, as this value is almost four orders of magnitude greater than the efficiency measured from cells harvested from stationary phase (Supplementary Table 3). To test conjugation-mediated plasmid maintenance, we mixed BK and B0 in equal fractions and cultured them together. A strong dilution (10,000×) was performed every 24 h to maintain growth. Different concentrations of Kan (0, 0.5, and 2 μg/mL) were used to vary α (1.02, 0.97, and 0.42, respectively) (Supplementary Fig. 1C). Every few days, we quantified the fractions of plasmid-bearing cells (expressing BFP and YFP) and plasmid-free cells (expressing BFP only) using flow cytometry (see Supplementary Fig. 2A and the Methods section "Flow cytometry calibration" for calibration details). In the absence of conjugation, when the plasmid carries a cost (e.g., Kan = 0 and α > 1), the plasmid-bearing population was eliminated after 2 weeks (Fig.
2b, left modeling and right experiment). Thus, for a non-transferrable costly plasmid, eliminating antibiotics results in resistance reversal. If the plasmid was sufficiently beneficial, the plasmid-bearing population could coexist with the plasmid-free population (Fig. 2b, left modeling and right experiment). The fraction of plasmid-bearing cells depended on the relative magnitude of the growth advantage compared with plasmid loss. In contrast, if the plasmid is transferrable through conjugation, even when the plasmid carries a cost, plasmid-bearing cells dominate the population in a short period of time (Fig. 2c, left modeling and right experiment). Intuitively, decreasing cost or increasing benefit (i.e., decreasing α) facilitates conjugation-assisted persistence, and therefore plasmid stability occurs on a faster timescale (Fig. 2c). Once the plasmid benefit is sufficiently high (Fig. 2c), the plasmid persists regardless of whether or not it can conjugate, indicating that conjugation is no longer required to maintain resistance. Fig. 2 Conjugation-assisted persistence of costly plasmids. For all modeling and experimental results, the x axis is days and the y axis is the fraction of cells. (a) Engineered conjugation. The background strain, B, expresses BFP and AmpR constitutively 45. B carries the helper plasmid FHR (B0), which is non-self-transmissible but can mobilize plasmids in trans. The mobile plasmid K carries the transfer origin (oriT), a kanamycin resistance gene (KanR), and yfp under the control of the strong constitutive promoter PR 44. When B carries K, it is denoted BK. K without transferability (i.e., without oriT) is denoted K−, and when carried by B, BK−. (b) Long-term dynamics without conjugation. Blue represents plasmid-free and orange plasmid-carrying cells. Shaded lines indicate different initial conditions generated by a strong dilution experimentally (~80 cells/well, 16 wells), or randomly chosen from a uniform distribution (total initial density maintained at 1 × 10−6, 20 replicates). Bold lines are the average across all initial conditions of the corresponding color. Modeling (left): i–iii is α = 1.02, 0.97, and 0.42, respectively, estimated from experimental measurements (Supplementary Fig. 1C). Experiment (right): i–iii is Kan = 0, 0.5, and 2 μg/mL. Quantification is performed using flow cytometry, where the orange lines are cells expressing both BFP and YFP (BK−), and the blue lines are cells expressing BFP only (B0). (c) Long-term dynamics with conjugation. Experiments were done identically to (b), with BK instead of BK−. Without antibiotics, the plasmid-carrying population dominated despite the plasmid cost, exhibiting conjugation-assisted persistence. All modeling parameters are identical except for ηC = 0.025 h−1. (d) Nine conjugation plasmids carried by species R (except C with B0, which behaves similarly, Supplementary Fig. 3D) exhibit conjugation-assisted persistence. R0 was mixed in equal fraction with RP (P for plasmid generality) and diluted 10,000× daily. CFU from four-to-six double-selection plates were divided by the total number of colonies averaged across four-to-six Cm plates for quantification. Experiments were repeated at least twice. Error bars represent the standard deviation of the four-to-six measurements.
The plasmids used are (i) #168, (ii) #193, (iii) R388, (iv) C, (v) #41, (vi) RP4, (vii) K, (viii) PCU1, and (ix) R6K (see Supplementary Tables 1 and 3 ). Further analysis suggests that compensatory mutations, even at a high mutation rate, did not contribute significantly to the overall dynamics (Supplementary Fig. 2B ). We note that our model assumes a constant dilution rate constant ( D ), which represents an approximation of the discrete, periodic dilutions in our experiments. Simulations using a model implementing discrete dilutions generated qualitatively the same results (Supplementary Fig. 1D ). Finally, we introduced noise in the conjugation rate for each set of initial conditions such that \({\eta_{\rm C}}\) can vary a small amount from the basal value, within 10% of the mean, consistent with clonal variability 34 . This variability does not change the qualitative results (Supplementary Fig. 2C ). Conjugation-assisted persistence for diverse plasmids We previously demonstrated that the conjugation efficiency of the synthetic system is comparable to that of natural F plasmids and several other natural conjugation plasmids 34 . Therefore, we expect these plasmids to also exhibit conjugation-assisted persistence. To this end, we quantified the dynamics of eight additional conjugative plasmids, covering six incompatibility groups (incF, incN, incI, incX, incW, and incP) that encompass >70% of the most common large plasmids isolated from Enterobacteriaceae (335 plasmids that are >20 kb from GenBank) 47 . These plasmids cover a wide range of conjugation efficiencies and fitness effects (Supplementary Fig. 3A, B ) and include three clinically isolated conjugative plasmids encoding extended-spectrum β-lactamases (ESBLs). ESBL-producing pathogens are notorious for plasmid-mediated conjugation 48 , 49 , 50 and are of paramount global health concern 51 , 52 . We transferred each individual plasmid into a common background strain ( E. coli MG1655 with chromosomally integrated dTomato and chloramphenicol (Cm) resistance (Cm R ), denoted R), and quantified the relevant parameters to estimate \({\eta _{\rm Crit}}\) ; the plasmid C was quantified with background strain B since both plasmid C and strain R express Cm R , and B behaves qualitatively similarly to R (Supplementary Fig. 3D ). Our estimates suggest a high likelihood for persistence \(\left( {{\mathrm{\Delta }}n = {\eta _{\rm C}} - {\eta _{\rm Crit}} >0} \right)\) for each of the nine plasmids (Supplementary Fig. 3C , including R K as a control), either because they are sufficiently beneficial and/or transferred fast enough. To test this, we implemented the same competition experiments as previously described, and quantified the fraction of plasmid-bearing cells using colony-forming units (CFU) on double-antibiotic plates (see Methods). Daily dilutions were performed for 14–20 days. Indeed, each plasmid persisted throughout the duration of the experiment (Fig. 2d ). The maintenance or dominance of several plasmids (#168, #193, RP4, R6K, and R388) was likely due to them being neutral or slightly beneficial (Supplementary Fig. 3C ), in addition to their fast transfer. In contrast, PCU1 was maintained despite its very high cost (estimated α ≈ 3, Fig. 2d ; see Supplementary Fig. 3E for log scale). Conjugation-assisted persistence in consortia of greater complexity Natural environments are typically far more complex, consisting of diverse species interconnected through an intricate web of gene exchange 6 , 53 .
Such networks can serve as reservoirs for antibiotic resistance in so-called HGT “hot spots”, enabling the dissemination of resistance to various pathogens or commensal microbes 54 , 55 , 56 . Therefore, we wondered whether conjugation-assisted persistence could occur in a multi-species community. This question had not previously been explored conclusively. Modeling suggests that, as long as the stability criterion is met, a single plasmid can be maintained via conjugation regardless of the number of species present (Fig. 3a , Supplementary Methods, Supplementary Eqs. (7)–(10)). To test this, we introduced a second E. coli strain R with or without oriT (R K or R K− ) (Supplementary Fig. 4A ). The total plasmid content is quantified as the sum of all plasmid-bearing species (R K + B K ). Consistent with our predictions, the results demonstrate that conjugation enables plasmid persistence compared to the non-conjugating control (Fig. 3a ). Fig. 3 Conjugation-assisted persistence with multiple species and/or plasmids. a – c The x -axis is days and the y -axis is fraction of cells. Bold and shaded lines represent the average across, or individual, initial conditions, respectively. Color indicates plasmid-free cells ( S 0 ) in blue and plasmid-carrying cells ( S 1 ) in orange (K) or red (C). a Two-species, one-plasmid community. Left two panels: no conjugation; right two panels: with conjugation. S 0 = B 0 + R 0 and S 1 = B K + R K . Modeling: From bottom (i) to top (iii), α 1 = α = 1.02, 0.97, and 0.42, respectively, and α 2 = 1.03, 1.02, and 0.9 (see Supplementary Eqs. (7)–(10), Supplementary Fig. 4A ). Experiment from bottom (i) to top (iii): Kan = 0, 0.5, and 2 μg/mL, respectively. b Higher-cost plasmid dynamics. Modeling (left column): From bottom (i) to top (iii), α = 1.13, 1.03, and 0.3, respectively (see Supplementary Eqs. (3)–(4), Supplementary Fig. 4B ). Experiment (right column): From bottom (i) to top (iii): Cm = 0, 0.5, and 2 μg/mL. c One-species, two-plasmid community. Each row represents a different combination of α , modulated with no antibiotic (i), Kan (ii–iii), or Cm (iv–v). The species can carry two ( S 11 ), one ( S 10 , S 01 ), or no plasmids ( S 00 ). Modeling (first and third columns): From bottom (i) to top (v), α 3 = 1.3, 1.2, 0.42, 1.01, and 0.35 (see Supplementary Eqs. (11)–(14), Supplementary Fig. 4B ). Experiment (second and fourth columns, such that S 1 = B K + B CK or S 1 = B C + B CK for K or C, respectively). B C is mixed equally with B K . d , e Three-species, three-plasmid community. Species (R, Y, and B) are uniquely fluorescent (expressing dTomato, YFP, or BFP, respectively) and plasmids (R6K, RP4, and R388; diamond, square, and circle markers, respectively) have distinct resistance markers (Strp R , Kan R , and Tm R , respectively). Shading color corresponds to the respective population fraction (left y -axis), and markers indicate the fraction of each plasmid (right y -axis). The initial experimental composition consists of R 0 , R R6K , Y 0 , Y R388 , B 0 , and B RP4 . Modeling (left): Randomized initial conditions such that the total plasmid-free population is maintained at 1 × 10 −4 , and plasmid populations arbitrarily chosen between 1 × 10 −5 and 1 × 10 −6 , consistent with data (Supplementary Table 2 for parameter estimates).
Experiment (right): Error bars indicate averaging across four-to-six plate replicates; experiments were repeated five times. Moreover, modeling predicts conjugation-assisted persistence to occur for a single species carrying multiple conjugation plasmids. This is contingent on the plasmids’ ability to exist independently of each other (e.g., distinct incompatibility groups to ensure compatible replication machinery, and the absence of surface exclusion that prevents entry of one of the plasmids), and on the other relevant plasmid parameters (i.e., \({\eta _{\rm C}}\) , α , or κ ) not being drastically altered by the presence of another plasmid (Supplementary Methods, Supplementary Eqs. (11)–(14)). To test this idea, we implemented a bi-directionally conjugating population by mixing B carrying either K or another mobilizable plasmid C. C expresses mCherry and Cm R , and is compatible with K (p15A and pSC101 replication origins, respectively). When tested independently, since C has a greater cost than K (Supplementary Fig. 3B ), conjugation required a longer timescale to overcome competition and stably persist (Fig. 3b ). When combined, the dynamics of each individual plasmid were identical to the corresponding single-plasmid population dynamics, regardless of how we modulated α (Fig. 3c , no antibiotic, Kan, and Cm; see Supplementary Fig. 4A, B for α estimates). These results suggest that, despite the apparent complexity, plasmid fate in a community consisting of multiple ( n ) species and ( p ) plasmids (leading to \(n \times 2^p\) populations, since each species can carry any subset of the plasmids) can be inferred from the individual plasmid dynamics, if these plasmids do not interfere with each other. The fate of each plasmid is governed by the criterion for conjugation-assisted persistence (e.g., \({\eta _{\rm C}} >{\eta _{\rm Crit}}\) ), evaluated for each host–plasmid pair. If \({\eta _{\rm C}} >{\eta _{\rm Crit}}\) for at least one such pair, the plasmid will persist, provided the particular host(s) can coexist within the population long term (which is largely driven by fitness). Importantly, the coexisting species must acquire the plasmid, either in the initial population structure or via conjugation. This creates an initial barrier for the plasmid to establish itself due to competition, resulting in a dependence of plasmid fate on the initial composition (see Supplementary Methods, “Three-species three-plasmids model”). To test this, we constructed a community consisting of three species ( E. coli strains denoted B 0 , R 0 , and Y 0 ) transferring three mutually compatible plasmids (Supplementary Fig. 5A , Supplementary Methods, Supplementary Eq. (16), Supplementary Table 4 ). Each plasmid was initiated in a single species (RP4, R6K, and R388, denoted 1, 2, and 3 in Fig. 3d , Supplementary Fig. 5A ). These three plasmids were chosen in particular since they belong to distinct incompatibility groups (P, X, and W, respectively) and are distinguishable using antibiotic selection (Kan, streptomycin (Strep), and trimethoprim (Tm), respectively; Supplementary Table 3 ). Since all species express chromosomal Cm R , we used selective plating to determine the plasmid fraction and flow cytometry to determine the species composition. In this scenario, although the plasmids individually appear beneficial to their own host ( α < 1), they exhibit a cost compared to another species/plasmid pair (e.g., compare B RP4 to R R388 , Supplementary Fig. 5B ). Based on our previous estimates, we predict persistence for all three plasmids in this community.
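To make this per-pair criterion concrete, the sketch below (not the authors' code) evaluates Δ n = η C − η Crit for a host–plasmid pair. The functional form η Crit = κ + D ( α − 1) is an assumption, chosen because it reproduces the quoted estimate of η Crit = 0.002 h −1 for plasmid K (κ = 0.001 h −1 , α = 1.02, D ≈ 0.05 h −1 ); the authoritative expression is Eq. ( 1 ) of the paper.

```python
# Minimal sketch (not the authors' code): evaluating the persistence criterion
# Delta_n = eta_C - eta_Crit for a host-plasmid pair. The form
#   eta_Crit = kappa + D * (alpha - 1)
# is an illustrative assumption that reproduces the quoted estimate of
# 0.002 per hour for plasmid K; see Eq. (1) for the authoritative criterion.

def eta_crit(kappa, alpha, D=0.05):
    """Critical conjugation efficiency (1/h) for one host-plasmid pair."""
    return kappa + D * (alpha - 1.0)

def persists(eta_c, kappa, alpha, D=0.05):
    """In a community, a plasmid persists if Delta_n > 0 for at least one
    host-plasmid pair whose host can stably coexist in the population."""
    return eta_c - eta_crit(kappa, alpha, D) > 0

# Plasmid K in host B, using the measured parameters:
print(round(eta_crit(kappa=0.001, alpha=1.02), 6))      # -> 0.002
print(persists(eta_c=0.01, kappa=0.001, alpha=1.02))    # -> True
```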
Indeed, the results demonstrate that each individual plasmid exhibited persistence throughout the duration of the experiment, for up to 2 weeks (Fig. 3e ). In the absence of conjugation (R 0 , B 0 , and Y 0 only), competition between the three species favors the fittest population (Y 0 ), suppressing the growth of both R 0 and B 0 (Supplementary Fig. 5B ). Reversing conjugation-assisted persistence of resistance Our results demonstrate that diverse conjugal plasmids are indeed transferred fast enough to enable plasmid persistence. According to the existence criterion (Eq. ( 1 )), however, resistance reversal can be achieved by inhibiting conjugation, promoting the rate of plasmid loss, or both (Fig. 4a ). The efficacy of this strategy depends on how much \({\eta _{\rm C}}\) exceeds \({\eta _{\rm Crit}}\) . If \({\eta _{\rm C}}\) is only slightly greater than \({\eta _{\rm Crit}}\) , inhibiting conjugation alone might be sufficient to reverse resistance. If inhibition alone is incomplete, however, promoting κ may act in synergy to destabilize the plasmid. We first tested this inhibition strategy on the engineered conjugation system by using linoleic acid 57 (Lin) to inhibit conjugation (Fig. 4b , left panel) and phenothiazine (Pheno) to enhance the plasmid segregation error 58 , 59 (Fig. 4b , right panel). Both compounds had been identified in the literature for these specific properties. Importantly, at the concentrations we used, neither compound affected the bacterial growth rate (Supplementary Fig. 6A ). Indeed, Lin alone was sufficient to destabilize a plasmid with low conjugation efficiency (Fig. 4c , plasmid K). For a plasmid with a greater \({\eta _{\rm C}}\) (e.g., plasmid #41) or one that conferred a benefit, Lin alone was insufficient, and the synergistic combination of Lin with Pheno was critical to reverse resistance (Fig. 4d ). We note that Pheno alone did not affect the conjugation efficiency (Supplementary Fig. 6B ). Fig. 4 Reversing resistance due to conjugation-assisted persistence. a Combining inhibition of conjugation and promotion of plasmid loss to reverse resistance. This strategy is expected to increase \({\eta _{\rm Crit}}\) and decrease \({\eta _{\rm C}}\) , potentially destabilizing the plasmid (Eq. ( 1 )). b Evaluating the conjugation inhibitor linoleic acid (Lin) and the plasmid loss rate promoter phenothiazine (Pheno). Left: B K and R 0 were grown overnight with or without 3.25 mM Lin to quantify conjugation efficiency (see Methods). Right: B K− was propagated daily in the presence of 50 μg/mL Kan, nothing, or 120 μM Pheno. Kan was used as a control. The y -axis is the fraction of B K− without antibiotic normalized by that treated with Kan, quantified via flow cytometry. Pheno significantly increased the rate of plasmid loss, by ~four-fold (see Supplementary Fig. 6B , right panel). c , d Inhibition of R K and R 41 . R 0 and R K or R 41 were mixed in equal fractions and diluted 10,000× daily for 11 days. The y -axis is fraction of plasmid-carrying cells and the x -axis is days. Green shading indicates the treatments from dark to light: control, Pheno, Lin, and combined. Both plasmids were successfully reversed; when Lin was sufficient alone, Pheno had minimal effect (plasmid K). If Lin alone was insufficient, Lin with Pheno synergistically destabilized the plasmid. e Combination treatment with Lin and Pheno suppressed or reversed resistance. The same strains and protocol were used as in Fig. 2d , except that the media was supplemented with 3.25 mM Lin and 120 μM Pheno fresh daily (see Methods).
The plasmids used are (i) #168, (ii) #193, (iii) R388, (iv) C, (v) #41, (vi) RP4, (vii) K, (viii) PCU1, and (ix) R6K (see Supplementary Tables 1 and 3 ). All CFU measurements were done in replicates of four-to-six plates, and repeated at least twice for reproducibility. All flow measurements were propagated with at least eight well replicates and repeated at least twice for reproducibility. Error bars represent the standard deviation of the plate or well replicates. We found that Lin reduced the conjugation efficiency for most of the native plasmids by three-fold (Supplementary Fig. 6C ), and even by 50-fold in one (see Supplementary Table 3 for all fold changes). Adjusting the predicted values for this decrease in \({\eta _{\rm C}}\) , maintaining the same cost (Supplementary Fig. 6D ), and assuming a four-fold, Pheno-enhanced increase in the plasmid loss rate (Supplementary Fig. 6B , right), which raises \({\eta _{\rm Crit}}\) , our criterion predicts that conjugation-assisted persistence would be significantly reduced for most plasmids (Supplementary Fig. 6E , Supplementary Table 3 ). Indeed, a combination of Lin and Pheno led to >99% elimination of plasmids where Δ n < 0 (Fig. 4e , plasmids K, #41, #168, and PCU1; Supplementary Fig. 6F for log-scale PCU1 as a comparison to Supplementary Fig. 3E ). If Δ n is close to but slightly greater than 0, the plasmid still persisted but with a reduced infectivity (Fig. 4e , plasmids C and #193). If Δ n is sufficiently large (>>0), the plasmid was maintained (Fig. 4e , plasmids RP4, R6K, and R388). However, these plasmids were less dominant than in the absence of Lin and Pheno (Fig. 2d ), indicating the role of conjugation in their maintenance. That these three plasmids were more difficult to reverse is not surprising, since they carry a small burden, or even a benefit, to R (Supplementary Fig. 3B ). Discussion It is estimated that >50% of all plasmids can be transferred by conjugation 25 . The extent to which conjugation contributes to the difficulty of reversing antibiotic resistance has been debated for decades 16 , 38 , 60 . Some have suggested that conjugation is not a major mechanism responsible for the persistence of plasmids 20 , 23 , 39 , 61 . We believe that the root of this apparent confusion is the lack of precise quantification of plasmid dynamics as affected by conjugation, under environments with varying degrees of selection. Moreover, how conjugation contributes to plasmid persistence in a generic multi-species community has not been previously investigated. Our results demonstrate that the transfer of various conjugal plasmids is sufficiently fast to exhibit persistence. This is consistent with recent work demonstrating the persistence of heavy metal resistance in the absence of positive selection when the resistance was transferrable via conjugation, compared to when it was heritable solely through clonal expansion (the latter of which required positive selection to be maintained) 62 . The plasmids tested here cover six of the major incompatibility groups. Many plasmids encoding ESBL genes spread to diverse species through conjugation 48 , 63 . Our findings may shed light on the apparent paradox regarding the high prevalence of ESBL resistance, despite studies having determined that ESBL plasmids are often costly to maintain 33 , 64 , 65 . Indeed, even in the absence of β-lactam treatment, our findings suggest that ESBL resistance is likely to persist for long periods of time.
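As a back-of-the-envelope illustration of why the combined Lin and Pheno treatment can flip the sign of Δ n , the snippet below reuses the assumed criterion form and plasmid K parameters from the earlier sketch, together with the approximate three-fold (Lin) and four-fold (Pheno) changes quoted above; the exact fold changes are plasmid-specific (Supplementary Table 3 ).

```python
# Illustrative only (assumed criterion form, as in the earlier sketch):
# combined Lin + Pheno treatment applied to the plasmid K parameters.
def eta_crit(kappa, alpha, D=0.05):
    return kappa + D * (alpha - 1.0)

eta_c_treated = 0.01 / 3    # Lin: ~three-fold reduction in conjugation efficiency
kappa_treated = 0.001 * 4   # Pheno: ~four-fold increase in plasmid loss rate

delta_n = eta_c_treated - eta_crit(kappa_treated, alpha=1.02)
print(round(delta_n, 4))    # -0.0017 < 0: the criterion now predicts elimination
```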
We further demonstrated that conjugation-assisted persistence is generally applicable to communities containing multiple plasmids and multiple populations, with each host–plasmid pair defined by a unique critical conjugation efficiency. This is particularly relevant, since HGT is pervasive and extensively occurs in hot spots associated with highly dense and complex population structures, such as in the gut microbiome 54 . That independent plasmid dynamics are additive greatly simplifies our understanding of plasmid dynamics in populations with greater complexity, as well as future work investigating intervention strategies. Several other factors may contribute to the long-term stability of such plasmids in the environment. Indeed, almost half of all plasmids are unable to transfer via conjugation. Co-evolution between the host and plasmid can compensate for the fitness cost 13 , 14 , 24 , 66 , 67 , and recent work has shown that other factors, such as positive selection coupled with compensatory adaptation, can help explain long-term plasmid persistence 24 , 68 . Interestingly, compensatory mutations may also modulate other key processes involved with mobile genetic element (MGE) upkeep by directly or indirectly modulating processes involved in conjugation, such as decreased expression of MGE replication 67 , downregulation of global gene expression 69 , or increased plasmid copy number 68 . Indeed, recent theoretical work emphasized the potentially paradoxical role of these interacting processes, where reducing the fitness cost could mitigate the evolution of higher conjugation rates 70 . Future work investigating the extent to which compensatory mutations individually modulate α , κ , \({\eta _{\rm C}}\) , or some combination thereof, would provide critical insight into predicting evolutionary trajectories that enhance plasmid stability. Population dynamics are another potential contributing factor; one study showed that the presence of alternative hosts can promote the survival of a plasmid unable to persist in monoculture 71 . The physical environment may play an important role in plasmid persistence as well, since HGT dynamics change depending on the spatial structure of the community (reviewed in ref. 6 ). Our findings demonstrate the necessity of inhibiting the conjugation process for effective resistance reversal. In particular, in the absence of active intervention strategies, it is likely that judicious antibiotic use may only suspend the process of continued selection and enrichment, but would be unable to reverse resistance due to conjugation 72 , 73 . To this end, the synergy between plasmid-curing compounds and conjugation inhibitors represents a novel approach for antibiotic adjuvants aimed at targeting the ecological and evolutionary aspects of bacterial pathogenesis 3 , 41 . Methods Strains, growth conditions, and plasmid construction Different E. coli strains were used throughout the study (Supplementary Table 1 ). Derivatives of E. coli strain MG1655 with chromosomal fluorescence and antibiotic resistance were generously provided by the Andersson lab 74 . Recipients B 0 and R 0 each carry the helper F plasmid F HR and express BFP and ampicillin (Amp, or Carb for carbenicillin) resistance (Amp R ), or dTomato and Cm R , respectively. Note that for the multi-plasmid multi-species experiment, B 0 expresses Cm R instead of Amp R (see Supplementary Table 1 for a complete list of strain details). Donor cells (B 0 or R 0 background) contain the mobilization plasmid K from Dimitriu et al.
(denoted Y in the referenced publication) 44 , which carries the yfp gene under the control of the strong constitutive P R promoter, oriT for transfer, and a kanamycin (Kan) resistance gene (Kan R ) (denoted B K ). For conjugation controls, the non-transferrable plasmid K − is used, which is identical to K but without oriT . Upon conjugation, transconjugants become indistinguishable from the donors. For experiments with a more costly plasmid (Fig. 3b ), plasmid C is used. C expresses Cm R and mCherry, and has a p15A replication origin. For a complete list of strains and plasmids used in this study, see Supplementary Table 1 . For all experiments, single clones were grown separately overnight at 37 °C for 16 h with shaking (250 rpm) in Luria-Bertani (LB) broth containing appropriate antibiotics (100 μg/mL Cm, 100 μg/mL Carb, or 50 μg/mL Kan). All experiments were performed using M9 medium (M9CA medium broth powder from Amresco, lot #2055C146, containing 2 mg/mL casamino acid, supplemented with 0.1 mg/mL thiamine, 2 mM MgSO 4 , 0.1 mM CaCl 2 , and 0.4% w/v glucose). Long-term plasmid dynamics for engineered conjugation 16 h overnight cultures (3 mL LB media with appropriate selecting agents, density ~1 × 10 9 CFU/mL) were resuspended in M9 medium and diluted to an initial starting density of ~80 cells/well. Strong initial dilutions were used to generate replicates with a range of initial conditions per well. Depending on the experiment, replicates ranged from 12 to 48 wells. For the one-species one-plasmid conjugation experiment (Fig. 2c ), B 0 and B K cells were mixed in an equal ratio and strongly diluted to an initial density of ~80 cells/well combined. The strongly diluted cell mixture was distributed among 24 replicate wells in a 96-well plate to a final volume of 200 μL/well, supplemented with the appropriate antibiotic (Kan = 0, 0.5, or 2 μg/mL). 96-well plates were covered with an AeraSeal TM film sealant (Sigma-Aldrich, SKU A9224) followed by a Breathe-Easy sealing membrane (Sigma-Aldrich, SKU Z380059). Plates were shaken at 250 rpm and 37 °C for 23 h. This is denoted Day 0. To passage the plates, 198 μL/well of freshly mixed media was distributed into a new 96-well plate, and an intermediate passaging plate containing 198 μL/well of autoclaved diH 2 O was prepared. 2 μL from the previous day’s plate was transferred to the intermediate passaging plate, mixed by pipet, and then 2 μL from the intermediate passaging plate was transferred into the new media-containing plate to achieve daily dilutions of 10,000×. The new plate was sealed using both membranes and placed back into the incubator to shake. This process takes approximately 1 h. An additional 2 μL of cells was added to the intermediate plate to be used for flow cytometry, and the experiments were carried out typically for 14–18 days. The same protocol was used for the one-plasmid two-species experiment (Fig. 3a , 48 replicate wells), for testing the additional, more costly, plasmid C (Fig. 3b , 12 replicate wells), and for the two-plasmid one-species experiment (Fig. 3c , 12 replicate wells). All experiments were initiated with equal volumes of all populations, except the latter, which was initiated with only the two populations carrying each plasmid (B K and B C ). The initial starting density was maintained at ~80 cells/well for each of these three experiments as well. Non-conjugating experiments had an identical setup, with plasmid K − substituted for K.
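The passaging scheme above maps directly onto the discrete-dilution variant of the model mentioned in the Results. The sketch below (an assumed minimal model, not the authors' code) simulates plasmid-free (s0) and plasmid-bearing (s1) subpopulations with logistic growth, a mass-action conjugation term, and first-order plasmid loss, interrupted by a 10,000× dilution every 24 h; densities are normalized to the carrying capacity, and η C = 0.025 h −1 is the value used in the Fig. 2c modeling.

```python
# Minimal sketch (assumed model form, not the authors' code): one plasmid in
# one host under daily 10,000x passaging. Densities are normalized to the
# carrying capacity N_m, so the conjugation term eta_c * s1 * s0 has units 1/h.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0       # growth rate of plasmid-free cells (1/h), illustrative
alpha = 1.02   # plasmid cost: plasmid-bearing cells grow at mu / alpha
eta_c = 0.025  # conjugation rate constant (1/h), as in the Fig. 2c modeling
kappa = 0.001  # plasmid loss rate constant (1/h)

def rhs(t, y):
    s0, s1 = y
    g = 1.0 - (s0 + s1)                                # logistic saturation
    ds0 = mu * g * s0 - eta_c * s1 * s0 + kappa * s1   # loss returns cells to s0
    ds1 = (mu / alpha) * g * s1 + eta_c * s1 * s0 - kappa * s1
    return [ds0, ds1]

y = np.array([0.5e-4, 0.5e-4])                  # equal mix after strong dilution
for day in range(1, 15):
    y = solve_ivp(rhs, (0.0, 24.0), y, rtol=1e-8).y[:, -1]
    y = y / 1e4                                 # 10,000x daily dilution
    print(f"day {day}: plasmid-bearing fraction = {y[1] / y.sum():.3f}")
```

With these illustrative parameters, the plasmid-bearing fraction climbs despite the cost, qualitatively reproducing conjugation-assisted persistence; setting eta_c to zero reproduces the slow loss seen for K − .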
Flow cytometry calibration To calibrate flow cytometry for accurate quantification of plasmid-free and plasmid-carrying populations, we mixed various ratios of the cell populations B 0 , R 0 , B K− , and R K− in different combinations with one another (Supplementary Fig. 2A ). Volume ratios were used as a proxy for cell ratios since the overnight CFU counts for all populations were statistically indistinguishable ( P > 0.5, two-sided t -test). Cell mixtures were calibrated at high-density (400×) and low-density (1000×) dilutions from the overnight culture, resulting in a flow speed of 3000–6000 cells/s. Fluorescent gating cutoffs were determined such that they were both obvious by eye and resulted in <5% differences between the population mixtures (as determined by volume) when quantified; 5% was chosen as this is the detection limit of the machine. All data quantification was performed using MATLAB. The same threshold values were used to gate every experiment. Flow measurements were always performed on live cells, in diH 2 O, on the actual day of the experimental time point, within 1 h of removing the plate from the incubator. Quantifying modeling parameters α , η , and κ α was determined using plate-reader measurements of B and R growth in the presence of 0, 0.5, or 2 μg/mL Kan using a Perkin-Elmer Victor ×3. Overnight cultures of cells (prepared as described previously) were resuspended in M9 media and diluted 10,000× prior to starting the experiment. Four technical replicates per concentration were used for quantification. Growth rates were quantified by log-transforming the growth curves, using K-means clustering to non-arbitrarily locate the region of longest exponential growth, smoothing, and fitting the linear portion (all processing was done using MATLAB). α is determined by normalizing the growth rate obtained from the plasmid-free population (R 0 or B 0 ) by that of the plasmid-carrying population (R K or B K ). These experiments were performed at the three antibiotic concentrations used in the main experiments to accurately model the individual dynamics (Supplementary Figs. 1 C, 4 A, B). The growth temperature used to quantify α corresponds to that used for the long-term experiments (e.g., 37 °C for R/R K , and 30 °C for R/R RP4 ; see Supplementary Table 1 for each specific condition). Conjugation efficiency was estimated using the protocol established by Lopatkin et al. 34 . 16 h overnight cultures of B K and R 0 (3 mL LB media with appropriate selecting agents, density ~1 × 10 9 CFU/mL) were resuspended in M9 medium (0.4% w/v glucose) and mixed in a 1:1 ratio using 400 μL. Mixtures were incubated at room temperature (25 °C) for 1 h in the absence of shaking. R 0 and B K are used pairwise in these experiments to take advantage of the different resistance markers: Kan (50 μg/mL) is used to quantify donors, Cm (100 μg/mL) for recipients, and the Kan+Cm combination can distinguish transconjugants. Parental densities were obtained by serially diluting the mixture to a net 5 × 10 7 -fold and spreading four-to-six replicate measurements onto the respective single-antibiotic-containing plates. To quantify transconjugants (R K , as a result of transfer from B to R), cells were plated at a dilution of 40-fold onto plates containing both antibiotics. Plates were incubated overnight at 37 °C and CFUs were counted the following day.
The conjugation efficiency is estimated as \(\eta = \frac{{R^{\mathrm{K}}}}{{B^{\mathrm{K}}R^0\Delta t}}\) , with units of mL cell −1 h −1 , and is dependent on the cell density. For all modeling and plasmid persistence predictions, we use the cell-density-independent rate constant \({\eta _{\rm C}}\) , obtained by normalizing η with respect to the carrying capacity. That is, we let \({\eta _{\rm C}} = \eta N_m\) , where \(N_m = 10^9\) cell/mL (the carrying capacity). \({\eta _{\rm C}}\) has units of h −1 . To estimate the plasmid loss rate, experiments were initiated with 100% B K− cells, carrying the non-transferrable variant of K. Cells were diluted 10,000× daily and monitored for approximately 1 week, at which point sufficient plasmid loss was observed (Supplementary Fig. 1A ) such that exponential functions could be fit to the loss curves. The percentage of B K− (B K− with no antibiotics normalized by B K− with 50 μg/mL Kan) is fit to an exponential decay curve such that \(\frac{{\% B_0^{\mathrm{K}}}}{{\% B_{\mathrm{Kan}}^{\mathrm{K}}}} = x_1{\mathrm{e}}^{ - x_2t}\) with t in days, where the calculated plasmid loss rate constant for plasmid K is \(\kappa _{\mathrm{K}} = \frac{{x_2}}{{24}}\) h −1 . Similarly, the same procedure is performed to determine the effect of Pheno on κ K , where %B K− treated with Pheno is used instead of %B K− without any treatment, e.g., \(\frac{{\% B_{\mathrm{Pheno}}^{\mathrm{K}}}}{{\% B_{\mathrm{Kan}}^{\mathrm{K}}}} = x_1{\mathrm{e}}^{ - x_2t}\) . Conjugation dynamics for native plasmids For all plasmids tested, those previously characterized were measured using known resistance markers from the literature (e.g., R388, R6K, RP4, PCU1, K, and C). ESBL pathogen conjugation ability and resistance spectra were established in a previous publication 34 . In particular, Carb resistance was transferred from conjugation-capable ESBL donors to the R 0 recipient for further characterization. R carrying one of the nine tested plasmids is denoted R P for generality in this description (Supplementary Table 1 ). Incompatibility groups are determined via MLST, courtesy of the Anderson group 75 . Some ESBLs indicate multiple MLST identifications, suggesting either fused incompatibility groups or multiple plasmids. Indeed, gel electrophoresis indicates multiple plasmids in the recipient strains for these cases (not shown). All additional plasmids were transferred to R for all other measurements. Growth rates of the additional plasmids R P were obtained using the same methodology as for the engineered system, namely log-transforming growth curves from a plate reader and fitting the linear portion to a regression. α was calculated in the same way as described previously, namely by normalizing the growth rate of R 0 by the growth rate of R P (Supplementary Fig. 3B ). Conjugation efficiency estimates for all native plasmids were performed using R 0 as the recipient, except for plasmid C, which used B 0 as the recipient. For all ESBL conjugation efficiency estimates, the native strains were used as donors. Donors with resistance markers compatible with the plasmid and R 0 were used for the remaining plasmids (Supplementary Table 1 , Supplementary Fig. 3A ). For the long-term experiments, equal volumes of R 0 and R P were mixed and diluted 10,000× from the overnight culture into a 96-well plate. Plates were sealed using both sealing membranes and grown for 23 h at 30 °C or 37 °C (see Supplementary Table 1 ) with shaking at 250 rpm. Every few days, the percentage of R P remaining in the mixture was quantified using CFU.
The percentage was obtained by spreading diluted culture onto Cm and Cm+Carb plates at a dilution yielding 20–150 countable colonies, typically a net 1 × 10 7 -fold (adjusted depending on the growth rate of the strain being tested). For all plasmid information, see Supplementary Table 1 . Identical protocols for the engineered system and native plasmids were used for all inhibition experiments, where the media was supplemented fresh daily with 3.25 mM Lin, 120 μM Pheno, or both (Fig. 4e , Supplementary Fig. 6C ). To initiate the three-species (distinct E. coli strains) three-plasmid experiment, six populations (R 0 , Y 0 , B 0 , R R6K , Y R388 , and B RP4 ) were mixed together in arbitrary fractions. The mixture was propagated every 24 h at a dilution of 10,000×. To quantify population fractions, flow cytometry was used to distinguish red, yellow, and blue fluorescence, and measurements were taken daily. To quantify plasmid fractions every 3–4 days, selective plating was performed to determine the fraction of each individual plasmid (CFU from plates containing both Cm and either Strep, Kan, or Tm divided by the CFU obtained from Cm plates). To determine competition between the three populations, R 0 , B 0 , and Y 0 only were mixed and similarly propagated. No CFU measurements were taken as there were no plasmids present. Note that for this set of experiments, B 0 differs from the B 0 used in the previous ones, as it expresses Cm R chromosomally instead of Amp R . Estimating \({\boldsymbol{\eta}} _{\mathbf{Crit}}\) The cost of each plasmid was determined relative to R 0 , the plasmid-free counterpart (Supplementary Figs. 3 , 6 , and Supplementary Table 3 ). We assume the plasmid loss rate to be small for all plasmids (0.001 h −1 , as with K, Supplementary Fig. 1A ). Indeed, large plasmids (like most conjugal plasmids, which are >20 kb) 25 are typically low copy number to minimize burden 60 , 76 , 77 , and therefore often employ one or more active partitioning mechanisms to promote stable inheritance 78 , 79 , 80 . To compare with \(\eta_{\rm Crit}\) , \({\eta _{\rm C}}\) was determined using R 0 as the recipient, harvested during stationary phase (Supplementary Fig. 3A , Supplementary Table 3 ). Similar to K, we account for the physiological influence on η C by using an upper estimate 2.5 × 10 3 -fold greater than the measured value for comparison with \({\eta _{\rm Crit}}\) . Data availability The authors declare that all the relevant data supporting the findings of the study are available in this article or its Supplementary Information files, or from the corresponding author upon request.
Researchers have discovered that reducing the use of antibiotics will not be enough to reverse the growing prevalence of antibiotic resistance for some types of bacteria. Besides passing along the genes bestowing antibiotic resistance to their offspring, many bacteria can also swap genes amongst themselves through a process called conjugation. There has long been a debate, however, as to whether this process occurs fast enough to spread through a population that is not under attack by antibiotics. In a new study, researchers from Duke University believe they have found a definitive answer to that question. Through a series of experiments with bacteria capable of conjugation, they show that all of the bacteria tested share genes fast enough to maintain resistance. They also show, however, that there are ways to disrupt the process and reverse antibiotic resistance. The results appear online on Nov. 22 in the journal Nature Communications. "The results came as a surprise to me when I first saw the data," said Lingchong You, the Paul Ruffin Scarborough Associate Professor of Engineering at Duke University and corresponding author on the paper. "For all of the bacteria we tested, their conjugation rate is sufficiently fast that, even if you don't use antibiotics, the resistance can be maintained, even if the genes carry a high cost." Most resistance to antibiotics arises and spreads through natural selection. If a few lucky bacteria have genes that help them survive a round of antibiotics, they quickly parent the next generation and pass on those genes. Many of these genes, however, come at a cost. For example, a mutation may allow a bacterium to build a thicker membrane to survive a particular antibiotic, but that mutation might also make it more difficult for the cell to reproduce. Without the selective pressure of antibiotics killing off the competition, bacteria with this mutation should disappear over time. But when the genes responsible for resistance can also be swapped between cells, the equation gets more complicated. In favor of maintaining the resistance is the rate at which the genes are shared. Working against it is the previously mentioned biological cost of the genes, and the natural error rate in genes when they are passed on. This movie depicts the basic process of antibiotic resistance spreading through horizontal gene transfer. Two strains of bacteria are grown together for four hours. One strain appears red and carries resistance to one type of antibiotic, while the other carries mobile genes that appear green and provide resistance to another antibiotic. After four hours, both antibiotics are applied, and the red bacteria that have received the green resistance genes through horizontal gene transfer appear yellow and take over the culture because they are resistant to both antibiotics. Credit: Allison Lopatkin, Duke University "There have been some studies on how critical conjugation is to maintaining resistance despite its cost, but there has been a lack of careful and well-defined experiments to come to definitive conclusions," said You. "That's where Allison has made a central contribution. Her incredibly thorough measurements allow us to draw our conclusions with high confidence." Allison Lopatkin, a doctoral student in You's laboratory and first author of the paper, carefully measured the rate of conjugation and antibiotic resistance in pathogens for more than a month.
The strains were obtained through a parallel project with Duke Health, in which You is trying to determine just how common conjugation is amongst pathogens. So far, You has found that more than 30 percent of the bacterial pathogens he has tested spread resistance through conjugation. And of those, nine were further tested by Lopatkin to see how well they would maintain their resistance in the absence of antibiotics. "Every single clinical strain we tested maintained its resistance through conjugation even without the selective pressure of antibiotics," said Lopatkin. The results indicate that—at least for bacteria that swap resistance genes—simply managing the amount of antibiotics being used will not turn the tide on the growing problem of resistance. To make any headway, according to You and Lopatkin, drugs will also be needed that both stop the sharing of genes and decrease the rate at which they are passed on through reproduction. Luckily, such drugs already exist, and there may be many more out there waiting to be discovered. "We did the same experiments with one drug that is known to inhibit conjugation and another that encourages resistance genes to be lost," Lopatkin said. "We found that without the presence of antibiotics we could reverse the bacteria's resistance in four of the pathogens we tested and could stop it from spreading in the rest." One of the drugs is a benign natural product and the second is an FDA-approved antipsychotic. While the team has filed a provisional patent for the use of the combination to reverse antibiotic resistance, they hope future work will reveal even better options. "As a next step, we're interested in identifying additional chemicals that can fill these roles more effectively," said You. "Historically, when researchers screened huge libraries for medicines, they focused on drugs that can kill the bacteria. But what our studies suggest is that there is a whole new universe where you can now screen for other functions, like the ability to block conjugation or to induce the loss of resistance genes. These chemicals, once proven safe, can serve as adjuvants of the standard antibiotic treatment, or they can be applied in an environmental setting as a way of generally managing the spread of antibiotic resistance."
10.1038/s41467-017-01532-1
Chemistry
Predicting efficient oxygen reduction electrodes for ceramic fuel cells
Shuo Zhai et al, A combined ionic Lewis acid descriptor and machine-learning approach to prediction of efficient oxygen reduction electrodes for ceramic fuel cells, Nature Energy (2022). DOI: 10.1038/s41560-022-01098-3 Journal information: Nature Energy
https://dx.doi.org/10.1038/s41560-022-01098-3
https://phys.org/news/2023-01-efficient-oxygen-reduction-electrodes-ceramic.html
Abstract Improved, highly active cathode materials are needed to promote the commercialization of ceramic fuel cell technology. However, the conventional trial-and-error process of material design, characterization and testing can make for a long and complex research cycle. Here we demonstrate an experimentally validated machine-learning-driven approach to accelerate the discovery of efficient oxygen reduction electrodes, where the ionic Lewis acid strength (ISA) is introduced as an effective physical descriptor for the oxygen reduction reaction activity of perovskite oxides. Four oxides, screened from 6,871 distinct perovskite compositions, are successfully synthesized and confirmed to have superior activity metrics. Experimental characterization reveals that decreased A-site and increased B-site ISAs in perovskite oxides considerably improve the surface exchange kinetics. Theoretical calculations indicate that such improved activity is mainly attributed to the shift of electron pairs caused by the polarization distribution of ISAs at the A and B sites, which greatly reduces the oxygen vacancy formation energy and migration barrier. Main Functional materials play an important role in renewable, green energy technologies, which are of strategic importance for achieving carbon neutrality. The solid oxide fuel cell (SOFC), as a representative green electrochemical device, has attracted great attention for its high energy efficiency, low emissions and fuel flexibility 1 , 2 , 3 , 4 , 5 . However, commercialization of ceramic fuel cell technology has been largely hampered by its high operating temperatures (800–1,000 °C), at which difficult sealing, high operational costs and accelerated material degradation are the main challenges 6 , 7 . Lowering the operating temperature while maintaining sufficient power output is a critical step towards its wider application and portability 8 . Over the years, numerous efforts have been devoted to exploring functional materials to realize a reduction in SOFC operating temperatures 9 , 10 . However, the trial-and-error process of material design, characterization and complicated testing procedures can make for a long and tedious research cycle, especially considering the vast space of non-precious-metal-based perovskite candidates 11 . Given this limitation, revealing the relationship between parameters and properties is of high merit in material design 12 . Multiple descriptors of the activity of perovskite oxide electrodes have previously been proposed to provide guidance for the prediction of new materials in a more efficient way 13 . Theoretically, the bulk oxygen p -band centre 14 , the bulk vacancy formation energy 15 and the charge-transfer energy 16 were reported to be closely associated with the catalytic activities of the oxygen reduction/evolution reaction (ORR/OER) under various conditions. For instance, a huge number of perovskite compositions with a high oxygen p -band centre value have been screened as SOFC cathode candidates by employing high-throughput density functional theory (DFT) methods 17 . Unfortunately, such studies rely heavily on complex computational simulation methods that require time-consuming, costly simulation and calculation processes; thus, the key constraints affecting screening efficiency still exist.
Physically, the A-site ionic electronegativity 18 , the number of outer electrons 19 , σ *-orbital occupation 12 and the nominal B-site oxidation state 20 were indexed as intrinsic OER/ORR activity descriptors; however, most of these single physical descriptors are specific to a particular system, such as perovskite oxides with fixed B-site cations, which limits their wider applicability. A particular area of interest in recent studies is to acquire the potential information hidden behind existing datasets and establish the trend of materials discovery through data-driven methods 21 . For instance, machine-learning techniques with powerful capabilities to reorganize information structure and support multidimensional features are broadly used in materials informatics 22 . The material information available in published experimental data and open-source databases such as the Inorganic Crystal Structure Database has substantially democratized access to high-quality datasets for scientific researchers. After decades of development, large amounts of published experimental data are available for SOFC cathodes, which are worth collecting and organizing to serve as a cornerstone for deriving new electrode candidates. To construct an accurate machine-learning model for rapid and efficient exploration of active electrode materials, high-quality datasets, appropriate descriptors for perovskite compositions and the selection of a suitable regression model are indispensable. However, we currently lack a representative physical descriptor that accurately reflects the mechanistic rationale of the ORR process at elevated temperatures. To bridge this gap, it is imperative to search for an effective physical descriptor, determine the control rule for the ORR kinetics and identify a reliable regression model for data mining and reconstruction. In this work we propose an experimentally validated machine-learning-driven approach to screen highly active cathodes for ceramic fuel cells from a huge pool of perovskite compositions via the ionic Lewis acid strength (ISA) descriptor, which is strongly associated with the ORR kinetic rate at elevated temperatures. To verify the predictions, we synthesize four perovskite oxides screened from 6,871 distinct compositions and further conduct characterization and electrochemical tests. Sr 0.9 Cs 0.1 Co 0.9 Nb 0.1 O 3 (SCCN) in particular exhibits excellent ORR activity with an extremely low area-specific resistance (ASR). Owing to the regulation of ISA by reducing the A-site and increasing the B-site ISAs, increased surface exchange kinetics are achieved. Density functional theory calculations reveal that the polarization distribution of ISA is associated with an optimal structure with a reduced V O formation energy and migration barrier due to the shift of electron pairs. Workflow for discovery of oxygen reduction electrodes The overall workflow for the discovery of highly active oxygen reduction electrodes is shown in Fig. 1 , which includes three major parts: machine-learning model training and material screening, experimental verification and DFT analysis. We first collected, as the initial dataset, the ASR values of the various materials as activity indicators via electrochemical impedance spectroscopy (EIS) based on symmetrical cell tests. Using rational descriptors to denote various compositions is of vital importance for eventually obtaining a valid prediction of activity.
Here we chose nine ionic descriptors, including the A- and B-site ISA values (AISA and BISA), ionic electronegativities (AIEN and BIEN), ionic radii ( R A and R B ) 23 and ionization energies (AIE and BIE) 24 , as well as the tolerance factor ( t ). The complete dataset—containing the perovskite composition, electrolyte type and ASR values at 650 °C and 700 °C—is listed in Supplementary Table 1 . Eight different regression methods were then implemented to fit the prepared dataset, including four linear regression methods (ordinary least-squares, Lasso, Ridge and elastic net regression) and four nonlinear regression methods (support vector regression, random forest, Gaussian process regression and artificial neural networks (ANNs); Supplementary Fig. 1a–g and Fig. 2a ). The regression model with the best fitting result in this work was used to predict the ASR values of unexplored materials. To verify the predictions, we selected potential materials with low predicted values for synthesis, characterization and testing. Finally, we performed DFT calculations to obtain the electronic structure information, clarifying the role of a specified descriptor in the ORR process. Fig. 1: The overall workflow diagram. Machine-learning model training and material screening, experimental verification and DFT analysis. θ is the X-ray incidence angle. Fig. 2: Model evaluation and descriptor importance degree analysis. a , Experimentally measured polarization resistance versus predicted polarization resistance of a set of materials at 700 °C in the training and test sets of the ANN model and the screened materials in this work. b , c , Feature importance ( b ) and feature combination importance (FCI) ( c ) of the ionic descriptors on the basis of the sensitivity analysis of the ANN model. The error bars in b and c , which represent the s.d. of ten independent training runs, are presented as mean values ± s.d. d – g , Correlation with intrinsic ORR activity (log 10 (ASR)) trends as a function of AISA and BISA ( d ), R A and R B ( e ), AISA + BISA ( f ) and R A + R B ( g ). Three-dimensional visualization graphs of f and g are shown in Supplementary Fig. 2 . The corresponding data in d – f are provided in Supplementary Table 1 . Machine-learning model training and analysis We used the mean square error (MSE) as the training and evaluation metric; a lower MSE indicates better performance. As shown in Table 1 , the nonlinear methods considerably outperform the linear models, indicating that the relation between ASR and the ionic descriptors is complex and implicit (see Supplementary Note 1 for detailed discussions). Among all methods, the ANN model achieves the best ASR fitting result, with MSE values of 0.0090 and 0.0131 Ω cm 2 on the training and test sets, respectively. Table 1 The performance comparison of the different regression models on training and test sets Next, the importance degree of each descriptor was analysed to obtain a deeper understanding of the generated ANN model. As shown in Fig. 2b , all of the descriptors contribute to the ANN model, as they are strongly related to the physical and chemical features of the A- or B-site cations.
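For intuition, the sketch below shows what this regression stage can look like in code (a minimal illustration, not the authors' implementation): scikit-learn's MLPRegressor stands in for their ANN, and the feature matrix X and target y are random placeholders for the nine-descriptor dataset of Supplementary Table 1; the network size and hyperparameters are likewise assumptions.

```python
# Minimal sketch (not the authors' implementation): fitting log10(ASR) from
# the nine ionic descriptors with an ANN, as in the workflow described above.
# X and y are random placeholders for the dataset of Supplementary Table 1.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 9))          # AISA, BISA, AIEN, BIEN, R_A, R_B, AIE, BIE, t
y = rng.random(200)               # log10(ASR) values (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)

print("train MSE:", mean_squared_error(y_tr, ann.predict(scaler.transform(X_tr))))
print("test MSE:", mean_squared_error(y_te, ann.predict(scaler.transform(X_te))))
```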
The BISA shows the greatest importance degree in the model, and thus it is likely to have a strong correlation with the intrinsic ORR activity; B-site transition metal cations with various oxidation states are known to serve as the adsorption and desorption sites for ORR intermediates 25 , and the BISA indicates their ability to accept a pair of electrons from molecular oxygen or oxygen intermediates, which act as Lewis bases in the reaction process. Note that a low importance does not mean that the corresponding descriptor is performance-independent, as multiple descriptors may encode the same information, and the model selects one of them. We further analysed the importance of different feature combinations (Fig. 2c ). The ISA combination exhibits the greatest importance, suggesting it can be identified as an influential factor for ORR activity. We then presented the intrinsic ORR activity trends as a function of AISA, BISA and a combination of the two (Fig. 2d,f ). Despite some deviations, a general linear trend is observed across most data points, where a lower AISA and a higher BISA lead to better ORR activity within a certain range (Fig. 2f ). It should be mentioned that neither AISA nor BISA alone exhibits such a good relationship with ORR activity, which emphasizes the importance of the synergy of multiple descriptors in model construction. By contrast, R A and R B , which have also been reported as effective descriptors for developing perovskite catalysts, do not show a strong correlation (Fig. 2e,g ). The dataset at 650 °C was then also analysed to verify the importance of ISA. Similarly, a good fitting result was obtained by the ANN model (Supplementary Fig. 3a ); BISA and the AISA + BISA combination also show the strongest importance degrees according to the feature importance and feature combination importance analyses, respectively (Supplementary Fig. 3b,c ). Screening and verification The flowchart for screening highly active materials in terms of ASR is described in Supplementary Fig. 4 . We preliminarily screened the doping cation types of potential materials according to the ISA value. The A- and B-site candidate elements were labelled in the periodic table (Supplementary Fig. 5 ). By limiting the stoichiometric ratio of doping elements and the composition increment of host elements, 6,871 distinct perovskite oxide compositions meeting the principle of charge balance were generated automatically. A detailed generation process and the corresponding pseudocode are shown in Supplementary Figs. 6 and 7 , respectively. The predicted ASR values of these automatically generated materials were obtained through the ANN regression model (see Supplementary Data 1 ). We found that the predicted values of many substances are lower than the ASR of the landmark cathode material Ba 0.5 Sr 0.5 Co 0.8 Fe 0.2 O 3−δ (BSCF) 26 , indicating that there is a whole range of unexplored potential materials. To verify the availability and the predicted low ASR values of the materials, we selected four perovskite oxides for synthesis and characterization: SCCN, Ba 0.4 Sr 0.4 Cs 0.2 Co 0.6 Fe 0.3 Mo 0.1 O 3 (BSCCFM), Ba 0.8 Sr 0.2 Co 0.6 Fe 0.2 Nb 0.2 O 3 (BSCFN) and Sr 0.6 Ba 0.2 Pr 0.2 Co 0.6 Fe 0.3 Nb 0.1 O 3 (SBPCFN). The detailed selection criteria for these four materials can be found in Supplementary Note 2 .
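The automated composition generation described above lends itself to a compact implementation. The sketch below is an illustrative reconstruction (not the pseudocode of Supplementary Fig. 7): it enumerates Sr/Cs and Co/Nb fractions on a 0.1 grid and keeps only charge-balanced ABO3 compositions; the two-cation pools and the nominal Co 4+ charge are simplifying assumptions.

```python
# Illustrative sketch of charge-balanced composition enumeration for ABO3
# perovskites (not the authors' code from Supplementary Fig. 7). The cation
# pool and nominal charges are assumptions; the real candidate pool was
# pre-screened by ISA value (Supplementary Fig. 5).
from itertools import product

A_SITE = {"Sr": 2, "Cs": 1}   # host and dopant with nominal charges
B_SITE = {"Co": 4, "Nb": 5}   # Co taken as nominally 4+ here
STEPS = [i / 10 for i in range(11)]  # composition increment of 0.1

def total_charge(a_frac_cs, b_frac_nb):
    a = (1 - a_frac_cs) * A_SITE["Sr"] + a_frac_cs * A_SITE["Cs"]
    b = (1 - b_frac_nb) * B_SITE["Co"] + b_frac_nb * B_SITE["Nb"]
    return a + b

# Keep compositions whose cation charge offsets the -6 of the O3 sublattice.
# With these assumptions, balance forces equal Cs and Nb fractions, and the
# 0.1/0.1 solution is exactly the SCCN composition synthesized in this work.
balanced = [(x, y) for x, y in product(STEPS, STEPS)
            if abs(total_charge(x, y) - 6) < 1e-9]
print(balanced)  # includes (0.1, 0.1), i.e., Sr0.9Cs0.1Co0.9Nb0.1O3 (SCCN)
```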
As expected, the X-ray diffraction (XRD) patterns show that the four powders all comprise a pure cubic phase (space group: Pm -3 m ), indicating the practical availability of these potential materials predicted by machine learning (Fig. 3a ). Electrochemical impedance spectroscopy measurements were then performed in an Sm 0.2 Ce 0.8 O 1.9 (SDC) electrolyte-supported symmetrical cell from 450 to 700 °C to evaluate their ORR activities. Figure 3b summarizes the Arrhenius plots of the ASRs of the four oxides (Fig. 3c and Supplementary Fig. 8a–c ). Compared with the benchmark material BSCF 26 , SCCN and BSCCFM exhibit a much lower ASR at all of the investigated temperatures. The ASR values of SCCN and BSCCFM are only 0.0101 and 0.0113 Ω cm 2 , respectively, at 700 °C, which are close to the predicted values generated by machine learning (Fig. 2a ). The ASR values of the four materials also follow the trend of the ISA descriptor, further corroborating the strong correlation between ISA and the ORR activity (Supplementary Fig. 2a ). As expected, the activation energies ( E a ) of the four materials are lower than that of BSCF, implying their high potential as cathodes of reduced-temperature SOFCs. Fig. 3: Structure and electrochemical performance of the synthesized perovskite oxide samples. a , b , XRD patterns ( a ) and Arrhenius-type plots ( b ) of the ASR values of SCCN, BSCCFM, BSCFN and SBPCFN. The solid lines in b represent the least-squares fitting for the ASR values. c , Nyquist plots for the SCCN cathode in an SDC-based symmetrical cell. d , Distribution of relaxation time analysis of the SCCN cathode at 450–700 °C. e , Distribution of relaxation time analysis of the SCCN, BSCCFM, BSCFN and SBPCFN cathodes at 500 °C and 550 °C. f , Comparison of the R HF and R IF values of SCCN, BSCCFM, BSCFN and SBPCFN, with an equivalent circuit in the inset, where CPE denotes constant phase element. To elucidate the ORR kinetics of these screened cathode materials, the EIS spectra were analysed via the distribution of relaxation time (DRT) model 27 (Fig. 3d and Supplementary Fig. 9a–c ). The stage of the ORR process recorded in the 2–500 Hz range exhibits significant thermal activation, and the impedance increases exponentially as the temperature decreases. We also compared the DRTs of the four materials measured at 550 and 500 °C (Fig. 3e ). As illustrated in Fig. 3e , the electrochemical processes could be deconvolved into three peaks denoted as high- ( P HF ), intermediate- ( P IF ) and low-frequency ( P LF ) (Supplementary Note 3 ); P HF and P IF show distinct bulges, whereas the P LF peaks are hardly observed in the DRT plots, implying good mass transfer processes in the cathode 28 . This could be attributed to the porous cathode morphology, which allows for rapid gas diffusion and supply for electrochemical reactions (Supplementary Fig. 10 ). To quantify the contributions of P HF and P IF to the integral area of the peak impedance, an equivalent circuit model, R Ω – ( R HF /CPE HF ) – ( R IF /CPE IF ), was used to fit the EIS spectra at 450–550 °C (Fig. 3f ). The result indicates that charge transfer is not the limiting factor, as the R HF values of these four oxides are similar and account for only a small fraction of the total polarization impedance; by contrast, the R IF values exhibit a large difference and are characterized by pronounced thermal activation.
As seen, the R IF value of the SCCN cathode is much lower than those of the other three materials; thus it has a faster surface oxygen-transfer process, which contributes to its excellent ORR activity. We also assessed the stability of the SCCN electrode: the ASR was kept stable at ~0.088 Ω cm 2 during the 800 h test at 550 °C in air, exhibiting good electrochemical stability based on the SDC electrolyte (Fig. 4a ). The performance of the SCCN electrode was also evaluated in SDC + Ni cermet-anode-supported single cells (Fig. 4b ). It achieves maximum power densities of 2.05, 1.52 and 1.19 W cm –2 at 650, 600 and 550 °C, respectively, which are appreciably higher than those of most single cells with well-known cathode materials. As shown in the cross-sectional morphology, the thicknesses of the electrolyte and the SCCN electrode are ~14 µm and ~10 µm, respectively (Fig. 4c ). Moreover, the single cell with the SCCN cathode exhibits good stability at 200 mW cm −2 at 450 °C operating on H 2 fuel (Supplementary Fig. 11 ). These results confirm the superior electro-activity of the SCCN electrode, demonstrating the effectiveness of the ANN model for ceramic fuel cell electrode prediction. Fig. 4: Symmetrical cell stability and single cell performance based on the SCCN cathode. a , Area-specific resistance values of SCCN in a symmetrical cell measured for 800 h in air at 550 °C. b , I – V – P curves of a single cell with the configuration Ni + SDC|SDC|SCCN at 450–650 °C. c , Scanning electron microscopy cross-sectional images of a single cell with the configuration Ni + SDC|SDC|SCCN. Morphology and oxygen-transfer-related characterization To better understand these highly active electrodes predicted by the ANN regression model, we took BSCCFM and SCCN as examples for further characterization. Perovskite solar cells with Cs + as the A-site dopant cation have recently exhibited an enhanced crystal structure, manifesting as improved thermal stability, humidity resistance and photostability 29 ; however, Cs + , which has an extremely low ISA, is rarely used in ceramic fuel cell electrodes. In this case, the morphology and element distribution of the BSCCFM powder were examined by high-angle annular dark-field imaging and elemental mapping (Fig. 5a,b ). The sample exhibits a bulk morphology with a size of ~500 nm and a homogeneous distribution of elements, suggesting the incorporation of caesium into the perovskite oxide. Fig. 5: Morphology of BSCCFM. a , Scanning transmission electron microscopy image of BSCCFM. b , Scanning transmission electron microscopy–energy-dispersive X-ray spectroscopy mappings of the elemental distributions of barium, strontium, caesium, cobalt, iron, molybdenum and oxygen in BSCCFM. To investigate the effect of the doping elements, we also synthesized the cornerstone materials Ba 0.5 Sr 0.5 Co 0.7 Fe 0.3 O 3 (BSCF73) and SrCoO 3 . BSCF73 and BSCCFM show the same cubic phase, whereas SrCoO 3 exhibits a hexagonal phase that—on the basis of the XRD patterns and the structural refinement results—is different from SCCN (Fig. 6a and Supplementary Figs. 12 and 13 ). It has been reported that the Mo 6+ dopant will cause lattice contraction of BSCF, as the ionic radius of Mo 6+ is smaller than those of Sr 2+ and Ba 2+ (ref. 30 ); however, we found the main peak position of BSCCFM is consistent with that of BSCF73, further suggesting the successful incorporation of Cs + with its large ionic radius. Fig. 6: Oxygen-transfer-related characterization.
a , X-ray diffraction patterns of BSCF73 and BSCCFM. b , Electrochemical impedance spectroscopy curves at various \(P_{\mathrm{O_2}}\) (atm) for BSCCFM at 600 °C. c , Dependence of polarization resistance on \(P_{\mathrm{O_2}}\) (atm) for BSCCFM and BSCF73 at 600 °C. The solid lines represent least-squares fitting for the ASR values. d , Deconvolution of the O 1s XPS curves for BSCCFM (top) and BSCF73 (bottom). In the fitted XPS spectra: dots, raw data; red line, overall fitting envelope; blue peak, H2Oad; orange peak, Oadsorbed; green peak, Olattice. e , Thermogravimetric curves and oxygen non-stoichiometry increment analysis as a function of temperature for BSCCFM and BSCF73. f , The electronic conductivity of BSCCFM and BSCF73. g , Lewis acid strengths for Cs+, Ba2+ and Sr2+ cations with twelvefold coordination, and Co3+, Co4+, Fe3+, Fe4+ and Mo6+ cations with sixfold coordination. h , Schematic diagram of oxygen vacancy formation in BSCCFM. We resorted to EIS techniques to verify whether the enhanced performance of BSCCFM can be correlated with optimal surface oxygen exchange kinetics within the perovskite lattice. The dependence of ASR on \(P_{\mathrm{O_2}}\) was investigated at 600 °C within the oxygen partial pressure range of 0.1 to 1 atm (Fig. 6b and Supplementary Fig. 14 ). The specific rate-limiting step in the ORR process can be ascertained by analysing the change in ASR at different oxygen partial pressures, expressed as ASR ∝ \(P_{\mathrm{O_2}}^{-m}\) (Supplementary Note 3 ) 31 . The EIS spectra of BSCF73 and BSCCFM show a similar dependence on oxygen partial pressure, and the corresponding m-values of BSCF73 and BSCCFM are fitted to be 0.34 and 0.39, respectively (Fig. 6c ). Note that when m = 0.375, \(\mathrm{O_{ads}} + \mathrm{e}^- + V_{\mathrm{O}(s)}^{\bullet\bullet} \leftrightarrow \mathrm{O}_{\mathrm{O}(s)}^{\bullet}\) is the rate-determining step. This result indicates that the ORR process of BSCF73 and BSCCFM is mainly controlled by the surface vacancy concentration. To verify this, we analysed the O 1s X-ray photoelectron spectroscopy (XPS) spectra of BSCF73 and BSCCFM and compared their thermogravimetric analysis curves. The O 1s curves were deconvoluted into three peaks, corresponding to lattice oxygen (Olattice), less electron-rich oxygen species (also known as adsorbed oxygen, Oadsorbed) and adsorbed water (H2Oad) (Fig. 6d ) 32 . According to the fitting results (Supplementary Table 3 ), BSCCFM possesses a higher concentration of Oadsorbed species at the surface than BSCF73, suggesting that the BSCCFM sample holds more surface oxygen vacancies at room temperature 33 . Furthermore, the oxygen non-stoichiometry of BSCCFM is consistently higher than that of BSCF73 at 200–700 °C (Fig. 6e ). These results suggest that the superior ORR catalytic activity of BSCCFM stems from its higher oxygen vacancy concentration 34 . In addition to oxygen surface kinetics, electrical conductivity, as a determinant of charge transfer, also plays a key role in the ORR process. BSCCFM displays a conductivity of about 120 S cm−1 in air between 550 °C and 900 °C, nearly four times that of BSCF73 (Fig. 6f ). Similar results were obtained by comparing SCCN with SrCoO3 via EIS techniques, XPS analysis, thermogravimetric measurements and conductivity tests (Supplementary Figs. 15 – 18 and Supplementary Table 4 ).
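The m-values above follow from the power-law relation ASR ∝ P_O2^(−m): on log–log axes, the slope of ASR versus P_O2 is −m. A minimal fitting sketch with illustrative resistances (chosen so that m comes out near the reported 0.39):

```python
import numpy as np

# Oxygen partial pressures (atm) and illustrative ASR values (ohm cm^2);
# ASR ~ P_O2^(-m), so log10(ASR) vs log10(P_O2) is linear with slope -m.
p_o2 = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
asr = np.array([0.60, 0.46, 0.35, 0.30, 0.27, 0.24])

slope, _ = np.polyfit(np.log10(p_o2), np.log10(asr), 1)
print(f"Fitted reaction order m = {-slope:.2f}")
```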
The doping of low-ISA A-site cations (Cs+) and high-ISA B-site cations (Nb5+ or Mo6+) into host perovskite oxides increases ORR activity regardless of whether the two samples are in the same or different phases. As shown in Fig. 6g , the ISA value of Cs+ is close to zero, meaning that it is almost unable to accept electron pairs, similar to an A-site deficiency. In addition, Cs+ and Mo6+ produce a polarized distribution of ISA, which can shift electron pairs, thereby changing the coordination environments of the B-site active sites and creating more oxygen vacancies (Fig. 6h). DFT calculation of electronic structure evolution Density functional theory calculations were conducted to verify whether ISA regulation at the A- and B-sites is associated with optimized electronic configurations. Bulk models of pristine Ba0.5Sr0.5Co0.75Fe0.25O3 (BSCF-m) and Ba0.375Sr0.375Cs0.25Co0.625Fe0.25Mo0.125O3 (BSCCFM-m) were set up to represent BSCF73 and BSCCFM without an oxygen vacancy, respectively (Fig. 7a and Supplementary Fig. 19a ). The charge density difference was analysed on the basis of the three-dimensional crystal models to investigate the charge displacement (Fig. 7b and Supplementary Fig. 19b ). As expected, the electron cloud density around molybdenum is higher than that around cobalt and iron in BSCCFM-m. Under the effect of caesium and molybdenum, cobalt and iron in BSCCFM-m exhibit much lower Bader charges owing to their variable valence states, whereas barium and strontium remain stable, indicating that the increased performance is mainly caused by the changed coordination environment of the B-site active sites (Fig. 7c ). Fig. 7: DFT calculation of electronic structure evolution. a , b , The model ( a ) and differential charge density ( b ) of BSCCFM-m. The yellow region represents charge accumulation, whereas the blue region represents charge reduction. c , The net charge of barium, strontium, cobalt and iron in BSCF-m and BSCCFM-m. d , e , The oxygen vacancy formation energy at the O1 site ( d ) and the oxygen migration barrier from site O1 to O2 ( e ) in the models of BSCF-m and BSCCFM-m. The solid lines in e represent the spline fitting for the relative energy of oxygen migration. f , Partial density of states of oxygen 2p and cobalt 3d orbitals for BSCF-m and BSCCFM-m. The reduced electron density displayed in the interatomic region represents a weaker bond, with a lower energy for oxygen removal 35 . To verify this, oxygen vacancy formation energy ( ΔE vac ) values were calculated by removing one oxygen atom from the stoichiometric bulk. We analysed different sites in the configurations of the two models, including seven oxygen sites in BSCF-m and eleven oxygen sites in BSCCFM-m (Supplementary Fig. 20 ). The calculated ΔE vac values are listed in Supplementary Table 5 ; the lowest ΔE vac values in BSCF-m and BSCCFM-m are 0.74 and 0.26 eV, respectively, indicating that the oxygen vacancy concentration in BSCCFM-m should be higher, consistent with the experimental results (Fig. 7d ). Remarkably, the lowest ΔE vac in BSCCFM-m is located at the O1 site, which is close to molybdenum and caesium, suggesting that the regulation of ISA favours the formation of oxygen vacancies. Oxygen migration is also a critical factor for the kinetics of the ORR process. Consequently, the migration barriers from site O1 to O2 in BSCF-m and BSCCFM-m were calculated (Fig. 7e ).
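To gauge what the 0.74 eV versus 0.26 eV formation energies imply, a dilute-limit Boltzmann estimate of the relative equilibrium vacancy concentrations can be sketched as follows (this neglects configurational and vibrational entropy differences, so it is an order-of-magnitude illustration only):

```python
import math

k_B = 8.617e-5          # Boltzmann constant, eV/K
T = 550.0 + 273.15      # a typical operating temperature, K

dE_bscf, dE_bsccfm = 0.74, 0.26  # lowest Delta E_vac from the DFT results, eV

# In the dilute limit n_vac ~ exp(-dE/kT), so the concentration ratio is:
ratio = math.exp(-(dE_bsccfm - dE_bscf) / (k_B * T))
print(f"n_vac(BSCCFM-m) / n_vac(BSCF-m) ~ {ratio:.0f}x at 550 C")
```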
BSCCFM-m exhibits a much lower migration energy than BSCF-m, suggesting that oxygen migration is more favourable in BSCCFM-m. Partial density of states calculations of the oxygen 2p and cobalt 3d orbitals were further conducted to examine the electronic band structures of BSCF-m and BSCCFM-m (Fig. 7f ). Under ISA regulation, the orbital states of BSCCFM-m shift towards the Fermi level ( E F ), narrowing the bandgap from 0.49 to 0.16 eV, which facilitates the excitation of charge carriers into the conduction band and thus improves the conductivity 36 . Discussion of secondary descriptors In the machine-learning training process, the B-site host element candidates were selected according to the intrinsic properties of transition-metal elements, as these also influence the performance of perovskite oxides for the catalytic ORR. We therefore conducted a systematic investigation of the effect of B-site elements on catalytic performance by synthesizing a series of La0.8Sr0.2MO3 (M = Cr, Mn, Co, Fe, Ni) samples for comparison (Supplementary Fig. 21a ). The superior performance of the cobalt-, iron- and nickel-based perovskite oxides compared with the chromium- and manganese-based oxides is consistent with the general observation in the literature that high-performing electrodes are usually based on cobalt, iron or nickel. Furthermore, by adjusting the ISA of existing perovskites via a doping strategy, the properties of the B-site transition-metal elements in La1–xSrxCoO3 can be tailored, leading to a further performance enhancement (Supplementary Fig. 22 ). However, the ISA descriptor describes ORR activities mainly on the basis of the materials' compositions. Although the descriptor can capture activity trends across a wide range of material compositions, it may not be able to distinguish activities among different structures/phases at a fixed composition (such as La0.2Sr0.8CoO3). In addition, it is hard to distinguish the activity differences within the La0.8Sr0.2MO3 series of perovskite materials, which have similar ISA but noticeable differences in ASR. This suggests that a secondary descriptor is necessary and that the appropriate secondary descriptor should be selected and determined separately for each situation. In the literature, the tolerance factor is often used to correlate with the lattice symmetry of perovskites, and most high-performance perovskite-type cathode materials show a cubic structure, as shown in Supplementary Table 1 . For example, by adjusting the tolerance factor towards unity, SrCoO3-based materials transform from hexagonal to cubic structures with enhanced electrocatalytic activity (Supplementary Fig. 23 ). We may therefore use the tolerance factor as a secondary descriptor, together with the primary ISA descriptor, for more precise prediction of high-performance perovskite-type materials for the ORR in SOFCs. In addition, a pioneering and influential work has proposed multiple descriptors of oxygen-evolution activity at room temperature for oxides 37 . Here, the differing activities of the LSM series correlate well with the e g occupancy 38 of the transition-metal cations, and a similar volcano-type trend is also observed, as shown in Supplementary Fig. 21b . Thus, e g occupancy may serve as a potential secondary descriptor in this case. It should be noted that sufficient data are still lacking in the literature, and the precision of machine-learning-based prediction is closely related to the size of the dataset.
As the dataset on cathode materials in the literature grows, more accurate predictions are expected. Conclusions Overall, a machine-learning technique is successfully applied to the development of a highly active fuel cell cathode. Compared with previous high-throughput DFT methods, our approach is able to predict material properties after training based solely on molecular formulas, without building molecular models, and thus has the technological advantages of low cost and high development efficiency. However, as a data-driven approach, the quantity and quality of the data directly affect the machine-learning accuracy. The current data are still insufficient for activity prediction towards lower temperatures, and it is therefore necessary to accelerate the construction of material databases for future machine-learning development. Note that the predicted samples may not be able to form perovskite structures, or the defects inside the lattice may associate. All of these situations may change the intrinsic activity, and other important factors such as durability/stability cannot be predicted by the current descriptor. Related experimental studies are still needed to ensure the applicability of the predicted materials. To illuminate the role of the ISA descriptor in the ORR process, DFT calculations were conducted to obtain electronic structure information. Density functional theory calculations can therefore provide material properties that are not available from machine learning, and the two approaches are complementary. Based on the above integrated approach, we realize the rapid and efficient discovery of highly active oxygen reduction electrodes by introducing ISA as an effective ORR descriptor at elevated temperatures and proposing an experimentally validated, machine-learning-driven approach. As such, four oxides were successfully predicted and confirmed to have superior electro-activity. Experimental characterization and DFT calculations demonstrate that the polarization in ISA values is associated with an optimized electronic structure with reduced V O formation energy and migration barrier, providing a mechanistic rationale for oxygen reduction electrode design. Methods Data acquisition Oxygen reduction reaction activities of different ABO3-type perovskite oxides from the literature, and from our previous work under the same test conditions, were collected as an initial dataset. We discarded materials with extremely large ASR values (>5 Ω cm2) to avoid their potential effect on the stability of the models, and finally obtained 85 reliable materials with 162 ASR values at 700 °C and 650 °C. Based on experience, nine ionic descriptors, including AISA, BISA, AIEN, BIEN, R A , R B , AIE, BIE and t , were chosen as material features to link with the ORR activity via machine learning; t is defined as \(t = \frac{R_{\mathrm{A}} + R_{\mathrm{O}}}{\sqrt{2}\,(R_{\mathrm{B}} + R_{\mathrm{O}})}\), where R O is the ionic radius of the oxygen ion. Among them, AISA and BISA, which reflect the ability to accept a Lewis base's electron pair, were introduced. The descriptors of A-site cations with twelvefold coordination, and of B-site cations with sixfold coordination, are listed in Supplementary Tables 6 and 7 , respectively; A- and B-site parameters were calculated as a simple weighted average of the ionic properties. Note that constant valence values were used for some high-valence transition metal cations such as V5+, Nb5+ and Mo6+.
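The weighted-average construction of the site parameters, and the tolerance factor t defined above, can be reproduced in a few lines. The sketch below uses Shannon radii for a BSCF73-like composition with nominal 3+ B-site valences; the exact radii and oxidation states used in the paper are in Supplementary Tables 6 and 7, so treat these numbers as illustrative:

```python
from math import sqrt

# (site fraction, Shannon ionic radius in Angstrom) - illustrative values
# for a Ba0.5Sr0.5Co0.7Fe0.3O3-like composition with nominal 3+ B cations.
A_site = {"Ba2+ (XII)": (0.5, 1.61), "Sr2+ (XII)": (0.5, 1.44)}
B_site = {"Co3+ (VI)": (0.7, 0.61), "Fe3+ (VI)": (0.3, 0.645)}
R_O = 1.40  # O2- ionic radius

R_A = sum(f * r for f, r in A_site.values())  # weighted-average A radius
R_B = sum(f * r for f, r in B_site.values())  # weighted-average B radius
t = (R_A + R_O) / (sqrt(2) * (R_B + R_O))
print(f"R_A = {R_A:.3f} A, R_B = {R_B:.3f} A, t = {t:.3f}")
```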
Furthermore, the valence states of other transition metal elements such as iron, cobalt and nickel were calculated on the basis of charge balance, because different combinations of a given transition metal element with different oxidation states—but a fixed average oxidation state—result in similar descriptor values (see Supplementary Note 4 for detailed calculations). The nine ionic descriptor values for each perovskite are listed in Supplementary Data 2 . Data processing for machine learning Each perovskite was represented as a 16-dimensional feature vector, and all dimensions were normalized to zero mean and unit variance to diminish the influence of different feature scales. Instead of directly modelling the resistance \(\mathrm{ASR} \in [0, +\infty)\), we predicted \(\log_{10}(\mathrm{ASR}) \in (-\infty, +\infty)\) to avoid negative ASR outputs. Machine-learning model training The whole dataset was split into a training set and a test set. The training set (70 examples) was used to fit the model parameters, whereas the test set (13 examples) was used to evaluate the model's generalizability to unseen data. We found that the performance of the machine-learning models has a certain variance across different splits, as our dataset is small. We therefore considered the balance of the feature and ASR distributions when creating the splits (see Supplementary Note 1 for more discussion). We trained eight different machine-learning models on the training set and used the MSE as the evaluation metric. The MSE is defined as the average of \(|y_i - \hat{y}_i|^2\), where y i and \(\hat{y}_i\) represent the real value and the predicted value of the i th example, respectively. More information about the regression models can be found in the Supplementary Methods . Hyperparameter selection of regression models We tuned the hyperparameters of the different machine-learning models separately via a grid search strategy. Specifically, for each hyperparameter dimension, we first selected a small set of candidates empirically and then obtained all hyperparameter combinations from a Cartesian product. We compared the effectiveness of different setups using leave-one-out cross-validation, which not only makes full use of the collected data but also eliminates the variance of the validation process (see Supplementary Note 1 for more discussion). The detailed hyperparameter search space and the corresponding validation results can be found in Supplementary Note 5 . Feature importance analysis We used sensitivity analysis to compare the importance of the different descriptors quantitatively. Given a trained machine-learning model and a descriptor to be analysed, we constructed a corrupted dataset in which the dimension corresponding to that descriptor is masked. We then computed the model's performance (MSE) decline on such data, which can be regarded as a proxy for descriptor importance. We performed this analysis on our best three ANNs (Supplementary Table 8 ) owing to their good performance. The results are substantially consistent: the ISA descriptor (especially at the B-site) has the strongest influence on the ASR. Synthesis of materials The perovskite samples were synthesized via the ethylenediaminetetraacetic acid (EDTA)–citrate combustion method 39 . Nitrate precursors (Aladdin Chemical Reagent) were dissolved in deionized water, followed by the addition of citric acid, EDTA and ammonia.
The mole ratio of total metal ions, citric acid and EDTA was 1:1.5:1, and the pH of the mixture was adjusted to 7 using ammonia. The gel was then dried at 180 °C to obtain black precursors, followed by calcination at 550 °C for 5 h in air to burn off carbon. Finally, the remaining mixtures were pressed into pellets, covered with the same mixture and calcined at 1,100 °C for 10 h to obtain the sample. Fabrication of symmetrical and single cells The cathode slurry was first made by ball milling the samples with isopropanol, ethanediol and glycerol. Symmetrical cells were prepared by spraying cathode slurry on each side of an SDC substrate, followed by sintering at 900 °C for 2 h. The NiO/SDC anode-supported half-cells were fabricated by a dual dry-pressing method and high-temperature sintering. Specifically, the anode powders were prepared by ball milling NiO (Aladdin Chemical Reagent, China), SDC and corn starch (mass ratio of 7:3:1). The SDC electrolyte powders were evenly distributed on the surface of the anode substrates, followed by sintering at 1,350 °C for 10 h in air. Finally, cathode slurry was sprayed on the centre of the SDC layer and then sintered at 900 °C for 2 h in air. Basic characterizations X-ray diffraction (Bruker D8 Advance) was conducted to investigate the phase structure of the powders. Structural refinements of the powders were performed with the TOPAS-4.2 software 40 . The morphology and element distribution of the powders were examined using scanning transmission electron microscopy–energy-dispersive X-ray spectroscopy (FEI Talos F200X). The cross-sectional morphologies of the cells were observed via scanning electron microscopy (TESCAN MIRA4). XPS (Thermo Scientific K-Alpha) measurements were conducted to examine the surface chemistry of the elements. Thermogravimetric analysis (TGA5500) experiments were performed over a temperature range of 200 °C to 700 °C under flowing air. The electrical conductivities of the electrode materials were tested in four-probe direct-current mode under flowing air (100 ml min−1) over a temperature range of 400–900 °C. A dense bar was prepared by dry pressing the sample and sintering at 1,100 °C for 10 h in air to determine the electrical conductivity. Electrochemical testing The ASR of the cathode was measured in an SDC-based symmetrical cell under flowing air (100 ml min−1) on Solartron (1287 + 1260A) electrochemical workstations from 100 kHz to 0.1 Hz, with 30 mV amplitude under open circuit. The single cell's electrochemical performance was tested on a Keithley 2420 test station; H2 with a flow rate of 80 ml min–1 was fed into the anode chamber, and ambient air was used as the cathode atmosphere. Theoretical calculations Density functional theory calculations implementing the generalized gradient approximation with the Perdew–Burke–Ernzerhof 41 formulation were performed using the Vienna ab initio simulation package 42 , 43 . We used the cubic perovskite structure to calculate the oxygen vacancy formation energy and migration energy barrier. For the BSCCFM-m structures, the caesium and molybdenum atoms were doped into the perovskite BSCF-m structure via substitution. To achieve the target doping concentrations while balancing the rationality and cost of the calculations, a 5 × 5 × 2 supercell was constructed with lattice parameters a = b = 15.571 Å, c = 3.892 Å, giving a total of 138 atoms in the periodic structures.
We determined the stable perovskite structures before and after doping (BSCF-m and BSCCFM-m) using first-principles structural optimization. The projector augmented-wave 44 , 45 method and a plane-wave kinetic energy cutoff of 450 eV were applied. Gaussian smearing of 0.05 eV was employed to determine the partial occupancies. The self-consistent energy convergence criterion was 10−5 eV, and geometry optimization was performed with a force tolerance of 0.04 eV Å–1. Grimme's DFT-D3 46 scheme was adopted to correct for dispersion interactions. A perpendicular vacuum spacing of 15 Å was added to the structure plane. The Brillouin zone was sampled using a 2 × 2 × 1 Monkhorst–Pack k-point mesh. Finally, Δ E vac was calculated as follows: $${\Delta}E_{\mathrm{vac}} = E({\mathrm{total}}) - E({\mathrm{A}}) + \tfrac{1}{2}E({\mathrm{O}}_2)$$ where E(total) is the energy of the structure containing the oxygen vacancy, E(A) is the energy of the pristine (stoichiometric) structure, and E(O2) is the energy of an O2 molecule. The migration barrier was calculated using the nudged elastic band method 46 . Data availability All relevant data are included in the paper and its Supplementary Information . Source Data are provided with this paper. Code availability The Python scripts for machine-learning model training and material screening are available at .
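Since the repository link did not survive extraction, here is a minimal sketch of the training pipeline described in Methods (standardized 16-dimensional descriptors, a log10(ASR) target, an ANN regressor and leave-one-out cross-validation), assuming scikit-learn and synthetic placeholder data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: 16-dimensional ionic-descriptor vectors; y: ASR values (ohm cm^2).
# Random placeholders stand in for the 85-material dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 16))
y = 10 ** rng.normal(-1.0, 0.5, size=70)

# Standardize features and regress log10(ASR), which can never yield a
# negative resistance when transformed back.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
)
scores = cross_val_score(model, X, np.log10(y), cv=LeaveOneOut(),
                         scoring="neg_mean_squared_error")
print(f"Leave-one-out MSE on log10(ASR): {-scores.mean():.3f}")
```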
Ceramic fuel cells, also known as solid oxide fuel cells, are promising green electrochemical devices offering high energy efficiency, low emissions and fuel flexibility. The development of high-performance, durable cathode materials is key to efficient, long-lived ceramic fuel cells. However, previous cathode material development based on a trial-and-error approach is time-consuming, expensive and difficult to optimize. A new paper published in Nature Energy demonstrates an experimentally validated machine-learning-driven approach to accelerate the discovery of efficient oxygen reduction electrodes. Significantly, ionic Lewis acid strength (ISA) is introduced as an effective physical descriptor for the oxygen reduction reaction activity of perovskite oxides. Using an integrated approach combining machine learning, density functional theory (DFT) computation and experimental testing, the team successfully identified potential cathode materials for ceramic fuel cells from more than 6,000 possible material compositions. These new materials could enable ceramic fuel cells to achieve high performance and excellent durability. This research demonstrates a new strategy to facilitate ceramic fuel cell development for clean power generation and carbon neutrality.
10.1038/s41560-022-01098-3
Medicine
Blood of COVID patients could predict severity of illness from virus and lead to targeted treatments, study finds
Jesús F. Bermejo-Martin et al. Viral RNA load in plasma is associated with critical illness and a dysregulated host response in COVID-19, Critical Care (2020). DOI: 10.1186/s13054-020-03398-0 Journal information: Critical Care
http://dx.doi.org/10.1186/s13054-020-03398-0
https://medicalxpress.com/news/2020-12-blood-covid-patients-severity-illness.html
Abstract Background COVID-19 can present with respiratory and extrapulmonary disease. SARS-CoV-2 RNA is detected in respiratory samples but also in blood, stool and urine. Severe COVID-19 is characterized by a dysregulated host response to this virus. We studied whether viral RNAemia or viral RNA load in plasma is associated with severe COVID-19 and with this dysregulated response. Methods A total of 250 patients with COVID-19 were recruited (50 outpatients, 100 hospitalized ward patients and 100 critically ill). Viral RNA detection and quantification in plasma was performed using droplet digital PCR, targeting the N1 and N2 regions of the SARS-CoV-2 nucleoprotein gene. The association of SARS-CoV-2 RNAemia and viral RNA load in plasma with severity was evaluated by multivariate logistic regression. Correlations between viral RNA load and biomarkers evidencing dysregulation of the host response were evaluated by calculating Spearman correlation coefficients. Results The frequency of viral RNAemia was higher in the critically ill patients (78%) than in ward patients (27%) and outpatients (2%) ( p < 0.001). Critical patients had higher viral RNA loads in plasma than non-critically ill patients, with non-survivors showing the highest values. When outpatients and ward patients were compared, viral RNAemia did not show significant associations in the multivariate analysis. In contrast, when ward patients were compared with ICU patients, both viral RNAemia and viral RNA load in plasma were associated with critical illness (OR [CI 95%], p ): RNAemia (3.92 [1.183–12.968], 0.025); viral RNA load (N1) (1.962 [1.244–3.096], 0.004); viral RNA load (N2) (2.229 [1.382–3.595], 0.001). Viral RNA load in plasma correlated with higher levels of chemokines (CXCL10, CCL2), biomarkers indicative of a systemic inflammatory response (IL-6, CRP, ferritin), activation of NK cells (IL-15), endothelial dysfunction (VCAM-1, angiopoietin-2, ICAM-1), coagulation activation (D-dimer and INR), tissue damage (LDH, GPT), neutrophil response (neutrophil counts, myeloperoxidase, GM-CSF) and immunodepression (PD-L1, IL-10, lymphopenia and monocytopenia). Conclusions SARS-CoV-2 RNAemia and viral RNA load in plasma are associated with critical illness in COVID-19. Viral RNA load in plasma correlates with key signatures of dysregulated host responses, suggesting a major role of uncontrolled viral replication in the pathogenesis of this disease. Background With well over 43 million cases and more than a million deaths globally, coronavirus disease 2019 (COVID-19) has become the top economic and health priority worldwide [1]. Among hospitalized patients, around 10–20% are admitted to the intensive care unit (ICU), 3–10% require intubation and 2–5% die [2]. SARS-CoV-2 RNA is commonly detected in nasopharyngeal swabs; however, viral RNA can also be found in sputum, lung samples, peripheral blood, serum, stool samples and, to a limited extent, urine [3, 4, 5, 6]. While the lungs are most often affected, severe COVID-19 also induces inflammatory cell infiltration, haemorrhage and degeneration or necrosis in extra-pulmonary organs (spleen, lymph nodes, kidney, liver, central nervous system) [7, 8].
Patients with severe COVID-19 show signatures of a dysregulated response to infection, with immunological alterations involving moderate elevation of some cytokines and chemokines such as IL-6, IL-10 or CXCL10, deep lymphopenia with neutrophilia, systemic inflammation (elevation of C-reactive protein, ferritin), endothelial dysfunction, coagulation hyper-activation (D-dimers) and tissue damage (LDH) [9, 10, 11, 12, 13, 14]. Our hypothesis is that systemic distribution of the virus or viral components could be associated with the severity of COVID-19 and, in turn, with a number of parameters indicating the presence of a dysregulated response to the infection. While the SARS-CoV-2 virus has been reported to be difficult to culture from blood [4], PCR-based methods are able to detect and quantify genomic material of the virus in serum or plasma, representing a useful approach to evaluate the impact of the extrapulmonary dissemination of viral material on disease severity and on the host response to the infection [5, 15]. An excellent approach for achieving absolute quantification of viral RNA load is droplet digital PCR (ddPCR). ddPCR is a next-generation PCR method that offers absolute quantification with no need for a standard curve, and greater precision and reproducibility than currently available qRT-PCR methods, as reviewed elsewhere [16]. Here we employed ddPCR to detect and quantify viral RNA in plasma from COVID-19 patients discharged from the emergency room with mild disease, patients admitted to the ward with moderate disease, and critically ill patients. Our objectives in this study were: (1) to evaluate whether there is an association between SARS-CoV-2 RNAemia and viral RNA load and moderate disease; (2) to evaluate whether there is an association between SARS-CoV-2 RNAemia and viral RNA load and critical illness; and (3) to evaluate the correlations between SARS-CoV-2 RNA load in plasma and parameters of the dysregulated host response against SARS-CoV-2. Methods Study design A total of 250 adult patients with a positive nasopharyngeal swab polymerase chain reaction (PCR) test for SARS-CoV-2 performed at the participating hospitals were recruited during the first pandemic wave in Spain, from 16 March to 15 April 2020. The patients recruited fell into three categories. The first corresponded to patients examined at an emergency room and discharged within the first 24 h (outpatients group, n = 50). The second group comprised patients admitted to pneumology, infectious diseases or internal medicine wards (wards group, n = 100). Patients who required critical care or died during hospitalization were excluded from this group, in order to have a group of clearly moderate severity. The third group corresponded to patients admitted to the ICU (n = 100). Patients recruited by each participating hospital are detailed in Additional file 1 . Twenty healthy blood donors were included as controls. These controls were recruited during the pandemic, in parallel with the SARS-CoV-2-infected patients, and were negative for SARS-CoV-2 IgG. This study was registered at ClinicalTrials.gov with the identifier NCT04457505. Blood samples Plasma from blood collected in EDTA tubes was obtained from the three groups of patients in the first 24 h following admission to the emergency room, the ward or the ICU, at a median collection day since disease onset of 7, 8 and 10, respectively, and also from 20 blood donors (10 men and 10 women).
Biomarker profiling A panel of biomarkers was profiled using the Ella-SimplePlex™ immunoassay (San Jose, California, USA), informing on the following biological functions potentially altered in severe COVID-19, based on the available evidence on COVID-19 physiopathology [13, 17] and on our previous experience with emerging infections and sepsis [18, 19, 20, 21]: neutrophil degranulation: lipocalin-2/NGAL, myeloperoxidase; endothelial dysfunction: ICAM-1, VCAM-1/CD106, angiopoietin-2; T cell survival and function: IL-7, granzyme B; immunosuppression: IL-1ra, B7-H1/PD-L1, IL-10; chemotaxis: CXCL10/IP10, CCL2; Th1 response: interleukin 1 beta, IFN-γ, IL-12p70, IL-15, TNF-α, IL-2; Th2 response: IL-4, IL-10; Th17 response: IL-6, IL-17A; granulocyte mobilization/activation: G-CSF, GM-CSF; coagulation activation: D-dimer; acute phase reactants: ferritin (C-reactive protein and LDH were profiled at each participating hospital by their central laboratories). Detection and quantification of SARS-CoV-2 RNA in plasma RNA was extracted from 100 µl of plasma using an automated system, eMAG® from bioMérieux® (Marcy l'Etoile, France). Detection and quantification of SARS-CoV-2 RNA was performed in 5 µl of the eluted solution using the Bio-Rad SARS-CoV-2 ddPCR kit according to the manufacturer's specifications on a QX-200 droplet digital PCR platform from the same provider. This PCR targets the N1 and N2 regions of the viral nucleoprotein gene and also the human ribonuclease (RNase) P gene, using the primer and probe sets detailed in the CDC 2019 Novel Coronavirus (2019-nCoV) Real-Time RT-PCR Diagnostic Panel [22]. Samples were considered positive for SARS-CoV-2 when N1 and/or N2 presented values ≥ 0.1 copies/µl in a given reaction. The RNase P gene was considered positive when it presented values ≥ 0.2 copies/µl, following the manufacturer's indications. The test was only considered valid when the RNase P gene was positive. Final results were given in copies of cDNA per mL of plasma. IgG specific for the nucleocapsid protein of SARS-CoV-2 was detected in 150 µl of plasma using the Abbott Architect SARS-CoV-2 IgG Assay (Illinois, USA). Viral RNA and SARS-CoV-2 IgG were profiled in the same plasma sample. Statistical analysis For the demographic and clinical characteristics of the patients, differences between groups were assessed using the chi-square test or Fisher's exact test, as appropriate, for categorical variables. Differences in continuous variables were assessed using the Kruskal–Wallis test with post hoc tests adjusting for multiple comparisons. Multivariate logistic regression analysis was employed to evaluate the association of viral RNAemia and viral RNA load in plasma with severity, in the comparisons [outpatients vs ward patients] and [ward patients vs ICU patients]. Variables showing significant differences between groups in each comparison in the Kruskal–Wallis test were then introduced into the multivariate analysis as adjusting variables.
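For orientation, the conversion from a ddPCR readout (copies per µl of reaction) to the reported copies of cDNA per mL of plasma can be sketched as below; the reaction and elution volumes are assumptions for illustration, as only the 100 µl plasma input and 5 µl template volume are stated above:

```python
# Back-calculate copies/mL of plasma from a ddPCR reaction concentration.
reaction_copies_per_ul = 0.8   # hypothetical ddPCR readout for N1
reaction_volume_ul = 20.0      # typical ddPCR reaction volume (assumed)
template_ul = 5.0              # eluate loaded per reaction (stated)
elution_ul = 50.0              # eMAG elution volume (assumed)
plasma_ul = 100.0              # plasma input to extraction (stated)

copies_in_eluate = (reaction_copies_per_ul * reaction_volume_ul
                    * (elution_ul / template_ul))
copies_per_ml = copies_in_eluate / (plasma_ul / 1000.0)
print(f"~{copies_per_ml:.0f} copies of cDNA per mL of plasma")
```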
The variables considered as potential adjusting variables were [Age (years)], [Sex (male)], [Alcoholism], [Smoker], [Drug abuse], [Cardiac disease], [Chronic vascular disease], [COPD], [Asthma], [Obesity], [Hypertension], [Dyslipidemia], [Chronic renal disease], [Chronic hepatic disease], [Neurological disease], [HIV], [Autoimmune disease], [Chronic inflammatory bowel disease], [Type 1 diabetes], [Type 2 diabetes], [Cancer], [Invasive mechanical ventilation], [Non-invasive mechanical ventilation], [SARS-CoV-2 IgG], [Temperature (°C)], [Systolic pressure (mmHg)], [Oxygen saturation (%)], [Bilateral pulmonary infiltrate], [Glucose (mg/dl)], [Creatinine (mg/dl)], [Na (mEq/L)], [K (mEq/L)], [Platelets (cells × 10³/µl)], [INR], [D-dimer (pg/ml)], [LDH (UI/L)], [GPT (UI/L)], [Ferritin (pg/ml)], [CRP (mg/dl)], [Haematocrit (%)], [Lymphocytes (cells/mm³)], [Neutrophils (cells/mm³)], [Monocytes (cells/mm³)]. Multivariate logistic regression analysis was performed using the "Enter" method; the backward stepwise selection method (likelihood ratio) was also employed in each case to confirm the association of viral RNAemia and viral RNA load in plasma with disease severity (pin < 0.05, pout < 0.10), without forcing entry of these variables into the model. Correlation analysis was performed using the Spearman test, applying the Bonferroni correction to the p value. Variables evaluated for correlation with viral RNA load were: [Temperature (°C)], [Systolic pressure (mmHg)], [Oxygen saturation (%)], [Lymphocytes (cells/mm³)], [Neutrophils (cells/mm³)], [Monocytes (cells/mm³)], [Creatinine (mg/dl)], [LDH (UI/L)], [GPT (UI/L)], [Platelets (cells × 10³/µl)], [INR], [CRP (mg/dl)], and all the biomarkers analysed by Ella-SimplePlex. Statistical analysis was performed with IBM SPSS® version 20 (IBM, Armonk, New York, USA). Results Clinical characteristics of the patients (Table 1) Patients requiring hospitalization (either general ward or ICU) were older than those discharged home from the ER. Critically ill patients were more frequently male than those in the other groups. The comorbidities obesity, hypertension, dyslipidemia and type 2 diabetes were more common in patients requiring hospitalization, with no significant differences in the comorbidity profile between critically ill and non-critically ill hospitalized patients. Fourteen per cent of the patients in clinical wards required non-invasive mechanical ventilation, while 96% of the patients admitted to the ICU required invasive mechanical ventilation. Critically ill patients had increased glucose levels, along with higher neutrophil concentrations in blood and increased levels of ferritin and C-reactive protein (denoting activation of the systemic inflammatory response). Increased levels of INR and D-dimers (reflecting activation of the coagulation system), as well as of LDH and GPT, whose levels rise as a consequence of tissue and liver damage, were also observed in critically ill patients. Patients admitted to the ICU also showed a lower haematocrit, pronounced lymphopenia and lower monocyte counts at admission. ICU patients stayed longer in the hospital than ward patients, with 49% having a fatal outcome.
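A compact sketch of the multivariate model described above (the "Enter" method, with viral RNA load plus adjusting covariates) using statsmodels on synthetic data; exponentiating the fitted coefficients gives odds ratios of the kind reported in the Results:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the ward-vs-ICU comparison: outcome icu (0/1)
# regressed on log10 viral RNA load plus two adjusting covariates.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "log10_rna": rng.normal(2.0, 1.0, n),
    "age": rng.normal(62.0, 12.0, n),
    "lymphocytes": rng.normal(1100.0, 400.0, n),
})
lin = (0.7 * df["log10_rna"] + 0.02 * df["age"]
       - 0.001 * df["lymphocytes"] - 1.5)
df["icu"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["log10_rna", "age", "lymphocytes"]])
fit = sm.Logit(df["icu"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios per unit of each covariate
```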
Viral RNAemia, viral RNA load in plasma and specific SARS-CoV-2 IgG in the three groups of patients As shown in Table 1, the frequency of detection of SARS-CoV-2 viral RNA (RNAemia) was significantly higher in the critically ill patients (78%) than in ward patients (27%) and outpatients (2%) ( p < 0.001). Similarly, the critically ill patients showed higher viral RNA loads in plasma than either ward patients or outpatients ( p < 0.001) (Table 1 and Fig. 1 ). Non-survivors showed the highest concentrations of viral RNA in plasma: viral RNA load (N1 region) in ICU non-survivors, 1587 copies/mL [10248]; in ICU survivors, 574 copies/mL [1872] (results expressed as median [interquartile range]); viral RNA load (N2 region) in ICU non-survivors, 2798 copies/mL [12012]; in ICU survivors, 523 copies/mL [1478]. Patients admitted to the wards showed a significantly higher frequency of viral RNAemia than outpatients, but viral RNA loads did not differ significantly from those of the latter group (Table 1 and Fig. 1 ). Critically ill patients also had a higher frequency of specific SARS-CoV-2 IgG responses than the other groups (70% in the ICU compared to 52% and 49% in the outpatient and ward groups, p < 0.05, Table 1 ). No significant differences were found between the outpatient group and those admitted to the ward. The prevalence of viral RNAemia did not differ between patients testing positive and those testing negative for SARS-CoV-2 IgG (43.8% and 40.4%, respectively, p = 0.586), who in addition showed no differences in viral RNA load (data not shown). Patients with viral RNAemia showed no differences in days since onset of symptoms compared to those without viral RNAemia (8.0 days [6.0]; 8.0 days [7.2], p = 0.965). In contrast, samples from patients with SARS-CoV-2 IgG were collected later after disease onset than those from patients without SARS-CoV-2 IgG (10.0 days [7.0]; 7.0 days [6.0], p = 0.003). Fig. 1 Viral RNA load in plasma, targeting the N1 region (left) and the N2 region (right), in the three groups of patients. Results are provided as copies of cDNA per mL of plasma Multivariate analysis to evaluate the association of viral RNAemia and viral RNA load in plasma with moderate disease and critical illness While the proportion of patients with viral RNAemia was higher in the wards group than in the outpatients group (Table 1 ), the multivariate analysis did not show a significant association between the presence of viral RNAemia and being hospitalized in the ward with either of the two methods employed (Additional files 2 and 3 ). In contrast, when the ward group was compared with critically ill patients, a significant direct association was found between viral RNAemia and viral RNA load and critical illness, using the "Enter" method (Table 2 ) and also the backward stepwise selection method (Additional file 4 ). Table 2 Multivariate logistic regression analysis comparing ward patients against critically ill patients (Enter method) Correlations between viral RNA load in plasma and biological responses to SARS-CoV-2 infection Viral RNA load in plasma (targeting either the N1 or the N2 region) showed the strongest direct correlations with plasma levels of CXCL10, LDH, IL-10, IL-6, IL-15, myeloperoxidase and CCL2 (MCP-1), and inverse correlations with lymphocytes, monocytes and O2 saturation (Fig. 2 ).
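The heat map in Fig. 2 is built from pairwise Spearman coefficients with Bonferroni-adjusted p values, which can be sketched as follows (synthetic arrays stand in for the measured biomarker panel):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
rna_load = rng.lognormal(5.0, 2.0, size=150)  # synthetic copies/mL values

panel = ["CXCL10", "IL-6", "IL-15", "LDH"]
n_tests = len(panel)
for name in panel:
    # Loosely correlated synthetic biomarker levels for illustration.
    values = rna_load * rng.lognormal(0.0, 1.0, size=150)
    rho, p = spearmanr(rna_load, values)
    p_adj = min(p * n_tests, 1.0)  # Bonferroni correction
    print(f"{name}: rho = {rho:.2f}, Bonferroni-adjusted p = {p_adj:.3g}")
```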
These were the parameters whose levels varied the most in critically ill patients compared with the ward and outpatient groups (Figs. 3 , 4 and Additional file 5 ). CXCL10 was the most accurate identifier of viral RNAemia in plasma (area under the curve (AUC) [CI 95%], p: 0.85 [0.80–0.89], < 0.001), and IL-15 was the cytokine that most accurately differentiated clinical ward patients from ICU patients (AUC: 0.82 [0.76–0.88], < 0.001). Plasma viral RNA load also showed significant direct correlations with levels of VCAM-1, PD-L1, GM-CSF, G-CSF, neutrophil counts, IL-1ra, CRP, INR, D-dimer, TNF-α, angiopoietin-2, GPT, ICAM-1, IL-7 and ferritin (Fig. 2 ), with most of these mediators showing the largest variations in the critically ill patients (Figs. 3 , 4 and Additional file 5 ). Fig. 2 Heat map representing the Spearman correlation coefficients between viral RNA load in plasma targeting the N1 and the N2 regions and representative indicators of a dysregulated host response Fig. 3 Levels of laboratory parameters indicating a dysregulated host response across groups. † indicates a significant difference versus the healthy controls, and the bars indicate significant differences between the other groups Fig. 4 Levels of laboratory parameters indicating a dysregulated host response across groups. † indicates a significant difference versus the healthy controls, and the bars indicate significant differences between the other groups Discussion Our study demonstrates that the presence of SARS-CoV-2 RNA in plasma is associated with critical illness in COVID-19 patients, with the strength of the association being highest in those patients with the highest viral RNA loads. This association was independent of other factors also related to disease severity. Moreover, those critically ill patients who died presented higher viral RNA loads in plasma than those who survived. SARS-CoV-2 viral RNA was detected in the plasma of the vast majority of COVID-19 patients admitted to the ICU (78%). As far as we know, our study is the largest to date using ddPCR to quantify SARS-CoV-2 RNA load in plasma from COVID-19 patients, and the only one with a multicentric design. Our results are in consonance with those of Veyer et al., who, in a pilot study using this technology, found higher viral RNA loads and a prevalence of RNA viremia of 88% in twenty-six critically ill COVID-19 patients [16]. The results are also in agreement with those of Hagman et al., who, using standard RT-PCR technology, found that the presence of SARS-CoV-2 RNA in serum at hospital admission was associated with a seven-fold increased risk of critical disease and an eight-fold increased risk of death in a cohort of 167 patients hospitalized for COVID-19 [23]. Although our study did not determine whether the presence of viral RNA in plasma reflects the presence of live virus in peripheral blood, the association found between the presence and concentration of viral RNA in plasma and critical illness suggests that viral replication is more robust in severe COVID-19, and/or that critically ill patients with this disease are not able to control viral replication. This notion is further supported by the correlations found in our study between viral RNA load in plasma and hypercytokinemia involving CXCL10, IL-10, CCL2, IL-6 and IL-15, with the levels of these cytokines being highest in patients with critical illness.
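An AUC such as the 0.85 reported for CXCL10 is computed by ranking patients by biomarker level against the binary RNAemia label; a sketch with synthetic data, assuming scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
rnaemia = rng.integers(0, 2, size=200)            # 0/1 RNAemia label
cxcl10 = rng.lognormal(3.0 + 1.2 * rnaemia, 0.8)  # higher when RNAemic

print(f"AUC for CXCL10 vs RNAemia: {roc_auc_score(rnaemia, cxcl10):.2f}")
```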
A correlation between viral RNA load and higher levels of cytokines has also been described in severe infections caused by H5N1 and pandemic H1N1 influenza strains [24, 25]. Active viral replication stimulates the secretion of cytokines through the recognition of viral RNA by endosomal receptors such as Toll-like receptor 7 (TLR7) in human plasmacytoid dendritic cells and B cells, or TLR8 in myeloid cells [26]. While the elevation of CXCL10, IL-10, CCL2 and IL-6 has been extensively documented in severe COVID-19 [9, 10], our work demonstrates a clear correlation between these cytokines and plasma viral load. Furthermore, we report for the first time a major role of IL-15 in severe COVID-19. High levels of IL-15 in critically ill patients with high SARS-CoV-2 RNA loads in plasma could represent an attempt to stimulate natural killer cells to fight the virus [27]. We previously demonstrated that high levels of IL-15, along with IL-6, constituted a signature of critical illness in H1N1 pandemic influenza infection [20]. Viral RNA load correlated with higher levels of myeloperoxidase in plasma, which were highest in patients admitted to the ICU. Myeloperoxidase is a marker of neutrophil degranulation and a potent tissue-damage factor that has been proposed to play a role in the pathogenesis of ARDS secondary to influenza by mediating claudin alteration in endothelial tight junctions, eventually leading to protein leakage and viral spread [28]. In this regard, the correlation found between viral RNA load in plasma and higher levels of LDH and GPT could suggest a direct or indirect role of viral replication in mediating tissue destruction in COVID-19. Interesting, but less robust, direct correlations were found between viral RNA load in plasma and GM-CSF and neutrophil counts in blood, further reinforcing the role of neutrophil-mediated responses in the pathogenesis of severe COVID-19. The direct correlation with soluble PD-L1 is also relevant, since this is the ligand of the inhibitory co-receptor PD-1 on T cells, whose activation induces anergy of T lymphocytes [29]. This finding reinforces the potential role of immune checkpoint inhibitors in severe COVID-19 [30]. In turn, the association found between viral RNA load in plasma and three mediators of endothelial dysfunction (VCAM-1, angiopoietin-2 and ICAM-1), and with coagulation activation markers (D-dimers and INR prolongation), suggests a potential virally linked mechanism in the pathogenesis of endotheliitis and thrombosis in COVID-19 [7]. Finally, the correlation with the acute phase reactants CRP and ferritin suggests a connection between the shedding of viral genomic material into the blood and the induction of the systemic inflammatory response observed in patients needing critical care. The strongest inverse correlations found in our study were between SARS-CoV-2 RNA load in plasma and lymphocyte and monocyte counts in peripheral blood, for which critically ill patients showed the lowest values. Active viral replication could be a precipitating event in the pathogenesis of lymphopenia and monocytopenia in severe COVID-19 patients [11, 31], by mediating direct cytopathic actions or by stimulating the migration of these cells to the extravascular space to reach the infected tissues [32]. A limitation of our work is its observational nature, which precludes inferring causality.
Nonetheless, the observed associations could serve as hypothesis generators, leading to the development of animal models to confirm the potential link between SARS-CoV-2 replication and the dysregulated host responses observed in severe COVID-19. Conclusion The presence of SARS-CoV-2 RNA in plasma is associated with critical illness in patients with COVID-19. The strength of this association increases with viral RNA load in plasma, which in turn correlates with key signatures of a dysregulated host response in COVID-19 (Fig. 5 ). Our findings suggest a major role of uncontrolled viral replication in the pathogenesis of this disease. Assessment of viral RNAemia and viral RNA load in plasma could be useful for the early detection of patients at risk of clinical deterioration, for assessing the response to treatment and for predicting disease outcome. Fig. 5 Integrative model depicting the correlations found between the indicators of a dysregulated host response and SARS-CoV-2 RNA load in plasma Availability of data and materials The datasets generated and/or analysed during the current study are not publicly available, since they are still being prepared for publication by the authors, but are available from the corresponding author on reasonable request. Abbreviations SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2; LDH: lactate dehydrogenase; G-CSF: granulocyte colony-stimulating factor; TLR: Toll-like receptor
An international team of researchers led by Dalhousie immunologists and critical care specialists in Spain has found key biomarkers in the plasma of COVID-19 patients that will help predict the severity of illness and could lead to new treatments for the virus. The findings, published Tuesday in the journal Critical Care, involved 250 patients with COVID-19 whose plasma was tested in Spain for the presence of ribonucleic acid, or RNA, the virus's genetic blueprint. Dr. David Kelvin, a professor and Canada Research Chair at Dalhousie's Department of Microbiology and Immunology, collaborated with critical care scientists Dr. Jesus Bermejo-Martin of the Institute of Biomedical Investigation and Dr. Antoni Torres in Spain. More than 30 research hospitals in the country were involved in assessing the viral loads in the plasma of three groups of patients with varying degrees of illness during the first pandemic wave in Spain, from March 16 to April 15, 2020. High levels of RNA The team found viral RNA in the plasma of 78 percent of critically ill patients, who had higher amounts of viral RNA than those with mild cases. Those who died had the highest concentrations of viral RNA in their plasma, leading the researchers to conclude that the presence of the RNA in a patient's blood is linked to critical illness. Dr. Kelvin says the research could help in quickly identifying patients at crowded hospital emergency rooms who need immediate and intensive care, and those who may be able to go home. "We now have an extremely reliable indicator for identifying severe COVID-19 patients who require critical care and should be admitted to the ICU, which will help intensive care unit doctors prioritize severely ill patients," says Dr. Kelvin. Dr. Kelvin says he and his colleagues are working with a large pharmaceutical company to develop a rapid test that could produce results in 15 minutes and indicate whether a positive patient has the viral RNA and what level of care they may need. Such a device could be a valuable asset as confirmed case counts and hospitalizations continue to surge, particularly in the U.S. Surging hospitalizations In Canada, Chief Public Health Officer Dr. Theresa Tam said Tuesday that the number of people reporting severe illness was rising steadily. She added that over the last week, "there were an average of 3,020 individuals with COVID-19 being treated in Canadian hospitals, including over 600 in critical care." Several provinces, including Ontario and Quebec, have seen sharp increases in hospitalizations in recent weeks. "The importance of this study is that it allows identification of those patients most in need of ICU admission—all of it done by a simple blood test for presence of viral RNA," Dr. Kelvin says. Importantly as well, the team also discovered that the presence of COVID-19 viral RNA was directly linked to a dysfunctional immune response. That may make severe patients unable to fend off the COVID-19 infection, due in part to elevated levels of certain proteins. The identification of these patients will also help identify those who could be treated with new therapeutics, such as antibody cocktails. New clues "We are also trying to figure out why viral components are in the blood and how this leads to immune dysfunction and severe disease. Several clues are now before us and we hope to develop novel therapeutics to aid in treating severe patients," he said.
The team used a method called droplet digital PCR to quantify the virus's genomic material in plasma in the 50 outpatients, 100 hospitalized ward patients and 100 who were critically ill in what is believed to be the largest study of its kind using such methodology. Patients requiring hospitalization were older than those who were discharged from the ER. Critically ill patients were more frequently male, while obesity, hypertension and type 2 diabetes were more commonly found in patients requiring hospitalization.
10.1186/s13054-020-03398-0
Biology
Research on world's largest tree group will help conservation and management of rain forests
David Burslem et al, Genomic insights into rapid speciation within the world's largest tree genus Syzygium, Nature Communications(2022). doi.org/10.1038/s41467-022-32637-x Journal information: Nature Communications
https://doi.org/10.1038/s41467-022-32637-x
https://phys.org/news/2022-09-world-largest-tree-group-forests.html
Abstract Species radiations, despite immense phenotypic variation, can be difficult to resolve phylogenetically when genetic change poorly matches the rapidity of diversification. Genomic potential furnished by palaeopolyploidy, and relative roles for adaptation, random drift and hybridisation in the apportionment of genetic variation, remain poorly understood factors. Here, we study these aspects in a model radiation, Syzygium , the most species-rich tree genus worldwide. Genomes of 182 distinct species and 58 unidentified taxa are compared against a chromosome-level reference genome of the sea apple, Syzygium grande . We show that while Syzygium shares an ancient genome doubling event with other Myrtales, little evidence exists for recent polyploidy events. Phylogenomics confirms that Syzygium originated in Australia-New Guinea and diversified in multiple migrations, eastward to the Pacific and westward to India and Africa, in bursts of speciation visible as poorly resolved branches on phylogenies. Furthermore, some sublineages demonstrate genomic clines that recapitulate cladogenetic events, suggesting that stepwise geographic speciation, a neutral process, has been important in Syzygium diversification. Introduction Species radiations—wherein perplexing amounts of diversity appear to have formed extremely rapidly—have featured prominently in the history of evolutionary theory 1 . Various underlying mechanisms for their formation have been proposed 2 , including adaptation 2 , non-adaptive processes 3 , 4 , hybridisation 5 , 6 , and polyploidy 7 , 8 , but the relative importance of these drivers remains incompletely understood. Species radiations on islands have been among the most prominently studied systems 9 , 10 , 11 , 12 . For example, the Malesian archipelago in the tropical Far East 13 , consisting of thousands of islands and including New Guinea and Borneo, the second and third largest islands in the world, is a biodiversity hotspot containing many radiations of plant and animal species. Among forest trees, local tree species richness across Southeast Asian forests is largely driven by a small number of highly species-rich genera 14 . The clove genus, Syzygium , is one of the most important of these genera, and therefore understanding diversification and its underlying drivers within Syzygium may help explain large-scale patterns of diversity in the Palaeotropics. However, Syzygium , like many other species radiations that hold immense morphological and ecological variation, has so far been difficult to resolve phylogenetically 15 , 16 , 17 , 18 , leading to the impression that evolutionary change can be a swift process that may not require substantial underlying genetic change 9 . Here, we employ genome-scale approaches to investigate speciation patterns and their potential drivers in the most species-rich tree genus worldwide, Syzygium 19 . Syzygium , which includes 1193 species recognised worldwide 20 , is a genus in the myrtle family (Myrtaceae). Syzygium is restricted to tropical and subtropical regions of the Old World, where it is distributed from Africa through to India, across Southeast Asia and extending to Hawaii in the Pacific Ocean, with the centre of species diversity in Indomalesia 20 . The type species of Syzygium is S. caryophyllatum , a poorly known, small to medium-sized tree endemic to southern India and Sri Lanka 21 . 
The best-known species in the genus is the clove tree, Syzygium aromaticum , from which flower buds are gathered, dried, and used as a spice, a preservative and in pharmacology 22 . In addition, Syzygium aqueum , S. cumini , S. jambos , S. malaccense and S. samarangense are widely cultivated in the tropics for their large edible fruits 23 . Syzygium samarangense is cultivated commercially in Southeast Asia, where it is marketed as the wax apple, java apple, rose apple, or samarang rose apple. Apart from being used as cooking ingredients or cultivated for fruits, Syzygium species with dense and bushy crowns, such as S. antisepticum , S. australe , S. luehmannii , S. myrtifolium and S. zeylanicum , are used in the horticulture industry in Australia, Indonesia, Malaysia and Singapore for hedges, natural fences, natural sound barriers and privacy screens 24 . Syzygium species are generally medium-sized to large, characteristically sub-canopy trees that are sometimes emergent, while some also form shrubs, small forest understorey treelets, swamp and mangrove forest trees, and rheophytic vegetation 25 . As is true of many tropical trees, Syzygium flowers are visited by a large diversity of insects and vertebrates, and their fruits are typically eaten by a variety of flying and arboreal vertebrates and even terrestrial bird, mammal and reptile browsers 25 . Syzygium species also occur as dominant mid-level canopy trees, affecting the ecosystems of plants, animals, and fungi in lower forest layers 25 . Many species co-occur; for example, there exist ca. 50 taxa on a single 52-ha. ecological plot in the Lambir Hills National Park (Sarawak, East Malaysia, Borneo 26 ), where they display fine-scale differentiation in habitat occupancy and stature 14 . The genus is notorious as one of the most difficult to identify due to the paucity of clear, diagnostic morphological characters for distinguishing species; 25 , 27 , 28 morphological variation in the genus can appear as continua of traits rather than collections of discrete units. Given the immense number of species assigned to Syzygium , it contributes disproportionately to the diversity of Southeast Asian tropical forests. Therefore, understanding diversification and its underlying drivers within Syzygium may help explain large-scale patterns of diversity. Thus far, however, phylogenetic studies of Syzygium have involved only a few PCR-amplified plastid and nuclear marker genes 15 , 16 . An infrageneric classification proposed in 2010 was based on three plastid loci 17 , and although it resolved some major clades, interrelationships within the bulk of the genus, species of Syzygium subg. Syzygium , were left largely unresolved. Here, we sequence whole genomes to vastly increase the available data in an attempt to more fully resolve phylogenetic relationships among Syzygium species. We use Oxford Nanopore Technology (ONT) long-read sequencing 29 to assemble and annotate a chromosome-scale reference genome for the sea apple, Syzygium grande 23 . This species was selected as a representative because it is a well-known member of the most diverse, broadly distributed group within Syzygium , and one of the most commonly cultivated shade and firebreak trees planted along streets in Singapore and Peninsular Malaysia. 
We examine the palaeopolyploid history of Syzygium to assess whether whole genome duplications may have played a role in speciation through sub- or neo-functionalisation events, eventually fixed by natural selection or drift processes during species transitions 7. We use whole-genome sequencing of 292 Syzygium individuals and outgroups to address evolutionary relationships among the species. Both Illumina short-read assemblies, as well as mapping of the read data to the S. grande genome, are brought to bear for phylogenomic investigations of possible rapid diversification in the group. Results and discussion Assembly and annotation of the reference and resequenced Syzygium genomes A chromosome-level assembly of Syzygium grande (Fig. 1a) was generated using wtdbg2 30 from more than 60 Gb of ONT long reads and subsequently polished with 30 Gb of Illumina short reads. Scaffolding with Dovetail HiC technology 31 then organised the 405,179,882 bp assembly into 11 pseudo-chromosomes (Fig. 1b), with 174 scaffolds in total and a scaffold N50 of 39,560,356 bp. Fig. 1: Assembly and structural evolution of the Syzygium grande reference genome. a S. grande inflorescence, flowers and fruits; the latter evoke the common name “sea apple”; b HiC contact map for the scaffolded genome, showing 11 assembled chromosomes; c Phylogeny of major lineages of Myrtales, following Maurin et al. 44. Genera of Myrtaceae used in genome structural and phylogenetic analyses are also depicted. Punica (Lythraceae) was also examined for structural evolution. Open circles represent the multiple, independent polyploidy events predicted by the 1KP study 42; our results here suggest instead a single Pan-Myrtales whole genome duplication (blue rectangle) which followed the gamma hexaploidy (orange triangle) present in all core eudicots. d Synonymous substitution rate density plots for internal polyploid paralogs within Syzygium, Eucalyptus, Punica, Populus and Vitis. Modal peaks in the three Myrtales species suggest a single underlying polyploidy event. Ks asymmetries were calibrated using the gamma event present in each species. Both histograms and smoothed curves are shown. e Fractionation bias mappings of Myrtales chromosomal scaffolds, 2 each (different colours), onto Vitis vinifera chromosome 2 show similar patterns for all three Myrtales species (excluding cases of chromosomal rearrangements among the three, which are discernible as different scaffold colour switchings compared to the Vitis chromosome). X-axis shows the percent retention of fractionated gene pairs following polyploidisation; Y-axis shows the position of gene pairs along the Vitis chromosome. Photograph credit: WHL (a). Source data are provided as a Source Data file. Full size image Following the assembly, repeat masking (Supplementary Table 1) and gene prediction were carried out using evidence from Syzygium grande RNA-seq data and protein sequences from Arabidopsis thaliana and Populus trichocarpa. Altogether 39,903 gene models were predicted, with 86.6% of Benchmarking Universal Single-Copy Ortholog (BUSCO 3.0.2 32) genes being present. In addition to the reference assembly, 30 Gb of Illumina HiSeqX sequencing data was generated for each of 289 Syzygium individuals and three outgroup taxa (two Metrosideros and one Eugenia species, both Myrtaceae; Supplementary Data 1) and assembled de novo using the MaSuRCA assembler 33 (Supplementary Data 2 and Supplementary Fig. 1).
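The N50 statistics quoted here and throughout summarise assembly contiguity: N50 is the length of the contig (or scaffold) at which the sorted, cumulative length first reaches half of the total assembly size. A minimal R sketch follows (not part of the published pipeline; the example lengths are hypothetical):

  n50 <- function(lengths) {
    # sort contig lengths from longest to shortest, then report the length
    # at which the running total first reaches half of the assembly size
    sorted <- sort(lengths, decreasing = TRUE)
    sorted[which(cumsum(sorted) >= sum(sorted) / 2)[1]]
  }
  n50(c(30e6, 20e6, 10e6, 5e6))  # returns 2e7 (20 Mb) for these toy contigs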
The average single-copy completeness across this set of genomes was 89.23% (Supplementary Data 2), indicating that the draft assemblies were of acceptable quality. Genome structure of Syzygium reveals that a single polyploidy event underlies all Myrtales We used our chromosome-level assembly of Syzygium grande to re-evaluate the polyploid history of its family, Myrtaceae, and order, Myrtales. Myrtales are a diverse rosid lineage comprising approximately 13,000 species across 380 genera and 9 families 34. All rosids share the gamma triplication event that occurred in the core eudicot common ancestor 35,36,37. Sequencing of the Eucalyptus grandis (Myrtaceae) genome revealed an additional whole genome duplication (WGD) in its lineage 38, and later analyses of the Punica granatum (pomegranate) genome in the related family Lythraceae suggested that this polyploidy event may have been shared 39,40, occurring near the base of the order. Further work on the Psidium guajava (guava) genome came to a similar conclusion 41. However, the broad, transcriptome-based 1KP project suggested that the Lythraceae and Myrtaceae WGDs might be independent events. Indeed, seven independent, lineage-specific WGDs were predicted by 1KP (their Supplementary Fig. 8) to characterise a larger lineage containing Larrea, Tribulus (both Zygophyllaceae), Combretaceae, Onagraceae, Melastomataceae, Lythraceae and Myrtaceae 42. Syntenic alignments of the Syzygium grande genome against itself revealed at least one whole genome multiplication event since the gamma palaeohexaploidy (Supplementary Fig. 2), and alignment against the Vitis vinifera genome confirmed the single lineage-specific WGD (Supplementary Fig. 3). A more detailed study against both Eucalyptus grandis and Punica granatum revealed 1:1 syntenic relationships (Supplementary Figs. 4 and 5), strongly suggesting a shared polyploid history. We investigated this further by extracting internally syntenic gene pairs in Eucalyptus grandis, Punica granatum, Vitis vinifera and Populus trichocarpa. When rate-corrected against the gamma hexaploidy event 43, an ancient pan-Myrtales WGD was supported, approaching gamma in age (Fig. 1c and d). Furthermore, subgenome-wise syntenic depths and fractionation patterns were extremely similar in Syzygium grande, Eucalyptus grandis, and Punica granatum, supporting the hypothesis that a single polyploidy event underlies all Myrtales (Fig. 1e and Supplementary Fig. 6). In addition, alignment of Populus trichocarpa against the Syzygium grande chromosomes showed the expected 2:1 syntenic pattern indicative of an independent Salicaceae-specific WGD in the rosid order Malpighiales (Supplementary Fig. 7). Based on phylogenetic relationships recently solidified for Myrtales families 44, we conclude that earlier genome-based determinations of shared polyploid status within Myrtales are correct in indicating one basal WGD, and that the transcriptome-based 1KP study erroneously inflated the number of WGDs within the clade (Fig. 1c). Since some polyploid events such as the gamma triplication 35,36,37 and the pan-angiosperm WGD 45 co-occur with major flowering plant radiations 7 (here, the core eudicots and all angiosperms, respectively), a single polyploid event shared by all Myrtales might hold implications for early diversification in the order.
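To make the Ks-based reasoning above concrete, the sketch below (a simplified illustration of the calibration idea, not the exact Wang et al. procedure; function and variable names are ours) finds the modal peak of a Ks distribution by kernel density estimation and rescales one species' Ks axis so that its gamma peak lines up with the gamma peak of a reference lineage:

  # modal peak of a Ks distribution, restricted to a plausible window
  ks_mode <- function(ks, from = 0.05, to = 3) {
    d <- density(ks[ks > from & ks < to], na.rm = TRUE)
    d$x[which.max(d$y)]
  }
  # rescale a species' Ks values so its gamma peak matches a reference peak;
  # gamma_sp and gamma_ref would be ks_mode() estimates of the gamma-derived
  # paralog peaks in each lineage (hypothetical inputs)
  rate_correct <- function(ks, gamma_sp, gamma_ref) ks * (gamma_ref / gamma_sp)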
However, it is well known that some large angiosperm diversifications, such as Gentianales (an even larger lineage than Myrtales at >20,000 species across 1121 genera and 5 families 34), are not marked by ancestral WGDs, leaving polyploidy as a causal mechanism for diversification rather inconclusive, or at the very least an incomplete explanation. Polyploidy within Syzygium similarly appears to play little role in its infrageneric diversification. BUSCO duplicate (D) scores suggest that the majority of species have remained at the same ploidy level following the Pan-Myrtales WGD event (Supplementary Data 2). At least one clear case of neopolyploidy is observable in S. cumini, which has the highest D score in our sample and a known haploid chromosome number of n = 22 46, double the number of our S. grande pseudochromosomes. Single nucleotide polymorphisms and single-copy nuclear genes yield well resolved major branchings within Syzygium Species-level interrelationships within Syzygium have not yet been investigated in depth. To obtain a whole genome-level phylogeny we used the Syzygium grande genome assembly as a reference for mapping variants from three outgroup taxa and 289 independently sequenced Syzygium accessions representing at least 182 distinct species, 49 repeated species samples, and 58 additional as-yet-unidentified taxa. SNP calling yielded 1,867,173 variants across all 292 samples, from which we determined genome-wide phylogenetic relationships using RAxML 47 (Fig. 2). Since the SNPs could be identified only from the relatively conserved parts of the genome, we also collected predicted universal single-copy genes from BUSCO analyses (source data are provided at Dryad, ) to estimate a species tree using ASTRAL 48, a coalescence-based approach that incorporates individual gene trees into species tree estimation. Fig. 2: Phylogenetic tree based on single nucleotide polymorphisms among all 292 resequenced Myrtaceae accessions. Black circles represent the at least 12 independent invasions of Sunda from Sahul. The green circle represents migration from Sunda to the Indian subcontinent, and the purple circle denotes further migration from there to Africa. Blue versus red circles at the leaves of the tree represent Syzygium accessions from Bukit Timah Nature Reserve and Danum Valley Conservation Area, respectively. Background colours represent the recognised subgenera (clockwise from the root, excluding the outgroup taxa): Syzygium subg. Sequestratum (green), S. subg. Perikion (yellow), S. subg. Acmena (red), the S. rugosum clade (purple) and S. subg. Syzygium (cyan). Full size image Phylogenetic analysis using genome-wide SNPs (source data are provided at Dryad, ) resulted in a phylogeny (Fig. 2 and Supplementary Fig. 8) that was robust and well-resolved when rooted with the outgroup taxa, namely Metrosideros excelsa, M. nervulosa (tribe Metrosidereae) and Eugenia reinwardtiana (tribe Myrteae). Five major clades resolved in the phylogeny were all well-supported, with most branches receiving 100% bootstrap support at the nodes, indicating strong internal consistency within the dataset. These five major clades represent the previously characterised Syzygium subg. Syzygium, S. subg. Acmena, S. subg. Perikion, S. subg. Sequestratum and a subgenus yet to be named that includes S. cf. attenuatum, S. rugosum and an unidentified species from Sulawesi labelled here as “SULAWESI2” (henceforth, we refer to this lineage as the S. rugosum clade).
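For orientation, a hedged R sketch of the gene-tree pooling step behind the ASTRAL analysis (described in full in the Methods): per-locus RAxML trees, here assumed to sit in a gene_trees/ directory with hypothetical file names, are combined into the single multi-tree newick file that ASTRAL takes as input.

  library(ape)
  # collect one newick gene tree per BUSCO locus (file layout is hypothetical)
  files <- list.files("gene_trees", pattern = "\\.nwk$", full.names = TRUE)
  trees <- do.call(c, lapply(files, read.tree))   # combine into a multiPhylo object
  write.tree(trees, file = "all_gene_trees.nwk")  # one newick string per line
  # ASTRAL is then run outside R, e.g. (jar name may differ by install):
  # java -jar astral.5.15.1.jar -i all_gene_trees.nwk -o species_tree.nwk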
It is noteworthy that internal branch lengths are heterogeneous, indicating that the clades differ in divergence time, diversification rate, population size, or all of these factors 49. The largest clade in the phylogeny, having both the most recognised species and the most representative individuals in our current sample, is the Syzygium subg. Syzygium clade. Relationships within this clade are largely well resolved and supported, as are interrelationships among the five subgenera. Despite strong support, it is important to note that such an analysis generates a phylogeny that represents a genome-wide average, rather than taking into account the independent inheritance of different loci across the genome that is characteristic of incomplete lineage sorting or adaptive processes. To obtain an independent view of the Syzygium species tree, we used our BUSCO single-copy gene sets (source data are provided at Dryad, ) to compare the gene trees derived from independent nuclear loci in a coalescence species tree approach. We analysed two different BUSCO gene sets that differed in their completeness among accessions: the set of 229 genes containing representatives from all sequenced individuals, and a second set of 1227 genes present in ∼95% of accessions. The phylogenies obtained from both single-copy genes and genome-wide SNPs (Supplementary Fig. 8) concordantly displayed the five major, well-supported clades representing the five subgenera of Syzygium, including their relative branching order from the outgroup root, albeit with some minor disagreement of taxon placement within clades. These corroborating results inferred from two different approaches indicate strong and consistent phylogenetic signals within our genomes. Furthermore, Syzygium interrelationships based on plastid mappings (source data are provided at Dryad, ), derived by mapping the Illumina sequence reads for each accession onto the Syzygium grande plastid genome, yielded partly incongruent results that may be traceable to ancient hybridisation and plastome capture, or to incomplete lineage sorting (ILS) (Supplementary Fig. 9). Diversification bursts characterise many terminal branchings within Syzygium phylogeny Despite a strong overall signal supporting a bifurcating evolutionary history, the many extremely short coalescent branch lengths generated by the ASTRAL approach suggest that ILS 49 may have been a confounding biological factor at various points during the Syzygium radiation. These branch lengths, which are interpretable in terms of time in generations (g) divided by effective population size (Ne) 49, provide evidence that many Syzygium clades either radiated extremely rapidly, or that their ancestral population sizes were comparatively large, or both. Such g and Ne conditions are known to promote gene-tree/species-tree discordance through ILS 49. We sought to investigate signatures of ILS in the data further using NeighborNet, a distance-based generalisation of neighbour-joining that produces phylogenetic networks 50. The character incongruence that is manifested as extra edges in these networks beyond a perfectly bifurcating tree has been interpreted both in terms of interspecies admixture and/or incomplete lineage sorting phenomena 51,52. NeighborNet analysis of our genome-wide SNP data for Syzygium subg. Syzygium including a single outgroup species, S.
rugosum, showed that while many of the evolutionary relationships among taxa were strongly tree-like, at least one major clade (which we informally term here the “Syzygium grande group”) likely involved a burst of lineage splits (Fig. 3a), as evidenced by the predominantly noncoding (i.e., neutrally evolving) SNPs, which illustrate a highly webbed, fan-like network of splits at its base. While the stem lineage of the Syzygium grande group was strongly supported in the BUSCO and SNP trees, it is noteworthy that in the SNP analysis many parallel edges nonetheless appear along it, suggesting internal incongruence among SNPs, possibly reflecting differential inheritance with ILS (see Suh et al. 52). A further, larger lineage including the Syzygium grande group and its outgroups was similarly well supported in the BUSCO and SNP trees; however, its own stem lineage contained even more parallel edges and potentially even more severe ILS (Fig. 3a). Fig. 3: Principal component analysis and phylogenetic reconstructions of single nucleotide polymorphism variation within Syzygium. a A NeighborNet phylogenetic network shows considerable character discordance among genome-wide SNPs that may be indicative of incomplete lineage sorting. This discordance is particularly noteworthy at the highly webbed base of the Syzygium grande group (close-up view in square inset; see also the labelled network in Supplementary Fig. 89). b PCA of principal components 1 and 2 of Syzygium subg. Syzygium individuals. Clinal patterns are readily observed; the Syzygium grande group (centred around 0.10 on PC2) comprises a medium-blue paraphyletic grade subtending a pink terminal lineage, as shown in (c), the RAxML SNP tree, which is colour-coded following the PCA. Shading of two groups on (a) matches the colour coding on the tree as well as the colours and symbols on the PCA plot. Small matching symbols in shaded areas are shown for clarity. The edge with two horizontal lines at its tip represents the outgroup taxon, Syzygium rugosum, clipped for length. Source data are provided as a Source Data file. Full size image Incomplete lineage sorting rather than hybridisation may confound phylogenetic inferences Next, we used the same SNP data with the ADMIXTURE software 53 to search for genomic partitioning among the clades and accessions that might be attributable to admixture (introgression) or to differential blockwise inheritance through extremely narrow species splits (ILS). ADMIXTURE fits K ancestral population clusters to the data; it is not decisive regarding the mechanisms underlying any K-cluster mixtures within the individuals analysed. The approach was developed for population-level data, wherein mixed K-clusters are most likely attributable to admixture rather than to ILS through lineage splits (e.g., speciation events). However, results at the interspecific level are often interpreted uncritically as actually indicative of cross-lineage admixture 54 (see ref. 55). Indeed, the K components from ADMIXTURE simply represent subsets of inherited SNP variation that could reflect any underlying mixtures, of which ILS can be one mechanistic basis (Supplementary Figs. 10 and 11). We, therefore, propose ILS to be a likely underlying causal factor for some of the K mixtures, given both the short coalescence branch lengths on the ASTRAL species tree and the reticulation of the NeighborNet.
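The g/Ne intuition invoked here can be made concrete with the standard multispecies-coalescent expectation (not computed in the paper itself; the values below are hypothetical): for a rooted species triple whose internal branch spans T = g/(2Ne) coalescent units, the probability that a sampled gene tree is discordant with the species tree is (2/3)*exp(-T), so short branches and large ancestral populations both inflate ILS.

  # probability that a gene tree disagrees with the species tree for a
  # rooted triple, given an internal branch of t_gen generations and an
  # ancestral effective population size Ne (Pamilo & Nei 1988)
  p_discordant <- function(t_gen, Ne) {
    T <- t_gen / (2 * Ne)   # branch length in coalescent units
    (2 / 3) * exp(-T)
  }
  p_discordant(1e4, 1e5)  # ~0.63: rampant discordance on a very short branch
  p_discordant(1e6, 1e5)  # ~0.004: discordance essentially vanishes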
Our ADMIXTURE analysis cross-validation scores supported K = 14 as the best representation of ancestral population structure (Supplementary Fig. 12). At this K, the Syzygium grande group is almost entirely assigned to one K in the ADMIXTURE results (orange, Supplementary Fig. 11). However, at other K values (e.g., K = 10, 11, 12; Supplementary Figs. 10 and 11), K mixtures within this group are apparent. It is worth noting that the fold level used for cross-validation can affect the preferred number of components, since an optimum requires that each component has at least one representative in the test set. The outgroups to the Syzygium grande group also contain the orange-coloured cluster at K = 14, but they additionally include mixtures with other ancestral populations (Supplementary Fig. 11). These K-cluster mixtures appear to be consistent with the multiple edges underlying this larger lineage in the NeighborNet analysis. In other words, they are likely indicative of differential inheritance of genomic regions and their SNPs through ILS. To rule out admixture as the source of these results, we formally tested for gene flow within the Syzygium grande group using Patterson’s f3 statistic 56, which tests for patterns of allele sharing (source data are provided at Dryad, ). We calculated all three-way taxon comparisons of source1, source2, and target taxa to evaluate signatures of admixture. These results demonstrated no evidence for admixture, but did reveal instances where significant negative Z-scores across all possible source combinations reflected close relationships through identity by descent (described in detail by Lan et al. 57) (Supplementary Figs. 13–87). With ILS the more likely explanation for these results, we used local principal component analysis (PCA) 58 to examine whether patterns of SNP-based relatedness differed instead by location along chromosomes. Clear distinctions in the sample projections on PCA components along a scaffold would indicate that different genomic blocks have different evolutionary histories, of which introgression, ILS, local selection, or even drift are candidate source mechanisms. Local PCA takes window-wise PCA projections of SNP variation and arrays differences among them on a multidimensional scaling (MDS) plot; three distinct “corners” are then selected from the MDS plot, and the corner-wise variation is pooled for final analyses 58. We analysed both whole-chromosome variation and repeat-masked data, the latter to ensure that the distinct patterns obtained were not solely related to ambiguous mappings due to different transposable element families. The Syzygium grande group characteristically appears as a tight cluster across different corners on the 11 chromosomes (Supplementary Figs. 88–100). However, in some of these collections of windows, the group is unresolved from its closest outgroups and from the rest of Syzygium subg. Syzygium; in other corners, these outgroups are poorly distinguished from the remainder of the subgenus, while the S. grande group stays distinct. We infer that these results support the hypothesis of underlying ILS—i.e., regional block-wise genomic distinction vs. indistinction of these taxa, as reflected by the many-paralleled edges of their corresponding stem lineages in the NeighborNet result (Fig. 3a). Principal component analysis reveals clinal patterns reflective of isolation by distance We further studied the SNP data genome-wide using standard PCA 59,60.
Plots of principal components focussing on Syzygium subg. Syzygium illustrated clear clines (Fig. 3b ) that mostly correspond to sublineages on the BUSCO and SNP trees (Fig. 3c and Supplementary Figs. 101 – 121 ). Several filtrations of data (Supplementary Table 2 ), including analyses of homozygous sites only (as well as checks for coverage that suggested no apparent biases), yielded similar results and therefore increased confidence that the clinal patterns were not artefactual (Supplementary Figs. 107 – 121 ). A simple explanation for these linear gradations is that allelic variation in Syzygium became fixed in consecutive speciation events, along an ongoing cladogenetic process. The PCA analysis highlights that different lineages within the S. grande group partly overlap (Fig. 3b and Supplementary Figs. 122 – 127 ), consistent with short internal coalescence branch lengths on the BUSCO tree. In other words, the clinal patterns may reflect a neutral process akin to isolation by distance 59 , 60 , 61 , 62 (IBD; see ref. 63 ), for example, comprising serial founder events in an island-hopping model of geographic speciation 3 (but see ref. 64 ). Similar clinal variation among Big Island (Island of Hawai’i) accessions of a closely related Myrtaceae species, Metrosideros polymorpha (see Fig. 1C of ref. 10 ), might also reflect simple IBD processes in its extremely young and rapidly expanding/dissecting volcanic environment. Allopatric speciation does not necessarily require adaptive differences, only the null model of reproductive isolation and genetic drift 65 . The possibility of entirely neutral phenotypic clines forming in a model of progressive cladogenesis, such as we hypothesise here for diagnosable Syzygium species, may attest to IBD, and reflect environmental gradients that accompany spatial population expansion, or even involve admixture between previously isolated populations or clades 63 . However, many Syzygium species are sympatrically distributed, which, if the splits observed were time-coincident or nearly so, could suggest that ecological speciation 66 , 67 , 68 (and therefore adaptive differences, such as flowering allochrony and other gene flow barriers) could also be operative. Even weak selection via such local adaptation can significantly speed up an entirely drift-based geographic speciation process 65 . For example, the phylogenetic results presented here show that Syzygium species sympatric in the Bukit Timah and Danum Valley forest plots are broadly distributed across the phylogenetic tree, but there are also clear clusters of species from these plots within some subclades (Fig. 2 , Supplementary Note 1 and Supplementary Table 2 ). However, there is reason to suspect that sampling biases from within larger species ranges may influence this clustering, since Bukit Timah- or Danum-enriched clades are not entirely autochthonous to these plots, but also distributed elsewhere. For example, we sampled a S. barringtonioides specimen from Brunei that groups within an otherwise Danum cluster (both nonetheless being Bornean), and the S. chloranthum - S. cerasiforme clade that was largely sampled from Bukit Timah also contains S. ampullarium , which was collected in Borneo. As such, considerable reproductive isolation and lineage diversification likely occurred prior to many migrations into sympatric niches. The clearest inference from Fig. 2 is therefore that the Bukit Timah and Danum Syzygium floras are assemblages of phylogenetically- and time-diverse lineages. 
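As context for the clinal patterns just described: the study's projections were produced with smartpca (see Methods), but conceptually the same operation can be sketched in R on a samples-by-SNPs genotype dosage matrix (0/1/2). The data below are simulated, so the snippet illustrates the projection only, not the Syzygium result itself.

  set.seed(1)
  geno <- matrix(rbinom(100 * 5000, size = 2, prob = 0.3), nrow = 100)  # 100 samples x 5000 SNPs
  geno_std <- scale(geno)                  # centre and scale each SNP column
  geno_std[is.nan(geno_std)] <- 0          # guard against monomorphic SNPs (zero variance)
  pca <- prcomp(geno_std, center = FALSE, scale. = FALSE)
  plot(pca$x[, 1], pca$x[, 2], xlab = "PC1", ylab = "PC2")  # clines would appear as gradients here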
Demographic analysis of the Syzygium grande group also implies rapid diversification Pairwise Sequentially Markovian Coalescent (PSMC) analysis uses the two haploid genome copies present in the read data of a given diploid individual to estimate past effective population sizes over time. We ran PSMC demographic curves for most individuals in the closely interrelated Syzygium grande group using Illumina reads mapped against each taxon’s own MaSuRCA genome assembly. We scaled the time and Ne axes of the demographic reconstructions uniformly by employing an approximate Syzygium generation time of 5 years (roughly the median of the data in Supplementary Table 3) and a mutation rate of 1 × 10−8 per site per generation, in line with previous work on woody plants 69. Comparing demographic curves together, all individuals appear to follow similar trajectories wherein the genetic variation in all taxa coalesces between 9 and 20 million years ago (Mya), followed by a peak in Ne at 3–4 Mya, various Ne fluctuations and crashes at intermediate times from 1 to 0.1 Mya, and finally a strong Ne collapse in the most recent past (Fig. 4). The impression that the demographies are largely alignable in gross aspect while differing in ancient coalescence times may reflect their joint membership in a stem lineage, as well as real generation time differences among the taxa or differences in past heterozygosity levels 70. Indeed, maximum time at coalescence strongly correlates with the overall heterozygosities of individuals (from SNP calls) as well as with numbers of segregating sites (from reads mapping to individual genomes only; Supplementary Fig. 128). Fig. 4: Genomic palaeodemography of Syzygium accessions from the S. grande group. Pairwise Sequentially Markovian Coalescent analyses, coloured by groups/clades in Fig. 3. Source data are provided as a Source Data file. Full size image Moreover, PSMC curves for many taxa in the pink terminal lineage in Fig. 3c converge together at lower Ne in ancient to intermediate times than do most individuals in the blue paraphyletic group that subtends it, which mostly follow higher Ne trajectories. These distinctions, visible as early as 10 Mya, suggest that the pink lineage may have begun splitting from within the blue group as early as then, while the Ne fluctuations closer to the present may represent the period of rapid cladogenesis reflected in the “fan-like” reticulate base of the S. grande group that is visible in the NeighborNet result. Phylogenetic resolution of rapid splits such as these can be particularly confounded by ILS, which by coalescent theory may itself be exacerbated by any Ne increases. The final Ne crashes closest to the present may in turn mark the individuation of the lineages (e.g., via founder effect 71) visible past the stage of the NeighborNet fan (i.e., the tips extending from its basal web). Syzygium radiated multiple times from Sahul into Sunda and elsewhere In a review on the origins and assembly of Malesian rainforests 72, Syzygium was highlighted as a key genus for understanding the floristic evolution of the region. Formal biogeographic analyses using the BioGeoBEARS 73 and RASP (Reconstruct Ancestral State in Phylogenies) 74 software each demonstrate, despite limited taxon sampling of outgroups, that the genus Syzygium is of Sahul origin, i.e., centred on Australia and New Guinea (Supplementary Figs. 129 and 130; source data are provided at Dryad, ).
This finding is consistent with previous work on Syzygium and Myrtaceae as a whole, which similarly finds Sahul as the ancestral area 75. We also generated a dated ultrametric SNP tree to provide split and crown group times for subclades and species diversifications (Supplementary Fig. 131; source data are provided at Dryad, ). We used as a calibration point the minimum and maximum ages of a fossil assignable to Syzygium subg. Acmena (20.9–22.1 Mya) 76. The crown group of the entire genus Syzygium is dated at 51.2 Mya, and the crown groups of subgenera Sequestratum, Perikion, Acmena, the S. rugosum clade, and Syzygium date to 34.2, 24.1, 15.8, 7.0 and 9.4 Mya, respectively (Supplementary Fig. 131). As such, Syzygium itself dates to before the Sunda-Sahul convergence, which occurred ~25 Mya 77, with most subgenera diversifying after the convergence. Repeated invasions both westward and northward from Sahul that correspond with species diversifications are clearly apparent. For example, parallel migrations into Sunda occurred at least 12 times (Fig. 2), sometimes corresponding with large radiations, but only within Syzygium subg. Syzygium (Supplementary Figs. 129 and 130). The earliest migration to Sunda was by 17.1 Mya, the crown group age for the Sunda half of the first split in Syzygium subg. Sequestratum (Supplementary Fig. 131). The S. rugosum clade migrated to Sunda by 7.0 Mya, and Syzygium subg. Perikion had entered Sunda (Peninsular Malaysia) and later migrated to Sri Lanka by 3.0 Mya. Within Syzygium subg. Acmena, Sunda had been accessed by 390 Kya. Syzygium subg. Syzygium is resolved as having a Sahul origin, with a crown group age of 9.4 Mya (corresponding to the young end of PSMC curve coalescences for the S. grande group; see above and Fig. 4). Following Hall’s 78 land/sea level reconstruction at 10 Mya, entry of subgenus Syzygium into Sunda, potentially via the Sula Spur, may have involved considerable island hopping from Sahul. As many as seven invasions of Sunda occurred, at least three of which (according to our sampling) resulted in hyperdiverse subclades. The earliest Sunda migrations within the type subgenus involved the hyperdiverse Syzygium pustulatum group, with a minimum crown group age of 2.8 Mya, and the large S. creaghii group, which has a similar minimum crown age of 2.5 Mya (Supplementary Fig. 131). These lineages entered Sunda following the New Guinea uplift, which began about 5 Mya 79, possibly correlating with population expansions seen around this time in PSMC curves for the extremely diverse Syzygium grande group. The S. grande lineage migrated much later from Sahul into Sunda, by 165 Kya, overlapping with the PSMC Ne fluctuations seen at intermediate times (Fig. 4). It subsequently radiated broadly and very recently into the North Pacific (by 14.6 Kya), the Indian subcontinent (by 21.9 Kya), and from there on to Africa (by 6.5 Kya). These recent dates correspond well with the individuation of clades within the S. grande group inferred from NeighborNet (Fig. 3a), and discussed above in reference to population crashes in the PSMC analyses (Fig. 4). The Syzygium pustulatum and S. creaghii groups are also marked by fan-like radiations in the NeighborNet analysis (see the labelled network in Supplementary Fig. 89), but, unlike the S. grande group, they do not show considerable character incongruence suggestive of ILS at their bases. The Syzygium pustulatum group and the smaller, late-migrating S.
jambos group (the latter having entered Sunda by 123 kya; Supplementary Fig. 131 ) also represent rapid diversifications into Sunda with significant tree-like structure at their stem-lineage bases in the NeighborNet analysis. The last 1 Mya in Southeast Asian biogeography was marked by cyclical sea level changes that repeatedly divided and rejoined vegetation 80 , 81 , and the minimum invasion dates for the Syzygium grande and S. jambos groups correspond with periods when sea levels were lower than today 82 and therefore lowland rainforest vegetation more continuous. To summarise, parallel dispersals from Sahul into Sunda and beyond sometimes correlate with what appear to be rapid radiations that at least in one case, the Syzygium grande group, appears to have been marked by significant ILS. Morphological transitions may accompany some Syzygium species diversifications We thereafter sought, using Mesquite 83 parsimony optimisations, to provide qualitative first approximations of morphological trait evolution and accompanying ecological variables that might correspond with these East-to-West migrations (source data are provided at Dryad, ). We employed the BUSCO species tree for this exercise to minimise any topological biases that might arise from ILS. An interesting trait is the presence of a pseudocalyptrate (or “calyptrate” in Syzygium barringtonioides and S. perspicuinervium ) versus free corolla (Fig. 5a–c , Supplementary Note 2 and Supplementary Figs. 132 and 133 ). A pseudocalyptrate corolla, which is relatively common among genera of Myrtaceae, describes a perianth that is variously fused into a cap-like structure that may protect developing stamens from predation, degradation by desiccation, or fungal rot 84 . As determined previously, based on PCR marker phylogenies 15 , 16 and ontogenetic studies 84 , pseudocalyptrate corollas evolved convergently in several Syzygium groups. One remarkable transition from free to pseudocalyptrate corollas appears at the base of the Syzygium grande group; indeed, it was apparently fixed first in its outgroup taxa (Supplementary Fig. 133 ). Several evolutionary reversals thereafter to free corolla lobes occurred, including one reversal that marks a large sublineage of the Syzygium grande group including 75 species as well as S. grande itself. The Syzygium creaghii and S. jambos groups have free corollas, but the S. pustulatum group may have been primitively pseudocalyptrate. Regardless, this trait seems highly labile within Syzygium , and other than the possible exception of the S. grande group’s ancestral state, there is no clear connection with Sahul-to-Sunda diversification. Interestingly, the most-parsimonious resolution of green fruits as ancestral to this clade and some of its outgroup species accompanies diversification of the Syzygium grande group into Sunda (Fig. 5d and Supplementary Fig. 134 ). Later most-parsimonious state transitions from green to purplish-black fruit are also noteworthy in the group. We speculate that this combination of traits—pre-anthesis protection by pseudocalyptrae and bearing of green to purplish-black fruits that attract far-flying birds or bats 85 , 86 , 87 —may have together pre-adapted this group to broad migration. Fig. 5: Reproductive trait diversity in the genus Syzygium , as examined to reconstruct ancestral states using Mesquite. a Free petals ( Syzygium pendens ); b Calyptrate calyx ( Syzygium paradoxum ); c Pseudocalyptrate corolla ( Syzygium adelphicum ); d Fruits maturing green ( Syzygium cf. 
dyerianum); e Pendulous inflorescence or infructescence (Syzygium boonjee). Photograph credits: YWL (a)–(e). Full size image One other trait of note that marks large diversifications is the presence of pendulous inflorescences, which characterises the Syzygium creaghii group and largely marks the S. longipes group (Fig. 5e, Supplementary Note 2 and Supplementary Fig. 135). This trait is correlated with large fruits, often fleshy, which are known to reflect a specialised fruit display and dispersal strategy called flagellichory that increases fruit display for echolocating bats 88, other flying/arboreal vertebrates 18 or large vertebrate browsers (e.g., cassowaries 89). Implications from Syzygium for understanding species radiations Here, we have explored species diversification patterns and their drivers in the world’s most species-rich tree genus, Syzygium. We generated a high-quality reference genome for Syzygium grande, the sea apple, and shotgun sequenced more than 15% of the species of this large genus to study their phylogenomic relationships. Through this extensive sampling of Syzygium diversity, we were able to solidify major clade relationships within the genus, currently recognised as subgenera, and, within Syzygium subg. Syzygium, to provide unprecedented clarity on subclades that may become sectional units in the future. We discovered that many Syzygium species, particularly within Syzygium subg. Syzygium, likely branched from one another in rapid succession, yielding radiations of morphological and ecological diversity. One example was a group of species containing Syzygium grande itself that was marked by extremely short coalescence time intervals in our BUSCO species tree; this result was matched by highly networked edges at the base of the group in our NeighborNet analysis, reflecting underlying incongruence in the data. Since none of the f3 tests showed admixture, we interpret such webbed stem lineages in the NeighborNet network to reflect incomplete lineage sorting during rapid species radiation. PCA of our samples illustrated clines of fixed allelic variation arrayed by Syzygium sublineage, possibly reflecting a simple process of neutral geographic speciation that predominated during most of the group’s cladogenesis. Plotting occurrences of species native to Singapore’s Bukit Timah Nature Reserve and East Malaysia’s Danum Valley Conservation Area illustrated that large-scale lineage diversification occurred before sympatric occupation of these habitats to generate diverse, closely associated Syzygium floras. As such, the immense radiation of the world’s largest tree genus may serve as a model for further detailed research, for example at the population level—integrating transcriptomic, proteomic, and metabolomic data—to explore actual mechanisms underlying morphological and ecological specialisation during a diversification that rivals any other under current study. Methods Oxford Nanopore sequencing of Syzygium grande Young leaf tissue and twigs of Syzygium grande from a cultivated individual (Gleneagles Hospital, along Napier Road, Singapore; Low s.n. [SING]) were gathered, cleaned, flash frozen in liquid nitrogen, and then stored at −80 °C prior to extraction. About 10 g of flash frozen tissue was used for high-molecular-weight (HMW) genomic DNA isolation.
The first step followed the BioNano NIBuffer nuclei isolation protocol, in which frozen leaf tissue was homogenised in liquid nitrogen, followed by a nuclei lysis step using IBTB buffer with spermine and spermidine added and filtered just before use. IBTB buffer consists of Isolation Buffer (IB; 15 mM Tris, 10 mM EDTA, 130 mM KCl, 20 mM NaCl, 8% (m/V) PVP-10, pH 9.4) with 0.1% Triton X-100 and 7.5% (V/V) β-mercaptoethanol (BME) mixed in and chilled on ice. The mixture of homogenised leaf tissue and IBTB buffer was strained to remove undissolved plant tissue. 1% Triton X-100 was added to lyse the nuclei before centrifugation at 2000×g for 10 min to pellet the nuclei. Once the nuclei pellet was obtained, we proceeded with cetyltrimethylammonium bromide (CTAB) DNA extraction with modifications for Oxford Nanopore sequencing 90. The quality and concentration of HMW genomic DNA were checked using a Thermo Scientific™ NanoDrop™ Spectrophotometer, as well as by agarose gel electrophoresis following standard protocols. The genomic DNA obtained was further purified with a Qiagen® Genomic-Tip 500/G following the protocol provided by the developer. The purified genomic DNA sample was sequenced on the Oxford Nanopore Technologies (ONT) PromethION platform. We generated 60,136,770,518 bp of Nanopore reads with a read length N50 of 9382 bp and an average read quality score of 6.5. Raw ONT reads (fastq) of Syzygium grande were filtered prior to assembly using seqtk 91 such that only reads 35 kb or longer were used for genome assembly, which was performed using wtdbg2 30 version 2.2 with flags -p19 -AS2 -e2. The genome consensus was also generated with wtdbg2. Consensus correction was performed with the input ONT reads and three rounds of racon 92. The assembly generated was polished with Pilon 93 using 30 Gb of 2 × 150 paired Illumina HiSeqX reads of Syzygium grande that were trimmed and filtered. The assembly of Syzygium grande comprised 1669 contigs with an N50 length of 556,915 bp. The assembly was filtered for organellar and contaminating contigs using the blobtools 94 pipeline, resulting in the removal of 30 of the 1669 contigs. Next, Purge Haplotigs 95 was used to identify 744 contigs contributing to a diploid peak, which were then removed. These contigs comprised <40 Mb of the genome assembly. This filtered primary assembly was thereafter scaffolded into chromosomes by Dovetail HiC technology 31. The final scaffolded assembly size was 405,179,882 bp. Transcriptome assembly and annotation of the Syzygium grande genome Transcriptome assembly was carried out for three RNASeq libraries (S1: young leaves, S2: mature leaves, S3: twig tips; sequencing performed by NovogeneAIT) separately using an in-house custom assembly pipeline. The first step involved de novo assembly for multiple k-mer values (51, 61, 71, 81, 91, 101) using TransAbyss 96 v2.0.1, and for k-mer value 25 using Trinity 97 v2.8.5. The second step comprised genome-guided assembly using StringTie 98 v2.0. The input for this second step involved aligning the RNASeq reads against the reference genome using HISAT2 99 v2.1.0. The third step encompassed combining all the results from the first and second steps using EvidentialGene 100 v2018.06.18 to obtain a final high-confidence transcriptome assembly. S1 produced 57,746 transcripts (BUSCO completeness 92.9%), S2 produced 56,536 transcripts (BUSCO completeness 94.6%) and S3 produced 64,163 transcripts (BUSCO completeness 94.1%).
The genome annotation of the reference Syzygium grande genome was carried out using an in-house custom annotation pipeline. The first step involved the preparation of a de novo repeat library using RepeatModeler v1.0.11. This library was used to mask the repetitive regions in the genome assembly using RepeatMasker 101 v4.0.9, resulting in 45.09% of the genome being masked. The second step was gene prediction, based on a modular approach using three different gene predictors: GeneMark, BRAKER (using the three RNASeq libraries) and GeMoMa 102 (using gene models from the model species Arabidopsis thaliana [TAIR10] and Populus trichocarpa [v3.1]). Additionally, the spliced transcript aligner PASA 103 (using transcripts from the three RNASeq libraries) was used to generate evidence for gene structures. These results were then combined using the combiner tool EvidenceModeler 104 to produce a single high-confidence final prediction of 39,903 gene models with a BUSCO completeness score of 86.6%. A graphic workflow of these procedures is presented in Supplementary Fig. 136. Please see additional details in Supplementary Note 3. Genome structural analyses The chromosome-level Syzygium grande genome assembly and annotation were uploaded to the online CoGe comparative genomics platform ( ) 105. Syntenic dot plots and data for synonymous substitution rate (Ks) calculations were derived from CoGe SynMap 105 calculations using default settings, with CodeML set to “Calculate syntenic CDS pairs and colour dots: Synonymous (Ks) substitution rates”. Ks data were collected from the corresponding downloads at the “Results with synonymous/non-synonymous rate values” tabs. Each pairwise SynMap analysis (including self:self) was performed for the following species and CoGe genome IDs: Syzygium grande (id60239), Eucalyptus grandis (id28624), Punica granatum (id61248), Populus trichocarpa (id25127), Vitis vinifera (id19990). Syntenic dot plots from SynMap were further investigated for synteny relationships within and between species using the FractBias tool 106. FractBias mappings for fractionation profiles between species were generated using a Quota Align syntenic depth of 2:1 for Syzygium, Eucalyptus, and Punica against Vitis (analyses can be regenerated at , , and , respectively), max query chromosomes = 100, max target chromosomes = 25, and “Use all genes in target genome”. For Populus against Syzygium (which can be regenerated at ), mapping of the former assembly against the latter used a Quota Align syntenic depth of 2:2 and the same options as described above for depth 2:1. Density plots (both histogram and smoothed curve) of Ks values for syntenic paralogs were generated in R 107 using the tidyverse 108, ggplot2 109, RColorBrewer 110, ggridges 111, and ggpmisc 112 packages. Ks peaks were calibrated by their shared gamma hexaploidy event using the method described by Wang et al. 43. Illumina sequencing of Syzygium and outgroup individuals A total of 289 Syzygium individuals were selected to represent the six subgenera recognised by Craven and Biffin (2010) 17, across the genus’s natural distribution from Africa to the Indian subcontinent, through the Indomalaya region and into the Pacific. Three outgroup taxa in Myrtaceae, Metrosideros excelsa, M. nervulosa (tribe Metrosidereae) and Eugenia reinwardtiana (tribe Myrteae), were also sampled.
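A minimal sketch of the Ks density plotting step described above, assuming a data frame ks_df with columns ks (one value per syntenic gene pair, from the SynMap downloads) and species; the column names are illustrative rather than those of the original scripts.

  library(ggplot2)
  ggplot(ks_df, aes(x = ks)) +
    geom_histogram(aes(y = after_stat(density), fill = species),
                   bins = 100, alpha = 0.4, position = "identity") +  # histogram layer
    geom_density(aes(colour = species)) +                             # smoothed curve layer
    coord_cartesian(xlim = c(0, 3)) +
    labs(x = "Synonymous substitutions per site (Ks)", y = "Density")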
Most of the 292 samples used in this study were freshly collected in the field, utilising the silica gel teabag method for preserving plant DNA 113, between 2017 and 2019, either from collecting expeditions conducted in Singapore, Australia, Brunei, Indonesia (West Papua and Papua provinces) and Malaysia or from cultivated specimens in the Singapore Botanic Gardens (Singapore), Bogor Botanical Garden (Bogor, Indonesia), Cairns Botanic Gardens (Queensland, Australia) and Royal Botanic Gardens, Kew (UK). Approximately 20 mg of silica-dried leaf tissue was sampled for genomic sequencing. Plant tissue was ground to a fine powder using an Omni International Bead Ruptor homogeniser. DNA isolation was carried out at the molecular lab of the Singapore Botanic Gardens using the Qiagen DNeasy® Plant Mini Kit, following the protocol provided by the manufacturer. In rare cases, DNA yields obtained from the Qiagen DNeasy® Plant Mini Kit were low; for these problematic samples, the Qiagen DNeasy® Plant Maxi Kit was used instead. Quality and concentration of DNA aliquots were checked using a Thermo Scientific™ NanoDrop™ Spectrophotometer before submission to NovogeneAIT (Singapore) for QC, library construction and sequencing of 30 Gb each (150 × 150 paired ends) on an Illumina HiSeqX. Assembly, BUSCO QC, and species tree phylogeny of the resequenced Syzygium and outgroup accessions The 292 Illumina resequenced accessions were assembled using MaSuRCA 33 v3.3.1 with a library insert average length of 350 bp and a standard deviation of 100 bp. Genome completeness percentages were estimated using BUSCO v4.0.2 based on the eudicots_odb10 database. The phylogeny for the Syzygium and outgroup species was estimated using the BUSCO genes. Two species tree versions were estimated. The first tree was estimated using 229 BUSCO genes that were complete and found in all 292 species. The second tree was estimated using 1227 BUSCO genes that were present in at least 286 species. The species tree was constructed using an in-house phylogeny pipeline. The first step involved extraction of BUSCO genes from all resequenced individuals, generating a multi-fasta file for each BUSCO gene containing a representation of that gene from the available species. The second step involved performing multiple sequence alignment (MSA) for each BUSCO multi-fasta file using MAFFT 114 v7.407. The resulting MSA files were used to generate gene trees using RAxML 47 v8.2.12. These gene trees were concatenated and sent as a single input to ASTRAL 48 v5.15.1 to generate the final species tree. Mapping the resequenced individuals to the Syzygium grande reference genome The 30 Gb of raw Illumina reads per accession were trimmed to remove adapters using default settings of Trimmomatic 115 version 0.38. Following trimming, the samples were mapped using bwa mem 116 (version 0.7.17), and the resulting bam files were filtered for a quality score of 20 using samtools 117 view and sorted using samtools sort. Picard MarkDuplicates (version 2.7.1; ) was used to remove PCR duplicates from the mapped reads. Depth and width of mapping coverage were calculated using BEDTools 118 version 2.23.0. SNP calling and statistics SNP calling was performed using GATK version 3.8 in ERC mode for each sample. GenotypeGVCFs was used to call joint genotypes; due to RAM and time limitations, this step was split into 70 intervals using the -L flag. To combine the 70 files, GatherVcfs was used to generate a single VCF file.
As a quality control, GATK VariantFiltration was used with the following filter expression based on GATK recommendations: ‘QD < 2.0 || FS > 60.0 || MQ < 50.0 || MQRankSum < −12.5 || ReadPosRankSum < −8.0 || SOR > 4.0’. Further filtrations were carried out in VCFtools 119 (version 0.1.13) to create various datasets for downstream analyses (Supplementary Table 2). The --plink flag was applied to generate .ped and .map files, and --recode was used to generate a filtered vcf file. SNP statistics were calculated through the VCFtools options --het and --singletons for dataset FRSA-1. The output was subsequently plotted using the R package ggplot2 ( ). RAxML SNP tree A pseudo-alignment of SNPs was generated for phylogenetic reconstruction for dataset FRSA-1. The plink.ped file was converted into fasta input for RAxML. Only variable SNPs were retained, for a total of 2,384,277 SNPs. A maximum likelihood tree was generated using RAxML version 8, including adjustments for ascertainment bias (--asc-corr lewis) and 500 bootstraps. Trees were viewed and edited using FigTree 120. Plastome assembly and phylogenetic tree We filtered and removed nuclear reads from the Syzygium grande Nanopore assembly and constructed a complete chloroplast genome of 158,980 bp in length. This genome was used as a reference to examine phylogenetic relationships of Syzygium based on the plastome. Before mapping the Illumina reads of the 289 Syzygium individuals and three outgroup taxa (two Metrosideros and one Eugenia) to the reference, one of the Inverted Repeat regions (IRs) was removed to prevent sequence calling bias. The combined DNA alignment file of the 292 individuals was then subjected to an ML analysis using RAxML with 1000 bootstrap replicates. NeighborNet analysis We used the NeighborNet 50,121 approach to assess incongruence within our SNP data set for Syzygium subg. Syzygium and S. rugosum as an outgroup taxon (FRSA-5). We used SplitsTree 122 version 4.17.0 to calculate the network with LogDet 123 distances. ADMIXTURE analysis We ran ADMIXTURE 53 (version 1.3) for K values 5–15 and used the --cv option to find the best (lowest) K value for the number of ancestral populations (Supplementary Figs. 10 and 11). The results were then plotted using the barplot function in R. Local PCA Single nucleotide polymorphisms (SNPs) called against the draft assembly were transferred to the Hi-C scaffolded assembly through the use of Minimap2 124, transanno ( ), and LiftoverVcf 125. The BED file needed to remove SNPs from repeat regions was generated using convert2bed 126. SNPs from repeat regions were removed using the VCFtools --exclude-bed option 119. The VCF file was divided by the 11 pseudomolecules using HTSlib 127, and converted to BCF format and indexed using BCFtools 128. Local PCA was carried out using the R 107 lostruct package 58, and the window size was chosen to include about 1000 SNPs per window, as recommended by the authors. PCA The eigensoft 129 package (version 6.1.3) was used to convert plink.map and .ped files into .ind, .geno and .snp files. Thereafter, the smartpca.perl script was used to run PCA for PC1 to PC10 under default parameters for datasets FRSA-1 (all Syzygium) and FRSA-3 (Syzygium subg. Syzygium). Taxa that were removed by the smartpca default five rounds of outlier removal are shown in red (Supplementary Fig. 101). In addition, three separate PCA checks were performed to confirm that clinal results were not artefactual.
PCA was run with SNPs using (i) a more stringent minimum depth of coverage of 20 (Supplementary Figs. 112–116), (ii) homozygous sites only (Supplementary Figs. 117–121), and (iii) LD correction turned on using the nsnpldregress option in the smartpca programme to control for linkage (Supplementary Figs. 107–111). In order to search for possible correlations, the PCA was coloured in numerous ways: by geography, by ecoplots, and also according to ADMIXTURE K = 14 ancestral groups (Supplementary Fig. 11), using the scatterpie package in R. f3 statistics Dataset FRSA-5 was used to formally test for admixture using the f3 56 statistic implemented in the qp3pop (version 650) function of the AdmixTools package ( ). A total of 7,195,530 SNPs were used to test 388,944 triplets, every possible combination within the Syzygium grande group. We then applied an FDR correction to the Z-scores using a custom R function developed as part of the silver birch genome project 130. Next, heatmaps were plotted in R for each target. PSMC We used PSMC 131 to infer past demographies for most members of the Syzygium grande group. To accomplish this, we mapped trimmed reads for each sample to its de novo MaSuRCA 33 assembled genome, using the same mapping parameters used for S. grande reference mapping. Consensus sequences were called using samtools 117 mpileup to generate diploid sequences for input to PSMC, with all parameters set to default. Demographic curves were subsequently plotted using the psmc_plot.pl script. As a first quality control, we only included samples with de novo assemblies that had an N50 > 10,000 basepairs and a BUSCO completeness score >80%, which excluded an additional eight samples. Following plotting, for clarity, we removed an additional six samples that deviated strongly from the general trends of the clade. The values of sum_n, the number of segregating sites, were extracted from the .psmc files for each sample. The maximum coalescence date and its corresponding Ne values were extracted from the plot files. Since strict filtering of SNPs lowers heterozygosity drastically, the dataset FRSA-GATK was used to calculate heterozygosity using the --het option in VCFtools. To plot the correlation matrix, the pairs.panels function from the R package psych was used. The parameter lm was set to true to display the linear regression fit, and ci was set to true to display confidence intervals. To display R-squared values, the source code was edited. Stars were assigned to the following categories based on p-value significance (<0.001***, <0.01**, and <0.05*). Biogeography An ultrametric dated SNP phylogeny was generated both for phylogenetic dating and biogeographic reconstruction. The SNP tree was employed since ASTRAL species tree branch lengths are not interpretable for ultrametric conversion (see the comment from the ASTRAL developer on GitHub: ). The function chronos() in the R package ape 132 (version 3.5.2) was used to create the ultrametric tree. The model used was correlated, and the calibration applied a minimum of 20,900,000 and a maximum of 22,100,000 years at the node at which Syzygium subg. Acmena and S. subg. Syzygium share a common ancestor. This calibration is based on a Syzygium fossil (S. christophelii) found in New South Wales, Australia 76. The ultrametric tree was also used as input to RASP 74 and BioGeoBEARS 73 to search for and run the best-fitting model (BAYAREALIKE + J in both cases).
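A hedged R sketch of the chronos() dating step just described; the input file name and tip labels are placeholders, while the clock model and the 20.9–22.1 Myr calibration follow the Methods.

  library(ape)
  tr <- read.tree("snp_tree.nwk")   # hypothetical file holding the RAxML SNP tree
  # node calibrated: common ancestor of S. subg. Acmena and S. subg. Syzygium,
  # located here via two placeholder tip labels, one from each subgenus
  cal_node <- getMRCA(tr, c("Syzygium_acmena_tip", "Syzygium_grande"))
  cal <- makeChronosCalib(tr, node = cal_node, age.min = 20.9, age.max = 22.1)  # ages in Myr
  ultra <- chronos(tr, model = "correlated", calibration = cal)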
Each of the 292 samples was assigned to one or more of eight geographic regions (Africa, India, Mainland Asia, Sunda, Sahul, Wallacea, Zealandia, and Pacific islands) based on their distribution patterns, and RASP and BioGeoBEARS were both restricted to a maximum of six areas at each node. Character evolution with Mesquite States for three morphological characters—specifically (i) inflorescence habit (erect vs. pendent), (ii) shedding fused corolla present as a true calyptra, a pseudocalyptra, vs. corolla free at anthesis, and (iii) mature fruit colour (green, white or cream, black, pink, purple, red, brown, orange, yellow, blue, or grey)—were gathered from living material, herbarium specimens, published flora accounts, and species protologues. The categorical morphological characters were coded into the form of numbers 0–9 and/or letters a–z. The ASTRAL tree of BUSCO genes, and selected traits, were loaded into Mesquite 83 version 3.61 and the Trace Character Evolution option with parsimony was selected to predict ancestral states. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The genome data generated in this study has been deposited in the NCBI database under accession code PRJNA803434 and BioSample ID SAMN29207412 . The Syzygium grande genome assembly and annotation are also available on CoGe [ ]. Processed data generated in this study and used for main text figures are provided in source data files. Additional processed data are available at Dryad [ ]. Source data are provided with this paper.
An international study that analyzed the world's largest tree group has made breakthrough findings. The results are expected to guide future conservation of tropical and subtropical rain forests as well as help with predicting how certain plants will respond to climate change. More than 60 researchers explored the evolution and speciation patterns of the tree group Syzygium, which includes the tree that gives us the spice clove as well as numerous fruits. The study was led by the Singapore Botanic Gardens of Singapore's National Parks Board in collaboration with 26 international research institutions including the University of Aberdeen, Royal Botanic Gardens, Kew, Nanyang Technological University and the University at Buffalo. Published in Nature Communications, the research is the most extensive study of any Syzygium group of trees to date and was carried out over two years using samples from some 300 species growing in Africa, Sri Lanka, Malaysia, Singapore, Indonesia, Japan, Australia and the Pacific Islands. Trees growing in tropical areas are understood to be among the most valuable in the world for protecting biodiversity and mitigating global warming, but they are also among the most threatened because of commercial needs and use of the land for farming. Syzygium is native and widespread in tropical and subtropical rain forests, and studying the origins and drivers of this large, hyperdiverse tree group contributes to the understanding of how plant species have emerged in the past in response to environmental changes. This knowledge is valuable for predicting how plants might respond to ecological changes brought about by climate change and will guide conservation and management efforts for plant communities. Syzygium species may be found growing together with other trees within the understory and canopy layers of forests. Because of their large diversity, these species play an outsized role in the functioning of forest ecosystems. Many Syzygium species are also cultivated in tropical countries for different types of spices as well as their large edible fruits. Understanding how Syzygium species have evolved will help to advance knowledge of the highly complex species-environment relationships in forest ecosystems and anticipate forest ecosystem changes in response to climate change. The University of Aberdeen's Interdisciplinary Director for Environment and Biodiversity, Professor David Burslem, says that "tropical forests are under severe threat from conversion, industrial logging and climate change. The new results on the origins and biodiversity of Syzygium, an important group of tropical trees containing many species of commercial importance for timber and fruit production, provide the raw material for devising strategies for species conservation and restoration." "This new paper will serve as a benchmark for future studies combining genomic analyses with extensive datasets on species distributions and satellite-derived environmental sensing to finally understand the mechanisms that drive patterns of tropical forest biodiversity." "It was a privilege for the University of Aberdeen to work closely with experts from Singapore Botanic Gardens as well as many other prestigious international organizations on this vitally important project, which is one of the first widely sampled, genome-scale plant evolutionary studies of a single group to be published." Dr. 
David Middleton, coordinating director at Singapore Botanic Gardens, says that "Southeast Asia is a region of exceptionally rich species diversity. The Singapore Botanic Gardens has contributed to the study of plant diversity in the region since its founding in 1859, as part of the Gardens' core roles in research, conservation and education." "The enormous genus Syzygium, the species of which are mainly understory trees, has long been neglected in comparison to the iconic forest giants and plant groups of more immediate economic interest. However, their role in the diversity and functioning of our forests must be better understood if we are to succeed in our conservation goals." "In partnership with our collaborators at home and abroad, we have begun to understand what drives such exceptional species diversity in tropical Southeast Asia and can make better informed decisions on how to conserve this diversity. This strengthens the science on conservation in the region and contributes towards Singapore's City in Nature vision."
10.1038/s41467-022-32637-x
Earth
Scientists unravel history of lost harbour of Pisa
D. Kaniewski et al. Holocene evolution of Portus Pisanus, the lost harbour of Pisa, Scientific Reports (2018). DOI: 10.1038/s41598-018-29890-w Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-29890-w
https://phys.org/news/2018-08-scientists-unravel-history-lost-harbour.html
Abstract The ancient harbour of Pisa, Portus Pisanus , was one of Italy’s most influential seaports for many centuries. Nonetheless, very little is known about its oldest harbour and the relationships between environmental evolution and the main stages of harbour history. The port complex that ensured Pisa’s position as an economic and maritime power progressively shifted westwards by coastal progradation, before the maritime port of Livorno was built in the late 16 th century AD. The lost port is, however, described in the early 5 th century AD as being “a large, naturally sheltered embayment” that hosted merchant vessels, suggesting an important maritime structure with significant artificial infrastructure to reach the city. Despite its importance, the geographical location of the harbour complex remains controversial and its environmental evolution is unclear. To fill this knowledge gap and furnish accurate palaeoenvironmental information on Portus Pisanus , we used bio- and geosciences. Based on stratigraphic data, the area’s relative sea-level history, and long-term environmental dynamics, we established that at ~200 BC, a naturally protected lagoon developed and hosted Portus Pisanus until the 5 th century AD. The decline of the protected lagoon started at ~1350 AD and culminated ~1500 AD, after which time the basin was a coastal lake. Introduction While Italy’s rich maritime history has sharpened focus on ancient harbours and human impacts in port basins (e.g. Altinum -Venice, Portus Lunae -Luni, Portus -Rome, Ostia -Rome, Neapolis -Naples) 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , the evolution of Portus Pisanus , the powerful seaport of the city of Pisa (Italy, Tuscany), was, until recently, largely unknown 9 , 10 , 11 , 12 , 13 , 14 , 15 . The city is currently located ~10 km east of the Ligurian Sea coast but its long and complex history 16 , 17 is closely linked to its port, named Portus Pisanus by late Roman sources (Rutilius Namatianus descriptions in de reditu suo , Itinerarium Maritimum 501 ) 15 , 18 , 19 , 20 . Since the Imperial Roman period, most of the supplies imported from Mediterranean provinces reached the city of Pisa through a complex harbour system that included Portus Pisanus , the fluvial Ports San Piero a Grado and Isola di Migliarino, and other minor landings along the ancient coast, the Arno and Serchio rivers and canals including the site of Pisa-Stazione Ferroviaria San Rossore 21 , 22 , 23 , 24 , 25 , 26 . The heyday of the seaport played out from the Late Republican Roman period to the Middle Ages, when Pisa became an influential Commune and maritime power 16 , 18 , 27 . In the Middle Ages, Portus Pisanus was the main harbour of Pisa: the harbour system included Vada, the ports of the Piombino promontory, Castiglione della Pescaia and other minor landings 27 . During the 13 th century AD, it was one of the most important seaports in Italy, rivaling Genoa, Venice, and Amalfi 16 , 28 , 29 , 30 . The name Portus Pisanus , documented since the 5 th –6 th centuries AD, was probably in use earlier. In 56 BC, Cicero mentioned a harbour ( Ad Quintum fratrem 2, 5), Portus Labro , probably located in the same area. Is Portus Labro an earlier name for Portus Pisanus (as might be suggested by the current hydronym Calambrone, situated north of Livorno) or a different harbour? In the Middle Ages, the name Portus Pisanus was used to identify both the harbour and a large part of the Livorno hinterland. 
Between the second half of the 14 th century AD and the mid-15 th century AD, the names Portus Pisanus and Porto di Livorno coexisted. While the appellation Porto di Livorno became prevalent, part of the harbour basin was still named Portus Pisanus . The name Porto di Livorno has prevailed since the late 1500 s AD. The exact location of Portus Pisanus has long been discussed, despite several Roman literary sources (e.g. Itinerarium Maritimum , de reditu suo ) and studies 10 , 31 , 32 that placed the harbour ~10–13 km south of the city of Pisa, in an area which presently corresponds to the north-eastern edge of the port city Livorno, and despite an accurate description of the area (Santo Stefano ai Lupi) published by Targioni Tozzetti in 1775 33 with an associated map. Recent archaeological excavations at Santo Stefano ai Lupi corroborate both the classical sources and Targioni Tozzetti’s description with the discovery of portions of sea bed covered by fragments of ancient pottery (dated to the 6 th –5 th centuries B.C. and to a period between the 1 st century BC and the 6 th century AD), ballast stones, part of a small stone dock, some buildings including a warehouse, and a necropolis dated to the 4 th –5 th centuries AD 14 , 34 , 35 . These structures belong to Portus Pisanus’ harbour system, but just a small area of the ancient port city and the adjacent basin has been excavated. The pottery fragments found on the ancient sea bed prove that, in this part of what was called Portus Pisanus , boats and ships were loaded and unloaded. The associated harbour basin is believed to be a large, naturally sheltered embayment that accommodated ships from the Etruscan and Roman periods to the Late Medieval Ages, when Pisa grew into a very important commercial and naval center, controlling a significant merchant fleet and navy 35 . The existence of a highly protected natural basin would have been of great benefit to navigation and trade, facilitating the establishment of port complexes, through the use of the natural landscape. Historical sources or archaeological excavations have not unearthed any artificial moles at Portus Pisanus , suggesting a very confined harbour that was naturally protected by its geomorphological endowments 20 . Here, we use bio- and geosciences to probe the long maritime history of the seaport of Pisa. We present a 10,500-year Relative Sea-Level (RSL) reconstruction to understand the role of long-term sea-level rise in shaping the harbour basin. We also report an 8000-year reconstruction of environmental dynamics in and around the harbour basin to establish when Portus Pisanus became a “naturally sheltered embayment”, reported in literary sources as the main hub of the ancient Pisa harbour system. Finally, we map the location and evolution of the harbour basin since the Roman period. These environmental data were compared and contrasted with the history of Pisa (written sources and archaeological data). Results RSL reconstruction The harbour basin is the outcome of an environmental history in which long-term sea-level rise has played a key role. A total of 31 RSL index points has been used to frame the Holocene sea-level evolution of the eastern Ligurian Sea (Fig. 1 ). At ~8550 BC, the oldest index points place the RSL at ~35 m below current Mean Sea Level (MSL). Younger index points indicate that RSL rose rapidly until ~5050 BC followed by a slowdown in the rising rates. At ~4050 BC, multiple RSL index points constrain the RSL to ~−5 m MSL. 
Since this period, index points delineate a significant reduction in the rising rates, which became minimal during the last ~4000 years, when the total RSL variation was within 1.5 m MSL. The reconstructed RSL history reflects the pattern observed in the mid to northern portion of the western Mediterranean 36, 37. The significant slowdown in rising rates after ~5550 BC is consistent with the final phase of North American deglaciation 38, while the further decrease in rising rates is related to the progressive reduction in glacial meltwater inputs, which were minimal during the last ~4000 years 39, 40. Figure 1 Relative sea-level reconstruction of the eastern Ligurian Sea. The RSL history is based on 31 index points deriving from lagoonal sediment archives of the Arno and Versilia coastal plains and fossil Lithophyllum byssoides rims from Northern Corsica. The blue boxes represent index points from lagoons and salt marshes. The black boxes are L. byssoides-derived index points. The dimensions of the boxes denote the 2σ altitudinal and chronological errors associated with each index point. The map is an original document drawn using Adobe Illustrator CS5. Drilling the harbour basin The terrestrial, marine and freshwater biological indicators used to reconstruct the palaeoenvironmental evolution of Portus Pisanus were extracted from an 890-cm sedimentary core (PP3, 43°35′55.33″N, 10°21′41.71″E; +2 m MSL; Fig. 2) drilled ~5 km from the sea on the southern portion of the modern Arno River alluvial-coastal plain, close to Livorno and the Pisa Hills. During ancient times, this part of the alluvial plain, located ~10 km south of Pisa, was fed by a former branch of the Arno River 41, here reported as the Calambrone River. According to previous studies 15, 18, 20, the sedimentary core was taken from the former harbour area, active from the Archaic Etruscan (6th–5th centuries BC) to the Late Roman periods (5th century AD). In the Middle Ages, the port shifted westwards, as testified by the building of the fortified harbour basin of Leghorn, including medieval towers dated to between 1300 and 1400 AD (Fig. 2), and then again further westwards in modern times, consistent with the progradation dynamics of the delta. The core was recovered behind the innermost beach-ridge of the Arno Delta complex, where a meter-thick back-barrier succession occurs, recording the development and prolonged persistence of a wide lagoon basin during the Late Holocene. Back-barrier deposits, mainly represented by fossiliferous clay-silt interbedded with sandy layers, are overlain by a progradational suite of coastal-alluvial facies, deposited during the recent phase of decelerating sea-level rise. Figure 2 Study area and location of the archaeological site. (A) Geomorphological map of the study area (modified from CARG Regione Toscana). 1 current beach; 2 shallow swale; 3 wetland; 4 beach ridge, superimposed dunes; 5 alluvial plain; 6 residual relief; 7 Livorno urban/industrial area; 8 mountains and hills; dashed line: 17th century AD coastline; dotted line: 12th century AD coastline; arrow: current drift. (B) Photograph of the archaeological site. The chronology of the core PP3 is based on nine accelerator mass spectrometry radiocarbon (14C) dates (Fig. 3). Dated samples (short-lived terrestrial samples: seeds, small leaves of annual plants) were calibrated [2-sigma (2σ) calibrations, 95% probability] using Calib-Rev 7.1 with IntCal13. 
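The authors calibrated with Calib-Rev 7.1; purely as an illustration, an equivalent 2-sigma calibration can be sketched in R with the rcarbon package (our substitution, not the authors' tool). The radiocarbon ages below are invented examples, not determinations from core PP3.

# Hedged sketch of IntCal13 calibration with rcarbon; ages are invented.
library(rcarbon)

dates <- calibrate(x = c(3650, 2120),      # uncalibrated 14C ages BP (toy values)
                   errors = c(30, 25),     # 1-sigma measurement errors
                   calCurves = "intcal13")
summary(dates)    # reports the calibrated 95% (2-sigma) ranges
plot(dates)       # probability distribution of the first date by default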
According to the 14C chronology, the core covers the last 8000 years (Fig. 3). The age model (Fig. 3) was calculated using Xl-Stat 2017 and Calib-Rev 7.1. Dates falling between the 14C determinations are modelled and are therefore liable to mask some of the temporal variability in the depositional patterns. The calculated model displays an average 2-sigma range of 50 years (P < 0.001) for the whole sequence. All the calibrated ages are shown/discussed as BC/AD to fit with the archaeological-historical data and are presented at the 2-sigma range (95% probability). The two scales, BC-AD and BP, are both displayed on each figure. While the average chronological resolution of the core stratigraphy is 9 years per cm (1.11 mm per year), a homogeneity test (Monte Carlo simulation, standard test, P value < 0.001) suggests two abrupt changes in the sedimentation rate at 740 cm depth (4300 ± 70 BC) and 290 cm depth (200 ± 30 BC). Figure 3 Details of Portus Pisanus basin in North Tuscany, Italy. The lithology of the core in the basin area, with the influence of marine components, is reported according to depth. The main sedimentary environments are plotted on a linear depth-scale. The radiocarbon dates are depicted as 2σ calibrations (95% probability). The age model is superimposed on the 2σ calibration curve, and a linear model was also added showing a theoretical continuous sedimentation rate. The timescale is shown as BP and BC-AD. Biological proxies in the harbour basin Terrestrial data retrieved from Portus Pisanus' harbour basin were analysed using a cluster analysis (descending type). Each cluster was summed to generate pollen-derived vegetation patterns and assigned to a potential location, from the intertidal zone to the hinterland (Fig. 4), referring to modern patches of vegetation along the Ligurian coast (local data and Vegetation Prodrome 42). To ascertain the ordination of terrestrial data according to the "sea" factor, a second cluster analysis (descending type; Fig. 5A) was calculated using the vegetation patterns and the marine proxies [dinoflagellate cysts and marine components (foraminifera, marine bivalves, debris of Posidonia oceanica)]. Three vegetation communities (backshore scrubs, shrubland, and coastal pine-oak woodland) are linked to a marine influence (from the supratidal zone to the lower coastal zone; Figs 4–5A) whereas the other communities (mixed oak forest, wet meadow, fen trees, freshwater plants) are related to fluvial inputs from the Calambrone River (coastal alluvial zone and marsh-swamp zone). Cross-correlations (vegetation patterns versus marine proxies; Fig. 5B) also indicate that seawater has influenced proximal vegetation patterns (positive correlations at the null lag: Lag 0 = 0.665, Lag 0 = 0.547 and Lag 0 = 0.307, P = 0.05) around the basin. The vegetation group "warm woodland" is set apart as this cluster is mainly related to a third influence, agro-pastoral activities (Fig. 5A), which are observed in the area after 3350 ± 90 BC. The agricultural signal comprises cereals (Poaceae cerealia), olive trees (Olea europaea), common grape vines (Vitis vinifera), and other trees (Prunus). The associated anthropogenic indicators are common weeds (Centaurea, Plantago and Rumex). Figure 4 Pollen-based ecological clusters from the harbour basin for the last 8000 years. A cluster analysis (paired group as algorithm, Rho as similarity measure) was used to define the ecological assemblages. 
Each cluster was summed to create pollen-derived vegetation patterns. The potential location of each cluster, from the intertidal zone to the hinterland, is indicated. Figure 5 Environmental-based clusters from the harbour basin for the last 8000 years. (A) A cluster analysis (paired group as algorithm, Correlation as similarity measure) was used to define the environmental assemblages (marine versus fluvial influence). (B) The two cross-correlograms (P = 0.05) depict the marine influence on ecosystems around the basin. Pre-harbour facies (sensu Marriner and Morhange) Marine versus fluvial influence over the last 8000 years is represented by the importance of marine indicators (dinoflagellate cysts and marine components) and supratidal-intertidal scrubs in and around the pre-harbour (Fig. 6). A principal components analysis (PCA) was run to test the ordination of ecosystems by assessing major changes in the area, including vegetation patterns, dinoflagellate cysts and marine components (Fig. 7). Environmental dynamics (marine versus fluvial influence) in the basin are indicated by axis 1 of the principal components analysis (PCA-Axis1). The PCA-Axis1 (61% of total variance) is positively loaded by vegetation patterns indicative of a saline-xeric environment, dinoflagellate cysts and marine components. The negative scores correspond to freshwater vegetation types (Fig. 7). The PCA-Axis1 reflects the ecological erosion of wetlands by the intrusion of seawater into the freshwater-fed plains, raising salinity in the hinterland, with land fragmentation and salt-water intrusion into the groundwater table, in and around the basin. The PCA-Axis1 can therefore be considered a proxy for marine ingression in/around the pre-harbour (with a main physical impact and several secondary influences such as salt spray and salinization) 43. Figure 6 Reconstructed marine influence in Portus Pisanus during the last 8000 years. The marine influence (components cm−3) and the backshore scrubs (%) are displayed as a LOESS smoothing (with bootstrap and smoothing 0.05) plotted on a linear timescale (BP and BC-AD). The Posidonia oceanica debris (presence/absence) is indicated by green marks along the marine curve. The ostracods (presence/absence) are displayed as dots. Fire activity in the area is shown as charcoal concentrations (fragments cm−3) plotted on a linear timescale. The relative sea level, displayed as MSL, is depicted for the last 8000 years. The shipwreck curve from the Mediterranean region is also plotted on a linear timescale 76. The brown-shaded horizontal section indicates the harbour development and the blue-shaded horizontal section shows the period when the sea level stabilized. Figure 7 Reconstructed environmental dynamics in Portus Pisanus during the last 8000 years. The PCA-Axis1, plotted on a linear age scale (BP and BC-AD), is displayed as a LOESS smoothing (with bootstrap and smoothing 0.05) and a matrix plot. A boxplot was added to mark the extreme scores. The loading of each cluster is indicated below the PCA-Axis1 curve. The agro-pastoral activities (agriculture and weeds, %) are plotted on a linear age scale, and also displayed as a matrix plot. The brown-shaded horizontal section indicates the harbour phase and the blue-shaded horizontal section shows the period when sea level stabilized. 
While permanent inputs of seawater and fluvial freshwater were recorded in the pre-harbour, occurrences of higher marine influence are locally underlined by a combination of peaks in marine indicators, the occurrence of different ostracod ecological groups (shallow marine; brackish-marine and euryhaline), important increases in Posidonia oceanica debris (Fig. 6), and strong positive deviations in the PCA-Axis1 scores (Fig. 7). Our reconstruction shows two early periods of seawater influence (at 5800 ± 40–5425 ± 55 BC and 4750 ± 60–4500 ± 60 BC) when the site was a marine invagination, followed by an unstable phase when sea level stabilized along the Tuscany coastline (Fig. 1), between 4250 ± 60 BC and 2000 ± 45 BC (Fig. 8). Potential discontinuities in environmental dynamics were assessed using a homogeneity test on the PCA-Axis1 (Monte Carlo simulation, Pettitt and Buishand tests). The outcome indicates that the environmental dynamics are not uniform, underlining a major break around 2000 ± 45 BC (P value < 0.001). A second homogeneity test, only applied to the period 6050–3350 BC, highlights a second important break around 4250 ± 60 BC (P value < 0.001, Monte Carlo simulation, Pettitt and Buishand tests), when the pre-harbour evolved into a leaky lagoon. During this period (4250 ± 60–2000 ± 45 BC), the last main peak in the PCA-Axis1 corresponds to a wave-dominated delta and occurred within the chronological interval of the 4.2 ka BP event 44, suggesting the potential role of climate in influencing the pre-harbour's evolution. A later phase was recorded at 1250 ± 40–850 ± 40 BC, when the pre-harbour evolved into a delta plain, corresponding to the 3.2 ka BP event 45, 46. Figure 8 Geographic maps showing three evolutionary stages of the lagoon, in relation to Portus Pisanus as documented by historical sources and archaeological data. Maps were produced by integrating stratigraphic (cores and trenches) and geomorphological data (Pranzini, 2007) with historical cartography. Dots indicate cores used to draw the maps (key cores are highlighted by red dots). The modern shoreline is depicted on each map for reference. The grey arrow indicates the direction of the predominant wind (Libeccio). (A) Roman period - a wide lagoon basin, hosting Portus Pisanus as mentioned in literary sources; (B) late Middle Ages - the accretion of arcuate beach ridges, belonging to the Arno Delta strandplain, led to an increase in the degree of confinement of the lagoon basin. Construction of the maritime harbour of Livorno in a seaward position with respect to the lagoon; (C) 17th century AD - the rapid accretion of strongly arcuate sets of beach ridges led to the siltation of the lagoon, which was transformed into a wetland, physically detached from the Ligurian Sea. Portus Pisanus abandonment and expansion of the fortified maritime harbour of Livorno. Among the main periods of marine influence in the harbour basin, before the establishment of Portus Pisanus, the period 4250 ± 60–2050 ± 45 BC was the hinge phase, culminating in a wave-dominated delta. These periods, characterized by inland salt intrusion, significantly affected the agricultural productivity of this coastal area (Fig. 7). Prolonged marine inundation appears to have led to the salinization of agriculturally productive soils, resulting in diminished output for long periods of time (Fig. 7). 
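The change-point (homogeneity) testing described above can be illustrated with the Pettitt test from the R package trend; the series below is a synthetic stand-in for the interpolated PCA-Axis1 scores, so the detected break is purely illustrative.

# Toy Pettitt change-point test on a stand-in for the PCA-Axis1 series.
library(trend)

set.seed(1)
pca_axis1 <- c(rnorm(60, mean = 0), rnorm(60, mean = 1.5))  # series with one break

pt <- pettitt.test(pca_axis1)
pt$estimate   # index of the most probable change point (~61 here)
pt$p.value    # significance of the detected break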
Portus Pisanus Because the area was frequented by archaic ships from at least the 6th–5th centuries BC, the first phases of navigation must have taken place in a delta plain characterized by wetlands 34, 35. According to our reconstruction (Figs 6–7), marine influence increased in the harbour basin after 200 ± 30 BC, when a naturally protected lagoon developed and hosted Portus Pisanus up to the 5th century AD (according to archaeological evidence 14, 34, 35). During this period, a first peak in charcoal fragments is recorded at 180 ± 30 BC. A second inflection in charcoal fragments occurred at 550 ± 25 AD and is synchronous with a major fall in agro-pastoral activities (Fig. 7). From 1000 ± 20 to 1200 ± 20 AD, a first decrease in marine influence was documented before the last major marine phase at 1300 ± 20 AD. The decline of the protected lagoon started at 1350 ± 15 AD and ended at 1500 ± 10 AD, when the basin evolved into a coastal lake, concomitant with the development of agriculture, then into a floodplain (1700 ± 10 AD) and finally a soil atop a fluvial plain (1850 ± 5 AD). Stratigraphic data from several cores (Fig. 8) and archaeological trenches (Fig. 9) undertaken at Santo Stefano ai Lupi (Fig. 2), along with prominent geomorphological features (e.g. outcropping beach ridges and residual reliefs), were used to produce three maps, which illustrate the landscape evolution of the southern portion of the Arno alluvial-coastal plain between the Roman period and the Modern age (Fig. 8). The extension and the environmental characteristics of Portus Pisanus are based on facies correlations (chronological framework based on 14C dates and archaeological data). The core PP3 (and also PP1) formed the type stratigraphy of the basin (Fig. 8). Figure 9 Stratigraphic trench from the archaeological site of Santo Stefano ai Lupi. Representative photograph depicting the stratigraphy of the archaeological trench excavated at the Santo Stefano ai Lupi site (see Fig. 2 for location). The lower portion of the section consists of shallow-marine sands rich in Posidonia fibers, mollusc shells/fragments and ceramics of Roman age. These sands grade abruptly upwards into fine-grained alluvial deposits. Depths are displayed relative to MSL. Discussion Recent archaeological and palaeoenvironmental studies at Pisa have focused both on Portus Pisanus 14, 15 and the ancient fluvial port in the area of Pisa-Stazione Ferroviaria San Rossore, where several ships dating from the 2nd century BC to the 5th century AD were discovered 21, 22, 23, 24, 25. These exceptional findings (amphorae, artefacts pertaining to the loads carried, and on-board equipment such as sails, ropes, anchors) engendered a wider environmental study focused on the analysis of Arno flood events in relation to the shipwrecks 26, 47 and on archaeobotany 5. Although the study of the Pisa-Stazione Ferroviaria San Rossore fluvial port is imperative for improving our knowledge of both the Pisa territory and its commercial activities, Portus Pisanus was the city's main harbour, centered on Mediterranean-wide trade. This port witnessed the rise and fall of Pisa. The basin that hosted the seaport recorded the long environmental history of the coast before turning into a protected lagoon and becoming one of the most important natural harbours in the western Mediterranean. 
Before the environment becomes a harbour basin Portus Pisanus and the related lagoon are the result of a long and complex environmental history (Figs 6–7), leading to the development of a lagoon characterized by a narrow entrance (Fig. 8). The base of this ancient lagoon corresponds to a long and narrow marine invagination (inlet channel) that developed to the south of the city of Pisa from 6000 ± 35 to 4300 ± 70 BC. The formation of the channel resulted from sea-level rise from −10 to −4.5 meters below MSL (Fig. 1), with two phases of stronger marine influence at 5800 ± 40–5425 ± 55 BC and 4750 ± 60–4500 ± 60 BC (Fig. 6). No evidence of agro-pastoral activities is recorded during this first stage, suggesting the absence of human impacts on the coastal area (Fig. 7). Around 4250 ± 60 BC, the inlet channel evolved into a leaky lagoon (Figs 6–7) as the sea level rose from −4.5 to −2 meters below MSL (Fig. 1). As a consequence, the area was characterized by recurrent seawater inputs of increasing intensity (Fig. 6). The emergence of agriculture is dated to 3350 ± 90 BC (Fig. 7), consistent with nearby coastal sites such as Lago di Massaciuccoli (northwest Tuscany 48, 49) and Stagno di Maccarese near Rome 50. The hinge phase of marine influence, recorded by a prolonged wave-dominated delta period, is centered on 2150 ± 45 BC and may fit with the 4.2 ka BP event 44, 51, 52. This climate event corresponds to a stronger seawater imprint in the basin area, probably due to decreasing precipitation 53, 54 and weaker fluvial inputs 55, 56. As a consequence, agro-pastoral activities also declined around the basin, with the lowest scores centered on 2150 ± 45 BC (Fig. 7). The event ended at 1950 ± 45 BC (Fig. 7) and the basin was transformed into a delta plain characterized by higher inputs of freshwater (Fig. 6). The last phase of marine influence before the emergence of the lagoon that hosted Portus Pisanus occurred at 1250 ± 40–850 ± 40 BC, but with a weaker intensity compared to the 4.2 ka BP event (Fig. 7). This deviation may correspond to the 3.2 ka BP event 45, 46, 57, a dry spell that favored marine inputs into the basin as a result of decreasing river flow (Fig. 7). In Italy, this event is characterized by reduced precipitation (Buca della Renella cave, central-western Italy 53) and a drop in lake levels 58. At the end of this event, river flow increased until 200 ± 30 BC, when a large, naturally sheltered lagoon with a good connection to the sea developed and hosted Portus Pisanus. The lagoon as a mirror of Portus Pisanus Although the area has been settled since the late Iron Age, Pisa only became a city in the archaic period (7th–6th centuries BC). According to archaeological evidence, the earliest navigation and port activities recorded in what later became Portus Pisanus are documented by Greek, Etruscan and local pottery fragments scattered on the lagoon floor and dated to the end of the 6th or early 5th century BC (Fig. 6). The first evidence of Roman harbour infrastructure in the Portus Pisanus area dates back to the 2nd century BC 34, 35, when the rate of shipwrecks intensifies in the Mediterranean, suggesting a sharp increase in maritime trade (Fig. 6). The environmental reconstruction indicates that the onset of the protected lagoon (Fig. 8) is chronologically constrained to around 200 ± 30 BC (Figs 6–7). A break in sedimentary deposition (Fig. 3), as well as the development of marine-influenced ecosystems (Fig. 
7) and the emergence of typical lagoonal ostracod fauna 20, dominated by Cyprideis torosa (Fig. 6), attest to a changing environment, characterized by a shift towards a calmer basin (Fig. 3). This evolution suggests that the harbour basin constituted a natural lagoon with a good connection to the sea, consistent with historical sources that mention, in the early 5th century AD, "a large, naturally sheltered embayment" (de reditu suo 1, 527-540; 2, 11-12). A first peak in fire activity at 180 ± 30 BC (Fig. 6) may correspond to the period when the Ligures repeatedly set fire to Pisan crops and countryside 10. The important development of agriculture (Fig. 7) marks the period when Pisa was romanised 17. While the growth in population during the Roman period, as well as the development of trade and shipbuilding, favored by Portus Pisanus, the Via Aurelia and the Via Aemilia Scauri, resulted in an extension of the inhabited area 17, agro-pastoral activities around the basin declined (Fig. 7). The first decline of Pisa is documented during the 5th and 6th centuries AD, especially during the Gothic War (~550 AD). A second peak in fire activity and a strong decline in agro-pastoral activities at 550 ± 25–600 ± 25 AD probably resulted from destruction wrought by the Gothic War. Nonetheless, the seaport continued to be active after the fall of the Roman Empire (476 AD). For instance, Pisa remained an important harbour city for Goths, Longobards and Carolingians (from 493 to 812 AD) 28, 59. No major environmental changes were recorded in the harbour basin during this period, suggesting the presence of a natural deltaic coastal spit protecting the environment (Fig. 7). The first phase of decreasing seawater inputs (1000 ± 20–1250 ± 20 AD), due to increasing coastal progradation, corresponds to the period when Pisa became a powerful Commune and when the fortified medieval harbour of Livorno was built to the west of the Roman port 60, 61. Part of what once was the Roman Portus Pisanus remained a protected lagoon basin but with a lower degree of sea connection (Figs 6–7). Agro-pastoral activities increased after 900 ± 25 AD (Fig. 7), probably favored by the extension of wetlands (Fig. 8). In the late 13th century AD, when the last peak of marine influence was recorded (1300 ± 20 AD; Fig. 7), the defeat of Pisa's navy in the Battle of Meloria against Genoa (1284 AD) marked the onset of the city's declining power. In 1290 AD, Genoese ships attacked the medieval Portus Pisanus, sealing the fate of the independent Pisan state. Livorno was finally sold to Florence in 1421 AD 60, 62, when the former Roman harbour basin started to lose its direct connection with the sea (Fig. 6) due to silting and coastal progradation (Fig. 8). The juxtaposition of strongly arcuate sets of beach ridges is consistent with a pronounced increase in fluvial inputs 63. At 1500 ± 10 AD, the lagoon was cut off from the sea and replaced by the maritime harbour of Livorno, located on a rocky coast beyond the deltaic system (Fig. 8). The decision to build two new docks in this port was taken by Cosimo I de' Medici in 1573 AD. The old basin was then transformed into a coastal lake (from 1400 ± 15 to 1700 ± 10 AD; Figs 6–7) and then into a floodplain (from 1700 ± 10 to 1850 ± 5 AD). During these latter periods, the highest peaks in agriculture were recorded at 1550 ± 10–1600 ± 10 AD (Fig. 7), when Pisa was under the influence of the Duchy of Florence and the Grand Duchy of Tuscany. 
Catastrophic flood events A comparison with "catastrophic flood events" recorded at Pisa 47 shows that the hinge phases 63, 64 correspond to periods of minor decreases in marine influence in the harbour basin, suggesting a weak impact of these hydrological events on the seaport. A weak agro-pastoral signal during the period 50 ± 30 BC–900 ± 25 AD (Fig. 7) is probably not the outcome of recurring floods, which would have affected human activities, but is likely the result of small cultivated fields in a marshy deltaic environment. Conclusions Bio- and geosciences have unraveled the complex history of the protected lagoon that hosted Portus Pisanus, shaped by relative sea-level changes, coastline variations, fluctuations in river discharge and sediment supply, climate and human impacts. The site where the harbour complex was located was both its strength and its weakness because, as in other deltaic contexts 65, 66, sediment supply eventually drove its demise. Portus Pisanus was destined to disappear due to long-term coastal dynamics and environmental change. Methods Relative sea-level reconstruction We built a database of RSL index points (i.e. points that constrain the palaeo mean sea level in space and time 67, 68) for the eastern Ligurian Sea (Fig. 1). We followed the recent protocol proposed by Vacchi et al. 36 for the Mediterranean Sea. Index points were mainly produced from radiocarbon-dated samples of brackish lagoonal sediments collected near Portus Pisanus (i.e. the Arno and Versilia coastal plains, Fig. 1). We further added a suite of radiocarbon-dated samples deriving from fossil Lithophyllum byssoides rims in Northern Corsica 69. All the radiocarbon ages were calibrated into sidereal years with a 2σ range using Calib-Rev 7.1. We employed the IntCal13 and Marine13 datasets for terrestrial and marine samples, respectively. The indicative range (i.e. the relationship of the samples with respect to the former mean sea level) of each RSL index point was established according to Vacchi et al. 36. Maps Different surface and subsurface datasets were used to reconstruct the late Holocene palaeoenvironmental evolution of the Portus Pisanus area 15, 19. A geomorphological survey complemented by remote sensing analysis (satellite images, multitemporal photographs, LiDAR images) was undertaken to identify outcropping beach ridges and to verify data previously proposed by other authors 70, 71. In order to accurately reconstruct changes in coastal morphology and place constraints on the location and size of Leghorn port structures during the Modern Age (between the 16th–17th centuries AD), historical maps 72, 73 were georeferenced and coupled with the geological data. In a GIS environment, the geomorphological features were matched with stratigraphic subsurface reconstructions based on facies correlations and geometric criteria. Subsurface data include archaeological trenches from the Santo Stefano ai Lupi site 35, the highest quality stratigraphic descriptions available for the Arno coastal plain 74, 75 and two reference cores (the 9 m-long PP1 and PP3 cores) obtained using percussion drilling equipment (Atlas Copco, Cobra model, equipped with Eijkelkamp samplers). The latter were analysed for their sedimentological features (e.g., mean grain size, colour, plant debris, wood fragments) and fossil content (i.e., benthic foraminifera and ostracods, palynomorphs). The chronological framework is based on nine radiocarbon dates performed at Beta Analytic Inc. 
(Miami, USA) and the CIRCE Laboratory of Caserta (Naples University). Further control points were provided by archaeological material from the Santo Stefano ai Lupi site and on the surface of the Arno delta plain 35. Biological indicators Sampling was done according to the different depositional layers. On average, this corresponds to one sample every 10 cm, with some variations to respect the core stratigraphy (Fig. 3). All the samples from the core PP3 were prepared for pollen analysis using the standard procedure for clay samples. Pollen grains were counted under ×400 and ×1000 magnification using an Olympus microscope. Pollen frequencies (expressed as percentages) are based on the terrestrial pollen sum, excluding local hygrophytes and spores of non-vascular cryptogams. Aquatic taxa frequencies were calculated by adding the local hygrophytes-hydrophytes to the terrestrial pollen sum. Dinoflagellate cysts (marine plankton) were counted on pollen slides and are reported as concentrations (cysts cm−3). The fire history was elucidated by counting the pollen-slide charcoal particles (50–200 μm) and is expressed as concentrations (fragments cm−3). Concentrations have been plotted on a linear depth-scale. Foraminifera, marine bivalves and Posidonia oceanica debris were extracted from the same samples as the pollen grains, charcoal fragments and dinoflagellate cysts in order to avoid any analytical bias. These marine components (foraminifera, marine bivalves) and P. oceanica debris were picked from the washed sediment fraction. The marine components are displayed as concentrations (remains cm−3; Fig. 6). Ostracods were extracted from the core PP1 (Fig. 8) and correlated to PP3 using depth and stratigraphy. The potential effect of different depositional layers (silty clay versus sand) on the preservation of bioindicators (mainly pollen) was also tested using the pollen sums, n-scores, and the Simpson index. The data do not show a direct correlation between variations in depositional layers (Fig. 3) and the preservation of bioindicators, suggesting that taphonomic processes, which could affect the signal, are not significant. The whole data set is available in the "Raw data file". Statistical analyses All data were analysed using Xl-Stat 2017 and the software package PAST, version 2.17c. A regular interpolation (50-yr) was first run on the dataset. Biological data and charcoal fragments were analysed using cluster analysis (descending type; Figs 4–5A). Cross-correlations (P = 0.05) were subsequently calculated (Fig. 5B). Cross-correlation aligns two time series in time by means of the correlation coefficient. The time series were cross-correlated to ascertain the best temporal match and the potential delay between the two series. The outcome is plotted as a function of the alignment position, focusing on the Lag 0 value. A LOESS smoothing (with bootstrap and smooth 0.05) was applied to the "marine influence" and backshore scrubs to define the 2.5 and 97.5 percentiles (Fig. 6). A LOESS curve shows long-term trends better than a raw percentage/concentration curve because it is a non-parametric regression method that combines multiple local regressions in a k-nearest-neighbour meta-model. The LOESS smoothing is here plotted on a linear timescale (Fig. 6). 
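For readers unfamiliar with these two tools, the base-R sketch below pairs a cross-correlation (ccf) with a LOESS fit; both series are synthetic stand-ins, and the span of 0.05 mirrors the smoothing parameter quoted above.

# Toy cross-correlation and LOESS smoothing on synthetic proxy series.
set.seed(2)
marine <- as.numeric(arima.sim(list(ar = 0.7), n = 160))
scrubs <- 0.6 * marine + rnorm(160, sd = 0.5)   # vegetation tracking the sea

ccf(marine, scrubs, lag.max = 10)               # inspect the Lag 0 coefficient

age <- seq(-6000, 1900, length.out = 160)       # linear BC-AD timescale
fit <- loess(scrubs ~ age, span = 0.05)         # smoothing parameter 0.05
plot(age, scrubs)
lines(age, predict(fit), col = "blue")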
A principal components analysis (PCA) was then run to test the ordination of ecosystems by assessing major changes in the matrix including pollen-derived vegetation patterns, dinoflagellate cysts and marine components (Fig. 7 ). The “agro-pastoral activities” assemblage (Fig. 4 ) was excluded from the matrix (Fig. 7 ). The main variance is loaded by the PCA-Axis1, which is also shown as a LOESS smoothing (with bootstrap and smooth 0.05) plotted on a linear timescale. A boxplot was added to separate the natural variability from the extreme values. Matrix plots are also displayed to mark the hinge phases (Fig. 7 ). Data availability statement All data generated during this study are included in this article (“Raw data file”) or are available from the corresponding author upon request.
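A minimal version of the ordination step just described, assuming an already-interpolated proxy matrix with the agro-pastoral assemblage excluded, could look like this (column names and values are illustrative, not the study's data):

# Sketch of the PCA whose axis 1 serves as the marine-influence proxy.
set.seed(3)
proxies <- data.frame(backshore_scrubs = runif(60), shrubland = runif(60),
                      dinocysts = runif(60), marine_components = runif(60),
                      wet_meadow = runif(60), freshwater_plants = runif(60))

pca <- prcomp(proxies, center = TRUE, scale. = TRUE)
summary(pca)$importance[2, 1]   # proportion of variance carried by axis 1
pca$rotation[, 1]               # loadings: marine proxies vs freshwater taxa
axis1 <- pca$x[, 1]             # scores to be plotted against age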
New insights into the evolution and eventual disappearance of Portus Pisanus, the lost harbour of Pisa, have been revealed. Although it has been described as one of Italy's most influential seaports during the Middle Ages, little is known about the relationship between Portus Pisanus's environment and its history. To understand the role that long-term coastal dynamics, sea-level rise and a changing environment played in the harbour's evolution, researchers reconstructed relative sea levels for the eastern Ligurian Sea over a 10,500-year period. The research team, led by David Kaniewski, also coupled historical maps with geological data to reconstruct the morphology of the coast around the Pisa harbour basin. They analysed biological samples from sediment layers to investigate how seawater, freshwater or agricultural activities may have influenced the environment in the area, before comparing and contrasting their data with written sources and archaeological data. The findings suggest that at approximately 200 BC, a naturally protected lagoon with a good connection to the sea developed south of the city of Pisa that would have benefited navigation and trade and facilitated the establishment of port complexes. The lagoon hosted Portus Pisanus well beyond the 5th century AD, but its degree of sea connection began to decline from around 1000-1250 AD, as coastlines shifted towards the sea. It was cut off from the sea and disappeared around 1500 AD, when the basin developed into a coastal lake and Portus Pisanus was replaced by the maritime harbour of Livorno. "This paper is the result of a very intense collaboration between geo-scientists and archaeologists in the Mediterranean," said Dr. Matteo Vacchi, from the University of Exeter. "Our results underline the importance of such approaches to understanding the role of long-term coastal changes and their impacts on the societies living by the sea, notably in the last two millennia. "Studying the past evolution of the coastal zone is a fundamental tool for predicting future changes in the context of climate change. "Thanks to the huge amount of archaeological remains, the Mediterranean offers a unique opportunity to understand the ability of past societies to respond to such major coastal changes. "I am working with colleagues on similar studies in other Mediterranean coastal areas, including Corsica and Sardinia. "These islands, placed in the middle of the western Mediterranean, have huge potential for this kind of investigation and represent a major focus of my current research activity." The paper, published in the journal Scientific Reports, is titled "Holocene evolution of Portus Pisanus, the lost harbour of Pisa."
10.1038/s41598-018-29890-w
Medicine
Machine learning shows similar performance to traditional risk prediction models
Yan Li et al, Consistency of variety of machine learning and statistical models in predicting clinical risks of individual patients: longitudinal cohort study using cardiovascular disease as exemplar, BMJ (2020). DOI: 10.1136/bmj.m3919 Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj.m3919
https://medicalxpress.com/news/2020-11-machine-similar-traditional.html
Abstract Objective To assess the consistency of machine learning and statistical techniques in predicting individual level and population level risks of cardiovascular disease and the effects of censoring on risk predictions. Design Longitudinal cohort study from 1 January 1998 to 31 December 2018. Setting and participants 3.6 million patients from the Clinical Practice Research Datalink registered at 391 general practices in England with linked hospital admission and mortality records. Main outcome measures Model performance including discrimination, calibration, and consistency of individual risk prediction for the same patients among models with comparable model performance. 19 different prediction techniques were applied, including 12 families of machine learning models (grid searched for best models), three Cox proportional hazards models (local fitted, QRISK3, and Framingham), three parametric survival models, and one logistic model. Results The various models had similar population level performance (C statistics of about 0.87 and similar calibration). However, the predictions for individual risks of cardiovascular disease varied widely between and within different types of machine learning and statistical models, especially in patients with higher risks. A patient with a risk of 9.5-10.5% predicted by QRISK3 had a risk of 2.9-9.2% in a random forest and 2.4-7.2% in a neural network. The differences in predicted risks between QRISK3 and a neural network ranged between –23.2% and 0.1% (95% range). Models that ignored censoring (that is, assumed censored patients to be event free) substantially underestimated risk of cardiovascular disease. Of the 223 815 patients with a cardiovascular disease risk above 7.5% with QRISK3, 57.8% would be reclassified below 7.5% when using another model. Conclusions A variety of models predicted risks for the same patients very differently despite similar model performances. The logistic models and commonly used machine learning models should not be directly applied to the prediction of long term risks without considering censoring. Survival models that consider censoring and that are explainable, such as QRISK3, are preferable. The level of consistency within and between models should be routinely assessed before they are used for clinical decision making. Introduction Risk prediction models are used routinely in healthcare practice to identify patients at high risk and make treatment decisions, so that appropriate healthcare resources can be allocated to those patients who most need care. 1 These risk prediction models are usually built using statistical regression techniques. Examples include the Framingham risk score (developed from a US cohort with prospectively collected data) 2 and QRISK3 (developed from a large UK cohort using retrospective electronic health records). 3 Recently, machine learning models have gained considerable popularity. The English National Health Service has invested £250m ($323m; €275m) to further embed machine learning in healthcare. 4 A recent viewpoint article suggested that machine learning technology is about to start a revolution with the potential to transform the whole healthcare system. 5 Several studies suggested that machine learning models could outperform statistical models in terms of calibration and discrimination. 
6 7 8 9 However, another viewpoint concerns the fact that these approaches cannot provide explainable reasons behind their predictions, potentially leading to inappropriate actions, 10 and a recent review found no evidence that machine learning models had better model performance than logistic models. 11 However, interpretation of this review is difficult, as it included models from mostly small sample sizes and with different outcomes and predictors. Machine learning has established strengths in image recognition that could help in diagnosing diseases in healthcare, 12 13 14 15 but censoring (patients lost to follow-up), which is common in risk prediction, does not exist in image recognition. Many commonly used machine learning models do not take into account censoring by default. 16 The objective of this study was to assess the robustness and consistency of a variety of machine learning and statistical models on individual risk prediction and the effects of censoring on risk predictions. We used cardiovascular disease as an exemplar. We defined robustness of individual risk prediction as the level of consistency in the prediction of risks for individual patients with models that have comparable population level performance metrics. 17 18 19 Methods Data source We derived the study cohort from Clinical Practice Research Datalink (CPRD GOLD), which includes data from about 6.9% of the population in England. 20 It also has been linked to Hospital Episode Statistics, Office for National Statistics mortality records, and Townsend deprivation scores, 3 to provide additional information about hospital admissions (including date and discharge diagnoses) and cause specific mortality. 20 CPRD includes patients’ electronic health records from general practice, capturing detailed information such as demographics (age, sex, and ethnicity), symptoms, tests, diagnoses, prescribed treatments, health related behaviours, and referrals to secondary care. 20 CPRD is a well established representative cohort of the UK population, and thousands of studies have used it, 21 22 including a validation of the QRISK2 model and an analysis of machine learning. 8 23 Study population This study used the same selection criteria for the study population, risk factors, and cardiovascular disease outcomes as were used for QRISK3. 3 18 Follow-up of patients started at the date of the patient’s registration with the practice, their 25th birthday, or 1 January 1998 (whichever was latest) and ended at the date of death, incident cardiovascular disease, date of leaving the practice, or last date of data collection (whichever was earliest). The index date for measurement of cardiovascular disease risk was randomly chosen from the follow-up period to capture time relevant practice variability with a better spread of calendar time and age. 24 This was different from QRISK3, for which a single calendar time date was mostly used. 18 The main inclusion criteria were age between 25 and 84 years, no history of cardiovascular disease, and no prescription for a statin before the index date. The outcome of interest was the 10 year risk of developing cardiovascular disease. The definition of the primary clinical outcome (cardiovascular disease) was the same as for QRISK3 (that is, coronary heart disease, ischaemic stroke, or transient ischaemic attack). 3 We extracted two main cohorts from the study population—one overall cohort including all patients with at least one day of follow-up and one cohort with censored patients removed. 
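The censoring point is easy to demonstrate: in the toy R sketch below (simulated data, not CPRD), the naive event fraction that treats censored patients as event free understates the 10 year risk relative to a Kaplan-Meier estimate.

# Toy demonstration that ignoring censoring underestimates 10-year risk.
library(survival)

set.seed(4)
n <- 10000
event_t <- rexp(n, rate = 0.015)       # latent time to CVD (years)
censor_t <- runif(n, 0, 20)            # staggered loss to follow-up
time <- pmin(event_t, censor_t, 10)
status <- as.integer(event_t <= pmin(censor_t, 10))   # 1 = observed event

naive_risk <- mean(status)                      # censored patients counted as event free
km <- survfit(Surv(time, status) ~ 1)
km_risk <- 1 - summary(km, times = 10)$surv     # accounts for censoring
c(naive = naive_risk, kaplan_meier = km_risk)   # the naive estimate is lower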
The cohort without censoring excluded patients who were lost to follow-up before developing cardiovascular disease by year 10. The analysis of the cohort without censoring aimed to investigate the effects of ignoring censoring on patients’ individual risk predictions. This cohort mimics the methods used by some machine learning studies—that is, only patients or practices with full 10 years’ follow-up were selected. 8 Cardiovascular disease risk factors The cardiovascular disease risk factors at the index date included sex; age; body mass index; smoking history; total cholesterol to high density lipoprotein cholesterol ratio; systolic blood pressure and its standard deviation; history of prescribing of atypical antipsychotic drugs; blood pressure treatment or regular oral glucocorticoids; clinical history of systemic lupus erythematosus, atrial fibrillation, chronic kidney disease (stage 3, 4, or 5), erectile dysfunction, migraine, rheumatoid arthritis, severe mental illness, or type 1 or 2 diabetes mellitus; family history of angina or heart attack in a first degree relative aged under 60 years; ethnicity; and Townsend deprivation score. 3 The same predictors from QRISK3 3 were used for all model fitting except for Framingham, 25 which used fewer and different predictors. Machine learning and Cox models The study considered 19 models, including 12 families of machine learning, three Cox proportional hazards models (local fitted, QRISK3, and Framingham), three parametric survival models (assuming Weibull, Gaussian, and logistic distribution), and a statistical logistic model (fitted in a statistical causal-inference framework). Machine learning models included logistic model (fitted in an automated machine learning framework), 26 random forest, 27 and neural network 28 from R package “Caret” 29 ; logistic model, random forest, neural network, extra-tree model, 30 and gradient boosting classifier 30 from Python package “Sklearn” 31 ; and logistic model, random forest, neural network, and autoML 32 from Python package “h2o.” 33 The package autoML selects a best model from a broader spectrum of candidate models. 32 Details of these models are summarised in eTable 1. The study used the machine learning algorithms from different software packages, with a grid search process on hyper-parameters and cross validation, to acquire a series of high performing machine learning models; this mimics the reality that practitioners may subjectively select different packages for model fitting and end up with a different best model. The study treated the models from the same machine learning algorithm but different software packages as different model families, as the settings (hyper-parameters) of these packages to control the model fitting are often different, which might result in a different best performing model through the grid search process. Statistical analysis We used the Markov chain Monte Carlo method with monotone style to impute missing values 10 times for ethnicity (54.3% missing in overall cohort), body mass index (40.3%), Townsend score (0.1%), systolic blood pressure (26.9%), standard deviation of systolic blood pressure (53.9%), ratio of total cholesterol to high density lipoprotein cholesterol (65.0%), and smoking status (25.2%) 18 (only these variables had missing values). We randomly split the overall cohort (which contained 10 imputations) into an overall derivation cohort (75%) and an overall testing cohort (25%). 
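The grid-search-plus-cross-validation pattern used for the machine learning families can be sketched with caret's train(); the simulated data and the small mtry grid below are placeholders, not the study's configuration.

# Hedged caret sketch: twofold CV over a hyper-parameter grid (random forest).
library(caret)

set.seed(5)
dat <- twoClassSim(2000)                 # synthetic stand-in for the cohort
names(dat)[ncol(dat)] <- "cvd"           # rename the outcome column

ctrl <- trainControl(method = "cv", number = 2,   # twofold CV, as in the paper
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary)

fit <- train(cvd ~ ., data = dat, method = "rf",
             metric = "ROC",                              # discrimination metric
             tuneGrid = expand.grid(mtry = c(2, 4, 8)),   # toy grid
             trControl = ctrl)
fit$bestTune                             # hyper-parameters of the selected model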
We grid searched a total of 1200 machine learning models on hyper-parameters, with twofold cross validation estimating calibration and discrimination, and selected the models with the highest discrimination (C statistic). These were derived from the 12 model families, each fitted on 100 samples of similar sample size to another machine learning study. 8 We then estimated the individual cardiovascular disease risk predictions (averaged for missing value imputations) and model performance of all models by using the overall testing cohort. The sample splitting and model fitting process is shown in eFigure 1. We compared distributions of risk predictions for the same individual among models. We plotted the differences of individual cardiovascular disease risk predictions between models against deciles of cardiovascular disease risk predictions for QRISK3. We produced Bland-Altman plots, a graphical method for comparing two measurement techniques across the full spectrum of values. 34 These plotted the differences of individual risk predictions between two models against the average individual risk prediction. 34 We used R to fit the models from "Caret" and Python to fit models from "Sklearn" and "h2o." 29 30 We used SAS procedures to extract the raw data, create analysis datasets, and generate tables and graphs. 35 Patient and public involvement No patients were involved in setting the research question or the outcome measures, nor were they involved in developing plans or implementation of the study. No patients were asked to advise on interpretation or writing up of results. Results The overall study population included 3.66 million patients from 391 general practices. The cohort without censoring was considerably smaller (0.45 million) than the overall cohort. Table 1 shows the baseline characteristics of the two study populations, which were split into derivation and validation cohorts. The average age was higher in the cohort without censoring (owing to younger patients leaving the practice as shown in eFigure 11). Table 1 Baseline characteristics of two study populations (patients aged 25-84 years without history of cardiovascular disease (CVD) or previous statin use). Values are numbers (percentages) unless stated otherwise Table 2 shows the model performance of the machine learning and statistical models. All models had very similar discrimination (C statistics of about 0.87) and calibration (Brier scores of about 0.03 in eTables 2-4 and eFigures 3-4). Table 2 Performance indicators of machine learning and statistical models in overall cohort Figure 1 shows the variability in individual risk predictions across the models for patients with predicted cardiovascular disease risks of 9.5-10.5% by QRISK3. Patients with a predicted cardiovascular disease risk between 9.5% and 10.5% with QRISK3 had a risk of 2.2-5.8% with the Caret logistic model, 2.9-9.2% with Caret random forest, 2.4-7.2% with Caret neural network, and 3.1-9.3% with Sklearn random forest. The calibration plot ( fig 2 ) shows that models that ignored censoring were miscalibrated (that is, predicted risks were lower than observed risks).
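A rough sketch of the Bland-Altman comparison described above follows; the predictions are synthetic, numpy and matplotlib are assumed, and, as in the study, a 95% range of the differences is plotted rather than parametric limits of agreement.

```python
# Minimal Bland-Altman sketch comparing two models' predicted risks
# (synthetic predictions; numpy and matplotlib assumed).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
risk_a = rng.beta(2, 18, size=10_000)   # stand-in for one model's predictions
risk_b = np.clip(risk_a + rng.normal(0, 0.02, size=risk_a.size), 0, 1)

mean_risk = (risk_a + risk_b) / 2       # x axis: average of the two predictions
diff = risk_a - risk_b                  # y axis: difference between predictions
lo, hi = np.percentile(diff, [2.5, 97.5])  # 95% range, as in the study

plt.scatter(mean_risk, diff, s=2, alpha=0.3)
plt.axhline(lo, ls="--")
plt.axhline(hi, ls="--")
plt.xlabel("Mean predicted 10-year CVD risk")
plt.ylabel("Difference in predicted risk (model A - model B)")
plt.show()
```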
Fig 1 Distribution of individual risk predictions with machine learning and statistical models in overall cohort for patients with predicted cardiovascular disease risks of 9.5-10.5% in QRISK3 Fig 2 Calibration slope of machine learning models and statistical models in overall cohort in survival framework (observed events consider censoring). CVD=cardiovascular disease Figure 3 plots the differences of individual cardiovascular disease risk predictions with the different models stratified by deciles of cardiovascular disease risk predictions of QRISK3. The largest range of inconsistencies in risk predictions was found in patients with the highest predicted risks of cardiovascular disease. Low risk of cardiovascular disease was generally predicted consistently between and within models. We observed similar trends when using a different reference model (eFigure 5.2). Fig 3 95% range of individual risk predictions with machine learning and statistical models stratified by deciles of predicted cardiovascular disease (CVD) risks with QRISK3 in overall cohort Figure 4 shows the Bland-Altman plot of QRISK3 and the neural network. We found a large inconsistency of risk predictions between models. The differences in predicted risks between QRISK3 and the neural network ranged between −23.2% and 0.1% (95% range). The regression line shows a similar finding to figure 3 , with the largest differences in higher risk groups. More comparisons between specific models can be found in eFigure 6 and eFigure 7. We found similar inconsistency of risk prediction among models when using a logistic model as reference (eFigure 2.1). The removal of censored patients changed the magnitude but not the variability of individual cardiovascular disease risk predictions (eFigure 2.2). Fig 4 Bland-Altman analysis comparing QRISK3 with neural network model We found substantial reclassification across a treatment threshold when using a different type of prediction model. Of 691 664 patients with a cardiovascular disease risk of 7.5% or lower, as predicted by QRISK3, 13.6% would be reclassified above 7.5% when using another model ( table 3 ). Of the 223 815 patients with a cardiovascular disease risk above 7.5%, 57.8% would be reclassified below 7.5% when using another model. We also found high levels of reclassification with a different reference model (as shown in table 3 ) or a different threshold (eTable 7). Table 3 Reclassification of individual risk predictions with machine learning and statistical models We did several sensitivity analyses with consistent findings of high levels of inconsistencies in individual risk predictions between and within models. The same machine learning algorithm with the selection of different settings (hyper-parameters) from different software packages yielded different individual cardiovascular disease risk predictions (eTable 8 and eFigure 8). The evaluation of the effects of generalisability by developing and testing models in different regions of England showed similarly high levels of inconsistencies in cardiovascular disease risk predictions (eTable 10 and eFigure 9).
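The reclassification analysis described above amounts to a cross tabulation of two models' predictions against the 7.5% treatment threshold; a minimal sketch (pandas assumed, placeholder predictions) is:

```python
# Sketch of reclassification counts across the 7.5% treatment threshold
# (pandas assumed; the predictions here are placeholders).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
qrisk3 = rng.beta(2, 18, size=100_000)                 # reference predictions
other = np.clip(qrisk3 + rng.normal(0, 0.03, size=qrisk3.size), 0, 1)

table = pd.crosstab(qrisk3 >= 0.075, other >= 0.075,
                    rownames=["QRISK3 >= 7.5%"],
                    colnames=["Other model >= 7.5%"])
print(table)

# Proportion reclassified below the threshold among those above it with QRISK3
above = qrisk3 >= 0.075
print((other[above] < 0.075).mean())
```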
Changing the number of predictors did not result in lower levels of inconsistencies in cardiovascular disease risk predictions with more predictors included in the models (eTable 11 and eFigure 10). Discussion We found that the predictions of cardiovascular disease risks for individual patients varied widely between and within different types of machine learning and statistical models, especially in patients with higher risks (when using similar predictors). Logistic models and the machine learning models that ignored censoring substantially underestimated risk of cardiovascular disease. Comparison with other studies Despite claims that machine learning models can revolutionise risk prediction and potentially replace traditional statistical regression models in other areas, 5 36 37 this study of prediction of cardiovascular disease risk found that they have similar model performance to traditional statistical methods and share similar uncertainty in individual risk predictions. Strengths of machine learning models may include their ability to automatically model non-linear associations and interactions between different risk factors. 38 39 They may also find new data patterns. 30 They have the acknowledged strength of automating model building with a better performance in specific classification tasks (for example, image recognition). 30 However, a critical question is whether risk prediction models provide accurate and consistent risk predictions for individual patients. Previous research has found that a traditional risk prediction model such as QRISK3 has considerable uncertainty on individual risk prediction, although it has very good model performance at the population level. 18 19 This uncertainty is related to unmeasured heterogeneity between clinical sites and modelling choices such as the inclusion of secular trends. 18 19 Our study found that machine learning models share this uncertainty, as models with comparable population level performance yielded very different individual risk predictions. Consequently, different treatment decisions could be made by arbitrarily selecting another modelling technique. Censoring of patients is an unavoidable problem in prediction models for long term risks, as patients frequently move away or die. However, many popular machine learning models ignore censoring, as the default framework is the analysis of a binary outcome rather than a time to event survival outcome. A UK Biobank study of risk prediction for cardiovascular disease did not report how censoring was dealt with, 7 like several other studies. 39 40 41 Another machine learning study incorrectly excluded censored patients. 8 Random survival forest is a machine learning model that takes account of censoring. 42 Innovative techniques are being developed that incorporate statistical censoring approaches into the machine learning framework. 16 43 However, to our knowledge no current software packages can handle large datasets for these methods. This study shows that directly applying popular machine learning models to data (especially data with substantial censoring) without considering censoring will substantially bias risk predictions. The miscalibration was large compared with observed life table predictions. This is consistent with a recent study that reported loss of information due to lack of consideration of censoring with the random forest method. 6 Models with similar C statistics gave varying estimates of individual risks for the same patients.
A fundamental challenge with the C statistic is that it applies to the population level but not to individual patients. 18 44 The C statistic measures the ability of a model to discriminate between cases and non-cases. It is the proportion of case and non-case pairs that are correctly ranked by the model. This means that for a high C statistic, patients with observed events should have a higher predicted risk than patients without observed events. 38 The C statistic concerns the rank of predicted probabilities rather than the probabilities themselves. For example, a model may predict all events with a range of probability between 50.2% and 50.3% and non-events with a probability of 50%, which would result in perfect discrimination, but the predicted probability is not clinically useful. When a large number of patients have lower risks (which is often the case for cardiovascular disease risk prediction), the C statistic becomes less informative in indicating discrimination of models, especially in patients at high risk. For example, two patients with very low risk (say 1% and 1.5%) may have similar effects on the C statistic to two patients with high risk (say 10% and 20%), given that their differences in rank are the same (but the latter two are of greater clinical interest). Therefore, C statistics do not tell us whether a model discriminates specific patients at high risk correctly or consistently compared with other models. C statistics have also been shown to be insensitive to changes in the model. 44 The evaluation of consistency in individual risk predictions between models may thus be important in assessing their clinical usefulness in identifying patients at high risk. This study considered a total of 22 predictors that had been selected by the developers of QRISK on the basis of their likely causal effect on cardiovascular disease. 3 Other machine learning studies have used considerably more predictors. As an example, a study using the UK Biobank included 473 predictors in the machine learning models. 7 A potentially unresolved question in risk prediction is what type of variables and how many of them should be included in models, as consensus and guidelines for choosing variables for risk prediction models are lacking. 45 More information incorporated into a model may increase the model performance of risk prediction at the population level. For example, the C statistic is related to both the effects of predictors and the variation of predictors among patients with and without events. 46 Including more predictors in a model may increase the C statistic merely because of greater variation of predictors. On the other hand, inclusion of non-causal predictors may lower the accuracy of the risk prediction by adding noise, increasing the risk of over-fitting, and leading to more data quality challenges. 47 Also, a very large number of predictors may limit the clinical utility of these machine learning models, as more predictors need to be measured before a prediction can be made. Further research is needed to establish whether the focus of risk prediction should be on consistently measured causal risk factors or on variables that may be recorded inconsistently between clinicians or electronic health records systems. Guidelines for the development and validation of risk prediction models (called TRIPOD) focus on the assessment of population level performance but do not consider consistencies in individual risk predictions by prediction models with comparable population level performance. 48
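The 50.2%-50.3% example above can be verified directly; the following toy check (scikit-learn's roc_auc_score used as the C statistic for a binary outcome) shows a model with perfect discrimination whose predicted probabilities are clinically useless.

```python
# Toy check of the example above: perfect discrimination, useless probabilities.
from sklearn.metrics import roc_auc_score

y = [1, 1, 1, 0, 0, 0]                        # observed events and non-events
p = [0.502, 0.5025, 0.503, 0.50, 0.50, 0.50]  # predicted probabilities

print(roc_auc_score(y, p))  # 1.0: every event ranks above every non-event
```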
Arguably, the clinical utility of risk prediction models should be based, as has been done with blood pressure devices for instance, on the consistent risk prediction (reliability) for a particular patient rather than on broad population level performance. 49 If models with comparable performance provide different predictions for a patient with certain risk factors, an explanation for these discrepant predictions is needed. 50 Explainable artificial intelligence has been described as methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts. 51 This contrasts with the concept of the "black box" in machine learning, whereby predictions cannot be explained. Arguably, a survival model that is explainable (such as QRISK3, which is based on established causal predictors) may be preferable to black box models that are high dimensional (include many predictors) but that provide inconsistent results for individual patients. Better standards are needed on how to develop and test machine learning algorithms. 14 Strengths and limitations of study The major strength of this study was that a large number of different machine learning models with varying hyper-parameters using different packages from different programming languages were fitted to a large population based primary care cohort. However, the study has several limitations. We considered only predictors from QRISK3 in order to compare models on the basis of equal information, but sensitivity analyses showed similar findings of inconsistencies in cardiovascular disease risk prediction independent of the number of predictors. Furthermore, more hyper-parameters in the machine learning models could have been considered in the grid search process. However, the fitted models already achieved reasonably high model performance, which indicates that the main hyper-parameters had been covered in the grid search process. Several machine learning algorithms were not included in this study, such as support vector machines or survival random forests, as the current software packages of these models cannot cope with large datasets. 52 53 54 55 The Bland-Altman graph used the 95% range of differences rather than a 95% confidence interval, as the differences of predicted risk (including log transformed) did not follow a normal distribution (which is a required assumption to calculate the Bland-Altman 95% confidence interval). Another limitation is that this study concerned cardiovascular disease risk prediction in primary care, and findings may not be generalisable to other outcomes or settings. However, the robustness of individual risk predictions within and between models with comparable population level performance is rarely, if ever, evaluated. Our findings indicate the importance of assessing this. Conclusions A variety of models predicted cardiovascular disease risks for the same patients very differently despite similar model performances. Using the logistic model and commonly used machine learning models without considering censoring in survival analysis results in substantially biased risk prediction and has limited usefulness in the prediction of long term risks. The level of consistency within and between models should be assessed before they are used for clinical decision making and should be considered in TRIPOD guidelines.
What is already known on this topic Risk prediction models are widely used in clinical practice (such as QRISK or Framingham for cardiovascular disease) Multiple techniques can be used for these predictions, and recent studies claim that machine learning models can outperform models such as QRISK What this study adds Nineteen different prediction techniques (including 12 machine learning models and seven statistical models) yielded similar population level performance However, cardiovascular disease risk predictions for the same patients varied substantially between models Models that ignored censoring (including commonly used machine learning models) yielded biased risk predictions Footnotes Contributors: YL designed the study, did all statistical analysis, produced all tables and figures, and wrote the main manuscript text and supplementary materials. MS supervised the study, provided quality control on statistical analysis, reviewed all statistical results, and reviewed and edited the main manuscript text. DMA reviewed and edited the main manuscript text and supplementary materials. TPvS designed and supervised the study, provided quality control of all aspects of the paper, and wrote the main manuscript text. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. TPvS is the guarantor. Funding: This study was funded by the China Scholarship Council (to cover costs of doctoral studentship of YL at the University of Manchester). The funder did not participate in the research or review any details of this study; the other authors are independent of the funder. Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: support to YL from the China Scholarship Council; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. Ethical approval: This study is based on data from Clinical Practice Research Datalink (CPRD) obtained under licence from the UK Medicines and Healthcare products Regulatory Agency. The protocol for this work was approved by the independent scientific advisory committee for CPRD research (No 19_054R). The data are provided by patients and collected by the NHS as part of their care and support. The Office for National Statistics (ONS) is the provider of the ONS data contained within the CPRD data. Hospital Episode Statistics data and the ONS data (copyright 2014) are re-used with the permission of the Health and Social Care Information Centre. Data sharing: This study is based on CPRD data and is subject to a full licence agreement, which does not permit data sharing outside of the research team. Code lists are available from the corresponding author. The lead author affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. Dissemination to participants and related patient and public communities: Dissemination to research participants is not possible as data were anonymised. Provenance and peer review: Not commissioned; externally peer reviewed. 
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: .
Some claim that machine learning technology has the potential to transform healthcare systems, but a study published by The BMJ finds that machine learning models have similar performance to traditional statistical models and share similar uncertainty in making risk predictions for individual patients. The NHS has invested £250m ($323m; €275m) to embed machine learning in healthcare, but researchers say the level of consistency (stability) within and between models should be assessed before they are used to make treatment decisions for individual patients. Risk prediction models are widely used in clinical practice. They use statistical techniques alongside information about people, such as their age and ethnicity, to identify those at high risk of developing an illness and make decisions about their care. Previous research has found that a traditional risk prediction model such as QRISK3 has very good model performance at the population level, but has considerable uncertainty on individual risk prediction. Some studies claim that machine learning models can outperform traditional models, while others argue that they cannot provide explainable reasons behind their predictions, potentially leading to inappropriate actions. What's more, machine learning models often ignore censoring—when patients are lost (either by error or by being unreachable) during a study and the model assumes they are disease free, leading to biased predictions. To explore these issues further, researchers in the UK, China and the Netherlands set out to assess the consistency of machine learning and statistical techniques in predicting individual level and population level risks of cardiovascular disease and the effects of censoring on risk predictions. They assessed 19 different prediction techniques (12 machine learning models and seven statistical models) using data from 3.6 million patients registered at 391 general practices in England between 1998 and 2018. Data from general practices, hospital admission and mortality records were used to test each model's performance against actual events. All 19 models yielded similar population level performance. However, cardiovascular disease risk predictions for the same patients varied substantially between models, especially in patients with higher risks. For example, a patient with a cardiovascular disease risk of 9.5-10.5% predicted by the traditional QRISK3 model had a risk of 2.9-9.2% and 2.4-7.2% predicted by other models. Models that ignored censoring (including commonly used machine learning models) substantially underestimated risk of cardiovascular disease. Of the 223,815 patients with a cardiovascular disease risk above 7.5% with QRISK3 (a model that does consider censoring), 57.8% would be reclassified below 7.5% when using another type of model, explain the researchers. The researchers acknowledge some limitations in comparing the different models, such as the fact that more predictors could have been considered. However, they point out that their results remained similar after more detailed analyses, suggesting that they withstand scrutiny. "A variety of models predicted risks for the same patients very differently despite similar model performances," they write. "Consequently, different treatment decisions could be made by arbitrarily selecting another modelling technique." 
As such, they suggest these models "should not be directly applied to the prediction of long term risks without considering censoring" and that the level of consistency within and between models "should be routinely assessed before they are used to inform clinical decision making."
10.1136/bmj.m3919
Medicine
Scientists use CRISPR for possible 'bubble boy' therapy
Mara Pavel-Dinu et al. Gene correction for SCID-X1 in long-term hematopoietic stem cells, Nature Communications (2019). DOI: 10.1038/s41467-019-09614-y Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-09614-y
https://medicalxpress.com/news/2019-04-scientists-crispr-boy-therapy.html
Abstract Gene correction in human long-term hematopoietic stem cells (LT-HSCs) could be an effective therapy for monogenic diseases of the blood and immune system. Here we describe an approach for X-linked Severe Combined Immunodeficiency (SCID-X1) using targeted integration of a cDNA into the endogenous start codon to functionally correct disease-causing mutations throughout the gene. Using a CRISPR-Cas9/AAV6-based strategy, we achieve up to 20% targeted integration frequencies in LT-HSCs. As measures of the lack of toxicity, we observe no evidence of abnormal hematopoiesis following transplantation and no evidence of off-target mutations using a high-fidelity Cas9 as a ribonucleoprotein complex. We achieve high levels of targeting frequencies (median 45%) in CD34 + HSPCs from six SCID-X1 patients and demonstrate rescue of the lymphopoietic defect in a patient-derived HSPC population in vitro and in vivo. In sum, our study provides specificity, toxicity and efficacy data supportive of clinical development of genome editing to treat SCID-X1. Introduction X-linked Severe Combined Immunodeficiency (SCID-X1) is a primary immune deficiency disorder (PID) caused by mutations in the IL2RG gene on the X chromosome. The gene encodes a shared subunit of the receptors for interleukin-2 (IL-2), IL-4, IL-7, IL-9, IL-15, and IL-21. Without early treatment, affected male infants die in the first year of life from infections. Although allogeneic hematopoietic cell transplant (allo-HCT) is considered the standard of care for SCID-X1, it holds significant risks due to potential incomplete immune reconstitution, graft versus host disease (GvHD) and a decreased survival rate in the absence of a human leukocyte antigen (HLA)-matched sibling donor 1 . Because of the selective advantage of lymphoid progenitors expressing normal IL2RG , however, only a small number of genetically corrected hematopoietic stem and progenitor cells (HSPCs) are needed to reconstitute T-cell immunity 2 , 3 . The importance of achieving gene correction in long-term hematopoietic stem cells (LT-HSCs) to achieve sustained clinical benefit is demonstrated by the waning of a functional immune system in patients who do not derive their immune system from LT-HSCs with a wild-type IL2RG gene. Gene therapy is an alternative therapy to allo-HCT. Using integrating viral vectors, such as gamma-retroviral and lentiviral vectors, extra copies of a functional IL2RG gene are semi-randomly integrated into the genome of SCID-X1 patient-derived CD34 + HSPCs. This strategy has resulted in both successes and setbacks. While most patients treated with first-generation gene therapy survived and benefited from the therapy, a substantial fraction (>25%) of patients developed leukemia from insertional oncogenesis 4 , 5 , 6 . It is concerning that patients developed leukemia from insertional oncogenesis both early and late, up to 15 years after transplantation of retroviral-based engineered cells 7 . Constitutive activation of the transgene 8 , the choice of vectors 9 and specific details of the gene therapy procedure have all been proposed as factors contributing to the risk of leukemia and myelodysplastic syndrome that occurred in several trials for primary immunodeficiency disorders (PIDs) including SCID-X1 10 , 11 , chronic granulomatous disease (CGD) 12 , 13 and Wiskott–Aldrich Syndrome (WAS) 14 .
With second-generation self-inactivating (SIN) vectors, multiple SCID-X1 patients have successfully reconstituted T-cell immunity in the absence of early leukemic events 15 , 16 , 17 with a follow-up of up to 7 years. However, the follow-up of these therapies remains too short to assess the long-term genotoxicity risk of the newer generation vectors, as transformation of T cells can take >10 years to manifest 7 . An alternative to the semi-random delivery of the complementary DNA (cDNA) is to use a targeted genome editing (GE) approach. GE is a means to alter the DNA sequence of a cell, including somatic stem cells, with nucleotide precision. Using homologous recombination-mediated GE (HR-GE), the approach can target a cDNA transgene into its endogenous locus, thereby preserving normal copy number and upstream and downstream non-coding elements that regulate expression 18 , 19 , 20 . The highest frequencies of GE are achieved using an engineered nuclease to create a site-specific double-strand break (DSB) in the cell's genomic DNA 21 , 22 . When the DSB is repaired by non-homologous end joining (NHEJ), small insertions and deletions (INDELs) can be created at a specific genomic target site, an outcome that is not generally useful for correcting mutant genes 23 , 24 . In contrast, when the DSB is repaired by either HR (using a classic gene-targeting donor vector) or by single-stranded template repair (using a single-stranded oligonucleotide (ssODN)), precise sequence changes can be introduced, thereby providing a method to precisely revert disease-causing DNA variants 25 . Among the multiple GE platforms that use artificial nucleases to generate DSBs 18 , 26 , 27 , 28 , 29 , the CRISPR-Cas9 system has accelerated the field of GE because of its ease of use and high activity in a wide variety of cells. When CRISPR-Cas9 is delivered as a ribonucleoprotein (RNP) complex into primary human cells, including human CD34 + HSPCs, using fully synthesized single-guide RNA molecules (sgRNAs) with end modifications to protect the guide from exonuclease degradation, high frequencies of INDELs are achieved 30 . Moreover, when the delivery of an RNP complex is combined with delivery of the gene-targeting donor molecule in a recombinant AAV6 (rAAV6) viral vector, high frequencies of homologous recombination-mediated editing in human HSPCs are obtained 25 . rAAV6 donor vectors have also been used successfully with other nuclease systems, including zinc-finger nucleases (ZFNs), and in other cell types, such as primary human T cells 19 , 31 , 32 . Therefore, this HR-GE approach could transform the semi-random nature of viral-based gene therapy to a more controlled and precise strategy. By using AAV6 as a classic gene-targeting donor, in contrast to ssODNs, a full cDNA can be introduced at the endogenous target. The key challenges in translating GE into medical therapies are attaining clinically relevant targeted integration frequencies into LT-HSCs, attaining functional levels of protein expression, and establishing lack of toxicity derived from the GE approach. Here, we describe a clinically relevant, selection-free "universal" CRISPR-Cas9-rAAV6 GE methodology that could potentially correct >97% of known IL2RG pathogenic mutations. We call this approach "functional gene correction" because it does not directly correct each mutation but instead uses targeted integration of a cDNA to functionally correct downstream mutations.
Approximately 2–3% of patients with deletions of the gene could not be functionally corrected using this strategy. We demonstrate that a functional, codon-optimized IL2RG cDNA can be precisely and efficiently integrated at the endogenous translational start site in CD34 + HSPCs of healthy male donors (HD, n = 13) or SCID-X1 patients ( n = 6) at comparable frequencies (median HR = 45%) in both peripheral blood (PB)-derived and umbilical cord blood (CB)-derived CD34 + HSPCs. We demonstrate the functionality of the full-length codon-optimized IL2RG cDNA by showing that T cells with the cDNA knock-in (KI) retain normal proliferation and signaling response to cytokines. Using transplantation into immunodeficient (NSG) mice, we show that the process is both effective (with functional correction of 10–20% of LT-HSCs) and safe (no evidence of abnormal hematopoiesis). The in vivo functional results are based on transplantation of ~21 million IL2RG targeted healthy donor CD34 + HSPCs and ~7 million IL2RG targeted SCID-X1 HSPCs. We demonstrate high levels of CD34 + LT-HSC targeted cDNA integration (10–20%) by showing multi-lineage hematopoiesis derived from these cells using serial transplantation in immunodeficient mice. These results match and exceed the predicted therapeutic threshold determined through a mouse model 19 . Finally, we show no evidence of significant genotoxicity as demonstrated by next-generation sequencing (NGS) and karyotype analysis. Together, this study establishes a pre-clinical proof-of-concept for a safe, precise, and highly efficient GE strategy to potentially cure SCID-X1. Results Gene correction strategy for IL2RG locus in CD34 + HSPCs SCID-X1 is caused by pathogenic mutations spanning the entire IL2RG gene. Therefore, we developed a gene-targeting strategy by integrating a complete cDNA at the endogenous IL2RG translational start site (Fig. 1a , central panel) that would correct the vast majority (~97%) of known SCID-X1 pathogenic mutations and ensure regulated endogenous expression in CD34 + HSPC-derived progeny. By achieving efficient integration frequencies in the genome of CD34 + LT-HSCs, our approach could ensure life-long therapeutic benefits for the patient (Fig. 1a , right schematic). Fig. 1 In vitro, medium scale genome targeting at IL2RG locus. a Diagram of genomic integration and correction outcomes. b Top: schematic of IL2RG corrective donors containing (+tNGFR) or not (–tNGFR) a selectable marker. Bottom: IL2RG cDNA targeting frequencies of frozen mobilized peripheral blood CD34 + HSPCs (white circles) or freshly purified cord blood male-derived CD34 + HSPCs (red circles) derived from medium scale (1.0 × 10 6 ) genome targeting and measured at day 4. Absolute targeting frequencies measured by ddPCR. Median: 23.2% (+tNGFR, n = 11 biological replicates), median 45% (–tNGFR, n = 13 biological replicates). c Single cell-based methylcellulose assay from mock targeted (nucleofected only) or IL2RG cDNA targeted (–tNGFR donor) CD34 + HSPCs. Absolute numbers of clones are shown ( n = 3 biological replicates). d Fraction of the total for each type of colony scored. e Gene correction outcome of SCID-X1 patient 2 derived CD34 + HSPCs. Shown is the multi-lineage differentiation using the OP9-idll1 in vitro system ( n = 23 wells). No growth was derived from uncorrected CD34 + cells.
LT-HSCs long-term hematopoietic stem cells, ST-HSC short term hematopoietic stem cells, MPP multi-potent progenitor, CMP common myeloid progenitor, LMPP lymphoid multi-potent progenitor, CLP common lymphoid progenitor, HSPCs hematopoietic stem and progenitor cells, ddPCR droplet digital PCR. Mean ± s.e.m.; ns not significant (Welch's t-test). Source data are available in the Source Data file We screened seven different sgRNAs (single guide RNAs) for activity in exon 1 of the IL2RG gene (Supplementary Fig. 1a ) and selected sg-1, previously described 30 , as the best candidate because of the location of the DSB it creates (one nucleotide downstream from the translational start site), its on-target INDEL frequencies (92.9% ± 0.6, mean ± s.e.m.) (Supplementary Fig. 1b ) and its high cellular viability (>80%) (Supplementary Fig. 1c ). We found that a truncated sgRNA 33 of 19 nucleotides (19 nt) gave >90% INDEL frequencies (equivalent to the full 20 nt sgRNA) (Supplementary Figs. 1–3 ). NGS (Supplementary Fig. 1e ) further corroborated the INDELs obtained by TIDE analysis 34 . We used the 19 nt gRNA in a medium-scale process (1 million cells per electroporation) throughout the remaining experiments. We designed a codon-optimized IL2RG cDNA functional correction donor with homology arms centered on the sg-1 guide sequence, cloned into an AAV6 vector both with and without a selectable marker (truncated nerve growth factor receptor (tNGFR) driven by the Ubiquitin C promoter) (Supplementary Figs. 4a, b ; Fig. 1b , top panel). The efficiency of genome targeting integration was determined in both frozen mobilized PB (mPB) and freshly isolated CB-derived CD34 + HSPCs from healthy male donors (Fig. 1b ). We observed a median gene-targeting frequency of 23.2% (range 9.9–45.0%) for the +tNGFR donor and 45.0% (range 24.7–60.0%) for the –tNGFR IL2RG donor (Fig. 1b , bottom panel), as measured by Droplet Digital PCR (ddPCR) (Supplementary Figs. 5a–f ). As the selection-free cassette gave high frequencies of targeted integration, we concluded that a selection marker was not necessary: omitting it creates a simpler cell manufacturing process for cells that in any case have a selective advantage. To determine the myeloerythroid differentiation potential of IL2RG cDNA genome targeted CD34 + HSPCs, we performed methylcellulose assays. After CRISPR-Cas9/rAAV6-based IL2RG cDNA targeting, HSPCs were single-cell plated in 96-well methylcellulose plates and scored for colony formation at day 14. Although the number of colonies was reduced by ~35% in IL2RG cDNA targeted samples compared with mock-targeted HSPCs (where neither the sgRNA nor the donor was introduced) (Fig. 1c ), the distribution of types of colony-forming units (CFUs) was the same for IL2RG cDNA targeted HSPCs and mock-targeted HSPCs, including CFU-GEMMs (granulocytes, erythrocytes, monocytes, megakaryocytes), without any lineage skewing (Fig. 1d ). Genotyping of colonies confirmed that IL2RG cDNA targeted colonies ( n = 344) showed an overall targeting frequency of 45.7% ± 2.4 (mean ± s.e.m.) (Supplementary Fig. 6 ). Bi-allelic modification is not relevant as the cells were derived from male donors and have a single X chromosome. In sum, the in vitro differentiation assay of targeted IL2RG cDNA CD34 + HSPCs demonstrated no perturbation of the myeloerythroid differentiation potential.
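The absolute targeting frequencies above were measured by ddPCR. As general background rather than the authors' protocol (their assay design is in their Supplementary Figs. 5a–f), ddPCR converts the fraction of positive droplets into copy counts with a Poisson correction; a minimal sketch of that arithmetic, with made-up droplet counts, is:

```python
# General ddPCR-style quantification (background sketch, not the paper's assay).
# Droplet counts below are invented for illustration.
import math

def copies_per_droplet(n_positive: int, n_total: int) -> float:
    """Poisson-corrected mean copies per droplet: lambda = -ln(1 - p)."""
    return -math.log(1 - n_positive / n_total)

hr_lambda = copies_per_droplet(4_500, 15_000)   # droplets positive for the KI allele
ref_lambda = copies_per_droplet(9_000, 15_000)  # droplets positive for a reference locus

# Targeted integration frequency as KI copies relative to the reference locus
print(f"Targeted integration frequency: {100 * hr_lambda / ref_lambda:.1f}%")
```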
To assess the hematopoietic differentiation potential of the codon-optimized IL2RG cDNA donor, we used the OP9-idll1 stromal cell line in vitro system. In this system, a lentiviral vector confers the doxycycline (DOX)-inducible expression of the Notch ligand Dll1 35 . In the presence of a cocktail of cytokines permissive for myeloerythroid and lymphoid differentiation, multi-potent human CD34 + HSPCs will generate only myeloerythroid and B-cell lineages before induction of dll1 expression, but the culture becomes permissive for T and NK-cell generation in the same well after addition of DOX to induce dll1 expression. CD34 + HSPCs derived from frozen mPB of a SCID-X1 patient (delA;M145fs, patient 2) were gene targeted (functionally corrected) using the CRISPR-Cas9-AAV6 platform. The total number of cells per well derived from the IL2RG cDNA targeted cells was markedly increased compared with that of mutant cells, indicating a growth dependence on functional IL-7 and IL-15 receptors, for which IL2RG is an essential subunit 36 . Following DOX-mediated dll1 expression, no further growth of mutant CD34 + HSPCs was detected on OP9-idll1 stromal cells. In contrast, the IL2RG cDNA targeted cells continued to expand in the myeloerythroid compartment in addition to the development of B (CD19 + ), T (CD3 + CD56 − ), NK (CD3 − CD56 + ), and TNK (CD3 + CD56 + ) progeny (Fig. 1e , Supplementary Fig. 7 ). The OP9-idll1 in vitro system is known to generate more CD3 + CD4 + than CD3 + CD8 + cells expressing cell surface markers 37 . A CD3 + CD56 + (TNK cell) population was generated from our genome corrected SCID-X1 patient-derived CD34 + HSPCs, further demonstrating the range of lymphoid reconstitution that can arise following ex vivo gene editing correction of the IL2RG gene from patient-derived cells 38 . These experiments demonstrate the functional correction of the IL2RG gene from patient-derived CD34 + HSPCs necessary for lymphoid development. Hematopoietic reconstitution from IL2RG cDNA targeted HSPCs To further assess the toxicity and efficacy of our HR-GE system, we evaluated the in vivo engraftment and multi-lineage hematopoietic reconstitution of IL2RG cDNA targeted HSPCs in immunodeficient NSG mice. Following ~4 days of ex vivo manufacturing, IL2RG cDNA targeted and different control cells were transplanted either by intra-hepatic (IH) injection into sub-lethally irradiated 3- to 4-day-old NSG pups or by intra-femoral (IF) injection into 6- to 8-week-old NSG mice. The IH system has previously been shown to be superior for assessment of human lymphopoiesis 39 . An experimental schema is shown (Fig. 2a , primary engraftment panel). For primary engraftment studies, a total of 19.3 million cells, derived from the CB CD34 + HSPCs of three different healthy male donors, were transplanted into a total of 47 mice (Fig. 2b ). The kinetics of primary human engraftment was monitored at weeks 8 and 12 in bone marrow (BM) aspirates and PB samples. At week 16, end point analysis was carried out on total BM, spleen (SP), and PB samples. High human engraftment levels (hCD45 + HLA-ABC + double-positive staining, blue/black circles) were obtained, with no statistical difference between the IL2RG cDNA targeted cells and control cells (WT, mock, or RNP) (Fig. 2b , Supplementary Figs. 8, 9 ). Transplanted IL2RG targeted HSPCs showed a median human engraftment level of 45% in BM ( n = 24), 28% in SP ( n = 24), and 6% in PB ( n = 24) (Fig. 2b ).
The targeted integration frequency of the IL2RG cDNA was 25.5% in BM ( n = 24), 44.8% in SP ( n = 24), and 56% in PB ( n = 6) at week 16 post engraftment (Fig. 2c ). Multi-lineage reconstitution was achieved from both mock and IL2RG cDNA targeted cells in both the BM and SP samples of transplanted mice (Fig. 3a ). Fig. 2 Normal hematopoietic reconstitution from IL2RG cDNA targeted CD34 + HSPCs. a Timeline of primary (1°) and secondary (2°) human transplants into sub-lethally irradiated NSG mice. CD34 + HSPCs are derived from umbilical cord blood of healthy male donors. Adult mice transplanted intra-femorally (IF) with either WT CD34 + HSPCs (white circles), mock targeted (yellow circles), RNP only (black circles), or un-selected IL2RG cDNA targeted (blue-black circles) HSPCs. 3- to 4-day-old NSG pups transplanted intra-hepatically (IH) with either mock or IL2RG targeted HSPCs. b Combined IF and IH human cell engraftment (hCD45 + HLA A-B-C + ) 16 weeks after 1° human transplant into indicated organs. c % IL2RG cDNA targeted HSPCs within the human graft in indicated organs, quantified by ddPCR. BM ( n = 24 mice), SP ( n = 24 mice), PB ( n = 6 mice) (*** p = 0.0008, one-way ANOVA). d Percent human engraftment in indicated organs as in ( b ) 16 weeks post 2° human CD34 + HSPC transplant into adult NSG mice. * p -value SP-IH = 0.025, * p -value BM-IF = 0.043 (Welch's t -test). e % IL2RG targeted HSPCs quantified by ddPCR 32 weeks after engraftment. Median shown. BM bone marrow, SP spleen, PB peripheral blood. Source data are available in the Source Data file Fig. 3 Normal multi-lineage development from IL2RG cDNA targeted cells in the LT-HSC population. a Percent cellular composition of the lymphoid, myeloid and erythroid lineages derived from IH 1° human engraftment, shown in indicated organs and targeting conditions. CD3 + BM: ** p = 0.0017, CD3 + SP: ** p = 0.007 (Welch's t -test). b Same as ( a ) but for IF transplant analysis. CD3 + SP: * p = 0.023, CD56 + BM: * p = 0.015 (Kruskal–Wallis test). c Percent cellular composition of the lymphoid, myeloid and erythroid lineages derived from secondary transplants. Data shown are combined IH and IF primary transplants. CD3 + BM: * p = 0.015, CD56 + : * p = 0.025, CD19 + SP: *** p = 0.0002, CD14 + SP: * p = 0.0112, CD11c + SP: *** p = 0.0004. LT long term. Error bars: mean ± s.e.m. Source data are available in the Source Data file In human cells not targeted with the cDNA correction cassette, the frequency of INDELs was >90% in the IH engrafted IL2RG targeted cells at weeks 8, 12, and 16, with an INDEL spectrum of +1, −11, and −13 (all inactivating mutations) (Supplementary Fig. 10 ). In sum, the engraftment of selection-free IL2RG cDNA targeted CD34 + HSPCs derived from healthy male donors demonstrates the ability to give rise to normal hematopoiesis. As >90% of the non-gene targeted human cells have inactivating INDELs in the IL2RG gene, it is likely that the T and NK cells seen in the mice are derived from gene targeted CD34 + HSPCs. The paucity of these cells in the mice, however, precluded definitive molecular analysis. IL2RG cDNA genome targeting of LT-HSCs The editing of LT-HSCs would provide the long-term maintenance of T-cell function in patients. We performed secondary transplantation studies to assess the robustness of our CRISPR-Cas9-AAV6 genome targeting platform in editing LT-HSCs. CD34 + HSPCs were isolated from the total BM of mice engrafted with IL2RG cDNA targeted HSPCs (from both primary IH and IF engraftments at week 16).
Following overnight culturing, secondary transplants were carried out in sub-lethally irradiated 6- to 8-week-old NSG mice (Fig. 2a ). At 16 weeks following the secondary transplant (totaling 32 weeks of engraftment in immunodeficient mice), end point analysis showed that the median human chimerism level (hCD45 + /HLA-ABC + double-positive cells) of IL2RG cDNA targeted cells ranged from 7.7% to 13.8% (BM) and 6.1% to 11.4% (SP) (Fig. 2d ). The median targeted integration frequencies of the IL2RG cDNA donor were 9.5% or 20% (BM) and 16.4% or 21.7% (SP) (Fig. 2e ). Fluorescence-activated cell sorting (FACS) plots showing BM human engraftment levels from mice injected with cells derived from both conditions are shown (Supplementary Figs. 11, 12 ). Analysis in secondary transplants showed multi-lineage hematopoietic reconstitution with no evidence of abnormal hematopoiesis, thus providing further evidence of efficacy and safety (Fig. 3c ). A summary of the IL2RG cDNA targeted engrafted cells is shown in Tables 1 and 2 . We report that 20% and 9.5% of human cells in the BM derived from IH-IF and IF-IF secondary xenotransplantation experiments, respectively, retain the codon-optimized IL2RG cDNA donor integration, demonstrating a clinically significant level of correction of CD34 + LT-HSCs. Moreover, our median frequencies of IL2RG cDNA targeting in LT-HSCs significantly exceed those reported by other groups, notably Genovese et al. 20 (ZFNs), Schiroli et al. 19 (ZFNs), and Dever et al. 25 (Cas9 RNP), where the percent of HR-GE cells was <5% of the human cells engrafted. These results, therefore, represent the first evidence of high frequencies of HR-GE in LT-HSCs using the CRISPR-Cas9 system. No tumors or abnormal hematopoiesis were observed in any mice that were transplanted with genome-modified cells (RNP or IL2RG cDNA targeted). Collectively, our primary and secondary transplantation results validate the robustness, effectiveness and lack of genotoxicity of our IL2RG cDNA genome targeting approach and strongly support its advancement towards clinical translation. Table 1 Summary of total number of cells and mice injected per condition for primary (1°) and secondary (2°) transplants Table 2 IL2RG cDNA genome targeted frequencies pre- and post-transplant In vivo rescue of lymphopoiesis We investigated whether our gene-targeting approach was reproducible and efficient in SCID-X1 patient-derived CD34 + HSPCs. We edited CD34 + HSPCs from six different SCID-X1 patients with a variety of different pathologic mutations (Fig. 4a ). Five of the six samples were PB-derived CD34 + HSPCs. We achieved high viability (>80%, n = 5) with the CRISPR-Cas9-AAV6 system in the patient-derived cells and high gene-targeting frequencies (median 44.5%, range 30.1–47.0%, n = 6), comparable to healthy donor CD34 + HSPCs (45%, n = 13) (Figs. 4b, c ). A total of 7.3 million edited CD34 + HSPCs derived from patients 1, 2, and 3 were engrafted into 29 NSG pups. Human chimerism was measured at week 16 following IH engraftment, with no statistically significant differences between unmodified and IL2RG cDNA targeted cells, both in the BM and SP samples obtained from mice transplanted with CD34 + HSPCs derived from SCID-X1 patients 1 and 2 (Fig. 4e and Supplementary Fig. 13 ). A statistically significant difference was observed only in BM samples derived from SCID-X1 patient 3 engraftment (** p = 0.0073, Holm–Sidak test) (Supplementary Fig. 13 ).
Importantly, only the codon-optimized IL2RG cDNA (not the mutant allele) was detected in the SP of mice ( n = 8) engrafted with SCID-X1 patient 2 corrected CD34 + HSPCs (Table 3 ), consistent with the survival advantage that a cell with a corrected IL2RG gene has. Multi-lineage analysis of SP samples derived from mice engrafted with IL2RG cDNA targeted SCID-X1 mPB CD34 + HSPCs derived from patient 2 showed that significant levels of erythroid, myeloid, and lymphoid lineages were established (Figs. 4e, f ). Gene corrected cells from both patients 1 and 3 showed high levels of engraftment following transplantation in both BM and SP (Supplementary Fig. 13 ). This work is the first to show in vivo rescue of the lymphoid lineage in SCID-X1 patient-derived CD34 + HSPCs. In sum, these transplantation studies demonstrated that IL2RG cDNA targeted CD34 + HSPCs can engraft and rescue the SCID-X1 phenotype, as demonstrated by multi-lineage reconstitution both in vitro and in vivo. We observed no abnormal hematopoiesis in mice transplanted with HR-GE patient-derived cells, providing further evidence for the safety of the process. Fig. 4 In vivo rescue of the SCID-X1 mutation. a Genomic mapping and description of SCID-X1 mutations. b Percent viability determined at indicated days pre- and post-targeting. Mock (nucleofected only), RNP (nucleofected with RNP only), RNP+AAV6 (nucleofected with RNP and transduced with the AAV6-based IL2RG corrective donor). Shown is data for mobilized peripheral blood CD34 + HSPCs ( n = 5). c Medium scale (1.0 × 10 6 cells) ex vivo genome targeting frequencies of frozen mobilized peripheral blood SCID-X1 cells at day 2 (blue-black circles, n = 6). Arrow shows 45% genome targeting of SCID-X1 patient 2 derived CD34 + HSPCs. d Human cell engraftment analysis at week 17 after intra-hepatic (IH) delivery of IL2RG cDNA targeted (blue-black circles, n = 15) or mutant CD34 + HSPCs (gray circles, n = 4). e Percent cellular composition of the lymphoid, myeloid, and erythroid lineages derived from IL2RG corrected or mutant CD34 + HSPCs. CD3 + : **** p < 0.0001, CD56 + : * p = 0.0146, CD16 + : ** p = 0.0013, CD19 + : ** p = 0.0015, CD235a + : ** p = 0.0022 (Welch's t -test). RNP ribonucleoprotein. f Absolute numbers derived from ( e ). Source data are available in the Source Data file Table 3 Summary of SCID-X1 patient-derived CD34 + HSPC transplants Signaling and proliferation of IL2RG cDNA targeted T cells To assess receptor function and signaling in progenitor cells in which the gene is expressed through the targeted integration of a codon-optimized cDNA into the translational start site of the endogenous locus, we evaluated the proliferation and signaling activity of HR-GE human T lymphocytes derived from adult healthy male donors. Mature T cells depend on proper IL2RG expression and signaling through IL2RG -containing receptors, e.g., IL-2R, to promote proliferation and differentiation 40 . Activation of T cells by CD3/CD28 antibodies leads to a rapid induction of the IL-2 cytokine, which in turn signals through the IL-2R. Subsequent phosphorylation of tyrosine residues on the cytoplasmic domains of the receptors initiates a cascade of events that phosphorylate and activate the signal transducers and activators of transcription 5 (STAT5) proteins. Therefore, we assessed the levels of pSTAT5 in IL2RG cDNA targeted T cells, where the IL2RG cDNA donor contained the tNGFR selectable marker (Fig. 5a ).
Intracellular staining for pSTAT5 from IL2RG cDNA targeted T cells (Fig. 5b ) showed levels of pSTAT5 (the ratio of tNGFR + pSTAT5 + double-positive cells to tNGFR + only cells, marked red) comparable to those of unmodified normal T cells: 69.3 ± 7.0 vs 67.7 ± 4.4 (mean ± s.e.m.), respectively (Fig. 5c ). As expected, knocking out (KO) the IL2RG locus with an IL2RG targeted donor expressing only tNGFR significantly reduced the levels of pSTAT5 (12.7 ± 5.6; mean ± s.e.m.) (Fig. 5b ). We analyzed the mean fluorescence intensity (MFI) of the pSTAT5 level in WT, KI, and KO cells (Fig. 5c ) and found that the KO cells had an extremely low pSTAT5 MFI (as expected), whereas the KI cells had a pSTAT5 MFI that was ~50% of that of the wild-type cells. This lower signaling did not compromise lymphocyte development (Figs. 1 – 4 ) or proliferation (Fig. 5d ). The KI cells did not have higher signaling, which has been hypothesized as a risk factor for transformation. Fig. 5 Evaluation of IL-2 receptor function in IL2RG cDNA targeted T cells. a Schematic of signaling (pSTAT5, bottom) and proliferation (CFSE, top) in vitro assays. b pSTAT5 assay derived FACS plots. Top: healthy male-derived T cells genome targeted with the IL2RG cDNA tNGFR donor (KI) or with a tNGFR-only cassette integrated at the IL2RG endogenous locus (KO). In red are the percent of double-positive IL2RG -tNGFR + pSTAT5 + cells: [4.42%/(4.42% + 3.18%)] × 100 ≈ 58.2%. We compare 58.2% ( IL2RG cDNA targeted T cells) with 58.7% (WT T cells) ( n = 3 biological replicates). c Quantification of IL-2R signaling through the phospho-STAT5 pathway. d pSTAT5 MFI for WT, KI, and KO experiments from ( b ), p = 0.02, Welch's t -test. WT T cells (gray circles, n = 6), IL2RG KI (blue circles, n = 3) and IL2RG KO (orange circles, n = 3). e Proliferation profile of CFSE labeled, TCR stimulated IL2RG cDNA tNGFR + sorted or mock-targeted T cells. Mock-targeted T cells are WT T cells cultured for the same amount of time as the tNGFR + targeted cells and nucleofected in the absence of RNP or of transduction with AAV6. Shown is FACS analysis at days 2, 4, 6, and 8. pSTAT5 phosphorylated STAT5, CFSE carboxyfluorescein succinimidyl ester, KI knocked in, KO knocked out, tNGFR truncated nerve growth factor receptor, IL-2 interleukin 2. Source data are available in the Source Data file To demonstrate that the genome edited IL-2R is permissive for proliferation upon engagement of the IL-2 cytokine, we quantified the levels of proliferation of IL2RG cDNA targeted T cells following T-cell receptor (TCR) stimulation. A carboxyfluorescein succinimidyl ester (CFSE) dilution assay was used to measure whether targeted insertion of the codon-optimized cDNA could support T-cell proliferation. Loss of CFSE signal occurs when cells proliferate, as the dye dilutes with cell division. An overview of the assay is shown (Fig. 5a ). In our experimental settings, we observed a similar proliferation profile in tNGFR + T cells (marking cells in which the IL2RG cDNA had been knocked in) compared with mock-targeted cells (Fig. 5e ). We note that the "unmodified" cells had not undergone prior bead stimulation and so remained quiescent, while the targeted and mock cells had undergone prior bead stimulation, and thus there was residual proliferation without re-stimulation in those cells, giving the broader peak.
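For clarity, the gating arithmetic quoted in the Fig. 5b legend can be checked directly; the quadrant percentages below are the ones quoted in the legend.

```python
# Worked check of the Fig. 5b gating arithmetic: percent of tNGFR+ cells
# that are also pSTAT5+, from the quoted quadrant percentages.
double_positive = 4.42   # tNGFR+ pSTAT5+ (% of all cells, from the legend)
single_positive = 3.18   # tNGFR+ pSTAT5- (% of all cells, from the legend)

print(100 * double_positive / (double_positive + single_positive))  # ~58.2
```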
Overall, our data demonstrate that the genomic integration of an IL2RG codon diverged cDNA at the start site of the endogenous locus preserves normal signaling and proliferation of human T cells. Off-target and karyotype analysis We investigated the specificity of the dsDNA break generated by the CRISPR-Cas9 RNP complex, which could be a potential source of genotoxicity. The off-target activity of the full-length 20 nt and three truncated versions (19 nt, 18 nt, and 17 nt) of the sg-1 guide was assessed at 54 different potential sites predicted either by Guide-Seq in U2OS cells 41 or bioinformatically by COSMID 42 (Fig. 6a ). The analysis was performed in both healthy (Fig. 6a ) and patient-derived CD34 + HSPCs (Fig. 6b ) to assess the specificity of the sg-1 gRNA. At the three sites identified by Guide-Seq analysis, there was no evidence of off-target INDELs. Of the 51 sites identified by COSMID, only two showed evidence of off-target INDELs, both at levels <1% (Fig. 6a ). Using the 20 nt sg-1, we detected an INDEL frequency of 0.59% in an intron of myelin protein zero-like 1 ( MPZL1 ), a cell surface receptor gene involved in signal transduction processes. The 19 nt sg-1 induced a lower frequency of off-target INDELs, 0.11% (Fig. 6a and Table 4 ). We also analyzed the INDEL frequencies of potential off-target sites in genome edited CD34 + HSPCs derived from SCID-X1 patient 1, in which the cells were edited using the 19 nt sg-1 (Fig. 6b ). We found INDEL frequencies of 0.08% at MPZL1 and 0.27% at the ZNF330 site (intergenic and >9 kb from the nearby gene, respectively) (Table 4 ). Off-target activities of the sg-1 guides, WT (20 nt) and truncated (19 nt), were further assessed in the context of a high-fidelity (HiFi) Cas9 43 in SCID-X1 CD34 + HSPCs. The viability, INDELs, and IL2RG cDNA targeting frequency (%HR) were all equivalent (Fig. 6c ), and editing frequencies (% INDELs) (Fig. 6d ) were comparable between WT and HiFi Cas9 (Figs. 6c–f ). Using the 20 nt and 19 nt gRNAs combined with the HiFi Cas9, however, resulted in no detectable INDELs ("background", Table 4 ) at the two sites for which there was a low but detectable INDEL frequency using WT Cas9. Fig. 6 Genome specificity of the IL2RG sgRNA guide. a Heat map of off-target INDEL frequencies quantified by NexGen-Seq at COSMID identified putative off-target locations in healthy CD34 + HSPCs. Levels of NHEJ induced by the 20 nt IL2RG sgRNA and truncated 19 nt, 18 nt and 17 nt versions pre-complexed with WT Cas9 protein at 5:1 molar ratio. b Heat map as in ( a ) of off-target INDEL frequencies derived from the 19 nt IL2RG sg-1 in the genome of SCID-X1 patient 1 derived CD34 + HSPCs. c Percent viability at day 4 of SCID-X1 patient-derived CD34 + HSPCs nucleofected with either wild-type (WT) or high-fidelity (HiFi) SpCas9 protein pre-complexed with either the 20 nt or the 19 nt IL2RG sg-1 ( n = 1). d Percent INDELs measured by TIDE at day 4 in cells as in ( c ) using WT or HiFi Cas9 protein pre-complexed with the 20 nt IL2RG sg-1 (green bars) or 19 nt IL2RG sg-1 (blue bars). e Percent IL2RG cDNA targeting (% HR) as measured by ddPCR at day 4 in cells as in ( c ) generated by either WT or HiFi Cas9 protein pre-complexed with the 20 nt IL2RG sg-1 or ( f ) 19 nt IL2RG sg-1.
Source data are available in the Source Data file Full size image Table 4 Summary of IL2RG sgRNA off-target INDEL frequency analysis Full size table To further assess whether genomic instabilities, particularly translocations, were generated by the CRISPR-Cas9-AAV6-based process, we performed karyotype analysis on CB-derived CD34+ HSPCs from healthy male donors. We chose karyotype analysis over PCR-based translocation assays because we have previously found that the frequency of translocations in CD34+ HSPCs when two on-target breaks (with INDEL frequencies of >80%) were created was 0.2–0.5% 44. The probability of a translocation between the on-target break and a break that has an INDEL frequency of <0.1% is exceedingly low. Whole chromosomal analysis was performed on ≥20 cells from the different conditions (WT, mock only, RNP only, AAV6 only, and RNP+AAV6). The analysis confirmed the absence of any chromosomal abnormalities in 20 out of 20 untreated or mock-treated cells, in 40 out of 40 cells treated with RNP alone or with RNP plus rAAV6, and in 40 out of 40 cells treated with rAAV6 only (Supplementary Fig. 15). Finally, we performed γH2AX and relative survival assays in K562 and 293T cell lines, respectively, to determine and compare the levels of DNA damage and toxicity induced by ZFN, TALEN, and CRISPR-Cas9 nucleases that all target the IL2RG gene (Supplementary Fig. 16). The CCR5 ZFNs were first described in Perez et al. 45 and subsequently used clinically and to modify CD34+ HSPCs 24, 46, 47. The nucleases targeting the IL2RG gene were described previously in Urnov et al. 26 (ZFNs) and Hendel et al. 48 (ZFNs, TALENs, and CRISPR-Cas9). The CRISPR-Cas9 nuclease generated the lowest levels of toxicity, showing fewer γH2AX foci and a higher percent survival of human cells overexpressing each nuclease 49, highlighting the notion that standard TALEN and ZFN nuclease platforms are less specific than CRISPR-Cas9. In conclusion, our off-target analysis confirms that high specificity and activity are achieved using the IL2RG CRISPR-Cas9-AAV6 HR-GE system described here. Discussion Currently there are numerous GE-based clinical trials in the USA and China, none of which are for the treatment of PIDs (clinicaltrials.gov). There have been a number of proof-of-concept GE studies exploring the feasibility and safety of using an HR-mediated approach to correct pathologic mutations in the IL2RG gene as a path to developing an auto-HSPC-based therapy for SCID-X1 19, 20, 26, 50, 51, 52, 53, 54. In particular, a recent study by Schiroli et al. 19, in the process of developing GE of CD34+ HSPCs for clinical translation in SCID-X1, designed a ZFN-based GE platform to integrate a full IL2RG cDNA at intron 1, delivered by an integration-defective lentiviral vector or rAAV6. They were able to generate ~40% INDELs and ~10% HR frequencies in WT CD34+ HSPCs, with targeted integration frequencies of ~25% in CB CD34+ HSPCs derived from one SCID-X1 patient. Notably, Schiroli et al. 19 performed only one experiment combining CRISPR-Cas9 with AAV6, and almost all of their data, including the engraftment data, were from ZFN-modified cells. Our work represents significant progress for CRISPR-Cas9-based approaches: we not only demonstrate high levels of engraftment of targeted cells in LT-HSCs (up to 20%), we also demonstrate targeting and engraftment efficiencies in patient-derived CD34+ HSPCs that exceed the LT-HSC threshold of 10%.
Such high levels of genomic editing of LT-HSCs have not been previously reported and demonstrate that, with advances in technology, significant biologic improvements are possible, with clinically relevant quantitative metrics being met. This level of correction is likely to be curative, based on animal studies 19, on patients who had spontaneous reversion mutations in progenitor cells, and on human gene therapy clinical trials. In the gene therapy clinical trials for SCID-X1, immune reconstitution was achieved with as little as 1% of the cells having gene transfer 3 or with vector copy numbers of only 0.1 in the blood 2. Our results also show a lack of functional toxicity from the CRISPR-Cas9-AAV6 procedure, because LT-HSCs were preserved and because normal human hematopoiesis was obtained from the genome-edited cells. In contrast to Dever et al. 25, who also used a CRISPR-Cas9-AAV6 system, in this work we were not simply making a single-nucleotide correction but instead inserting a therapeutic transgene at a precise genomic location while maintaining high targeting efficiencies (median of 45% in CD34+ HSPCs and up to 20% in LT-HSCs). This targeted cDNA integration approach has the benefit of being able to correct >97% of known SCID-X1 pathogenic mutations owing to the "universal" strategy design, and it should have broader application because most genetic diseases are caused by mutations distributed throughout a gene. The safety of the approach is further supported by the lack of karyotypic abnormalities generated in RNP-exposed CD34+ HSPCs and by INDEL frequencies below the limit of detection, using a high-fidelity version of Cas9, at 54 potential off-target sites identified by bioinformatic and cell-based methods. Even using wild-type Cas9, off-target INDELs were detected at only two sites, both at low frequencies (<0.3%) and at locations of no known functional significance, and they did not result in any measurable perturbations of the cell population in any of the assays used in this work, the most important being the absence of abnormal hematopoiesis in RNP-treated cells. In the course of these studies, we transplanted a full human dose for an infant (the target age that we are planning to treat in a phase I/II clinical trial) into NSG mice (28.4 million CD34+ HSPCs), a functional safety standard that the Food and Drug Administration (FDA) has used prior to approving a phase I clinical trial of ZFN editing of CD34+ HSPCs 24. The persistence of IL2RG gene corrected cells for 8 months (16 weeks in the primary followed by 16 weeks in the secondary recipient) following transplantation into NSG mice, with multi-lineage hematopoiesis and without evidence of abnormal hematopoiesis, also highlights the general lack of toxicity of the approach. An important aspect of our studies is that we achieved a median correction frequency of 44.5% without selection in PB patient-derived CD34+ HSPCs, a cell source that is being used in lentiviral-based gene therapy trials. These functionally gene corrected CD34+ HSPCs showed engraftment following transplantation into NSG mice equivalent to that of unmanipulated patient-derived CD34+ HSPCs, again providing evidence that the GE manufacturing process was not damaging the cells in a significant way. We also demonstrated that the "universal" strategy of knocking a codon-optimized wild-type cDNA into the endogenous start site functionally rescues gene function, using both in vitro and in vivo assays of T and NK-cell development and function.
These results include the rescue of T and NK-cell development and function from patient-derived CD34+ HSPCs. While the ultimate test of the safety and efficacy of our approach will come in a phase I/II gene therapy clinical trial, we believe we have shown strong evidence, using state-of-the-art, gold standard methods, of the safety and efficacy of the CRISPR-Cas9-AAV6 approach of targeting a cDNA to the endogenous translational start site to functionally correct disease-causing mutations throughout a gene. It is likely, however, that specific details of the cDNA targeting strategy will have to be tailored to each gene in order to achieve the safe and effective levels of expression that are needed. Our rationale for developing a GE-based gene therapy for SCID-X1 is to provide a safe, efficient, precise, and effective treatment option for patients. Although it is encouraging that improved methods for allo-HSCT are being developed and that lentiviral-based gene therapy for SCID has been shown to be safe and effective, the long-term safety, efficacy, and limitations of these approaches remain to be determined. Thus, it is important to continue to develop alternative strategies for curing patients with SCID-X1 using approaches that are less genotoxic (the mutational burden from GE is >1000-fold less than for lentiviral-based modification strategies, comparing the frequency of off-target INDELs to the frequency of uncontrolled lentiviral insertions). Ideally, multiple effective options will be available to patients, their families, and their treating physicians in the future, giving them the opportunity to choose the approach that best fits their needs and circumstances. In sum, the safety and efficacy data presented in this study provide strong support for the clinical development of functional gene correction using the CRISPR-Cas9-AAV6 GE methodology to establish a long-lasting therapeutic, potentially curative, strategy beneficial to >97% of SCID-X1 patients. Methods CRISPR-Cas9 sgRNA Seven IL2RG exon 1-specific, 20 nt oligomer sequences, used in the initial screen, were identified using the online CRISPOR software (crispor.tefor.net) and synthesized (Synthego, Redwood City, CA, USA) as part of a chimeric 100 nt sgRNA. Chemically modified sgRNA oligomers were manufactured using a proprietary synthesizer by Synthego Corp. (Redwood City, CA, USA) on controlled-pore glass (AM Chemicals, Carlsbad, CA, USA) using 2′-O-t-butyldimethylsilyl-protected and 2′-O-methyl ribonucleotide amidites (ChemGenes, Wilmington, MA) according to established procedures. Standard ancillary reagents for oxidation, capping, and detritylation were used (EMD Millipore, Cincinnati, OH). Formation of internucleotide phosphorothioate linkages was performed using ((dimethylaminomethylidene)amino)-3H-1,2,4-dithiazoline-3-thione (DDTT; ChemGenes, Wilmington, MA). A set of 2′-O-methyl 3′-phosphorothioate (MS) 30 modified guides, comprising the full-length 20 nt IL2RG sgRNA guide #1 and three additional versions having 1, 2, and 3 nt removed from the 5′ end of the complementary region, was synthesized (TriLink Biotechnologies, San Diego, CA, USA) and purified using reverse-phase high-performance liquid chromatography. Purity was confirmed by liquid chromatography–mass spectrometry.
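The truncation scheme just described (removing 1–3 nt from the 5′ end of the 20 nt protospacer) is simple to express programmatically. A minimal sketch follows; the 20 nt sequence shown is a made-up placeholder, not the actual sg-1 protospacer, which is not given in this excerpt:

```python
# Generate truncated guide (protospacer) sequences by trimming the 5' end of a
# full-length 20 nt spacer, as done for the 19/18/17 nt versions of sg-1.
# NOTE: the spacer below is a hypothetical placeholder, not the real sg-1.

full_spacer = "GACGTACGATCGATCGGACT"  # hypothetical 20 nt protospacer

truncated = {20 - k: full_spacer[k:] for k in range(4)}
for length, seq in sorted(truncated.items(), reverse=True):
    print(f"{length} nt: {seq}")
```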
AAV6-based DNA donor design and vector production All homology-based AAV6 vector plasmids were cloned into the pAAV-MCS plasmid containing AAV2-specific inverted terminal repeats (ITRs) (Stratagene, now part of Agilent Technologies, Santa Clara, CA, USA) using the Gibson Assembly cloning kit (New England Biolabs, cat # E5510S) according to the manufacturer's instructions. The corrective, codon-diverged IL2RG cDNA was designed to contain silent mutations that generated 78% sequence homology to the endogenous, wild-type gene. All AAV6 viruses were produced in 293T cells in the presence of 1 ng/ml sodium butyrate (Sigma-Aldrich, cat. no. 303410) and purified 48 h later using an iodixanol gradient approach as previously described 5. The following provides additional detail: all AAV6 viruses were produced in 293T cells seeded at 14 × 10^6 cells per dish in ten 15-cm dishes 1 day before transfection. In all, 6 μg of ITR-containing plasmid and 22 μg of pDGM6 (gift from Dr. David Russell, University of Washington, Seattle, WA, USA), containing the AAV6 cap genes, AAV2 rep genes, and adenovirus helper genes, were transfected per 15-cm dish using PEI at a 4:1 ratio (PEI to DNA). Forty-eight hours post transfection, AAV6 was harvested from cells by three freeze–thaw cycles, followed by a 45-min incubation with TurboNuclease at 250 U/ml (Abnova, Heidelberg, Germany). AAV vectors were purified using an iodixanol density gradient and ultracentrifugation at 48,000 rpm for 2 h at 18 °C. AAV6 particles were extracted from the 40–60% gradient interface and dialyzed three times in PBS (phosphate-buffered saline) containing 5% sorbitol. A 10K MWCO Slide-A-Lyzer G2 dialysis cassette (Thermo Fisher, Santa Clara, CA, USA) was used for dialysis. Pluronic acid was added to the purified AAV6 at a final concentration of 0.001%, and the virus was aliquoted and stored at −80 °C. CD34+ HSPCs Mobilized peripheral blood (mPB) and bone marrow (BM) CD34+ HSPCs were purchased from AllCells (Alameda, CA, USA). Cells were thawed using a published protocol 55. Freshly purified CB-derived CD34+ HSPCs, of male origin, were obtained through the Binns Program for Cord Blood Research at Stanford University, under informed consent. Mononuclear cell (MNC) isolation was carried out by density gradient centrifugation using Ficoll-Paque Plus (400 × g for 30 min without brake). Following two platelet washes (200 × g, 10–15 min with brake), HSPCs were labeled and positively selected using the CD34+ Microbead Kit Ultrapure (Miltenyi Biotec, San Diego, CA, USA) according to the manufacturer's protocol. Enriched cells were stained with allophycocyanin (APC) anti-human CD34 (clone 561; BioLegend, San Jose, CA, USA), and sample purity was assessed on an Accuri C6 flow cytometer (BD Biosciences, San Jose, CA, USA). Following purification or thawing, CD34+ HSPCs were cultured for 36–48 h at 37 °C, 5% CO2 and 5% O2, at a density of 2.5 × 10^5 cells/ml in StemSpan SFEM II (Stemcell Technologies, Vancouver, Canada) supplemented with stem cell factor (SCF) (100 ng/ml), thrombopoietin (TPO) (100 ng/ml), Fms-like tyrosine kinase 3 ligand (Flt3-ligand) (100 ng/ml), interleukin 6 (IL-6) (100 ng/ml), StemRegenin 1 (SR1) (0.75 mM), and UM171 (35 nM, Stemcell Technologies). For secondary engraftment studies, CD34+ HSPCs were purified from the total BM of NSG mice at end point analysis. Sufficiently pure samples (≥80% CD34+) were pooled and cultured at 37 °C, 5% CO2, and 5% O2 for 12 h prior to secondary transplant.
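As a quick sanity check on the AAV6 transfection step above, the per-dish plasmid amounts scale straightforwardly to the full ten-dish production run. A small sketch, assuming the stated 4:1 PEI:DNA ratio is by mass (the text does not state the units explicitly):

```python
# Scale the AAV6 production transfection from per-dish amounts to a full run
# of ten 15-cm dishes. Assumes the stated 4:1 PEI:DNA ratio is by mass.
itr_plasmid_ug_per_dish = 6.0   # ITR-containing donor plasmid
pdgm6_ug_per_dish = 22.0        # pDGM6 packaging/helper plasmid
dishes = 10

total_dna_ug = (itr_plasmid_ug_per_dish + pdgm6_ug_per_dish) * dishes
total_pei_ug = 4 * total_dna_ug  # 4:1 PEI:DNA (mass ratio, assumed)

print(f"total DNA: {total_dna_ug:.0f} ug")  # 280 ug
print(f"total PEI: {total_pei_ug:.0f} ug")  # 1120 ug
```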
T-cell purification Primary human T cells were obtained from healthy male donors at the Stanford University School of Medicine Blood Center after informed consent was obtained, and were purified by Ficoll density gradient centrifugation followed by red blood cell lysis in ammonium chloride solution (Stemcell Technologies, Vancouver, Canada) and magnetic negative selection using a Pan T-cell isolation kit (Miltenyi Biotec, San Diego, CA, USA) according to the manufacturer's instructions. Cells were cultured at 37 °C, 20% O2 and 5% CO2 in X-Vivo 15 (Lonza, Walkersville, MD, USA) supplemented with 5% human serum (Sigma-Aldrich, St. Louis, MO, USA), 100 IU/ml human recombinant IL-2 (Peprotech, Rocky Hill, NJ, USA), and 10 ng/ml human recombinant IL-7 (BD Biosciences, San Jose, CA, USA). Cells were stimulated with immobilized anti-CD3 (OKT3, Tonbo Biosciences, San Diego, CA, USA) and soluble anti-CD28 (CD28.2, Tonbo Biosciences) for three days prior to electroporation. GE and INDEL quantification Editing of all primary cells was carried out using a ribonucleoprotein (RNP) system at a molar ratio of either 1:2.5 or 1:5 (Cas9:sgRNA), unless otherwise stated. Recombinant S. pyogenes Cas9 protein was purchased from IDT (Integrated DNA Technologies, Coralville, Iowa, USA). Nucleofection was performed in P3 nucleofection solution (Lonza) with a Lonza Nucleofector 4D (program DZ-100). Cells were plated at a concentration of 1.0 × 10^5–2.5 × 10^5 cells/ml. For T-cell editing, electroporation was performed using the Lonza Nucleofector 4D (program EO-115) with the same RNP composition as used for CD34+ HSPC editing. INDEL frequencies were quantified using the TIDE online software on genomic DNA extracted using QuickExtract (Epicentre, an Illumina company, cat no. QE09050) according to the manufacturer's specifications. Genome targeting and quantification CD34+ HSPCs nucleofected with the IL2RG-specific RNP system were plated at a density of 5.0 × 10^5 cells/ml and transduced with the AAV6 donor at a multiplicity of infection (MOI) of 200,000 vg/cell within 15 min of nucleofection. Cells were cultured at 37 °C, 5% CO2, 5% O2 for 36–48 h, after which they were either re-plated in fresh media at a density of 2.5 × 10^5 cells/ml or prepared for xenotransplantation studies. Absolute quantification of the levels of genomic integration was carried out using Droplet Digital PCR (ddPCR, Bio-Rad, Hercules, CA, USA). Genomic DNA was extracted as described in the previous section. In all, 1 μg of genomic DNA was digested with EcoRV-HF (20 U) in CutSmart buffer at 37 °C for 1 h. The ddPCR reaction contained 1× reference primer/probe mix synthesized at a 3.6:1 ratio (900 nM primer and 250 nM FAM-labeled probe), 1× target primer/probe mix synthesized at a 3.6:1 ratio (HEX-labeled probe), 1× ddPCR Supermix for probes without dUTP, 50 ng of digested DNA, and water to a total volume of 25 μl. The primer and probe sequences are detailed in Table S2. Genomic DNA in the ddPCR mixture was partitioned into individual droplets using the QX100 Droplet Generator, transferred to a 96-deep-well PCR plate, and amplified in a Bio-Rad PCR thermocycler. The following ddPCR program was optimized to amplify a 500-bp amplicon: step 1—95 °C for 10 min, ramp 1 °C/s; step 2—94 °C for 30 s, ramp 1 °C/s; step 3—60.8 °C for 30 s, ramp 1 °C/s; step 4—72 °C for 2 min, ramp 1 °C/s; step 5—repeat steps 2–4 for 50 cycles; step 6—98 °C for 10 min, ramp 1 °C/s; step 7—4 °C, ramp 1 °C/s.
The Bio-Rad Droplet Reader and QuantaSoft software were used to read and analyze the experiment following the manufacturer's guidelines (Bio-Rad). Absolute quantification, as copies of DNA/μl, was determined for the reference endogenous IL2RG gene and for the integrated IL2RG cDNA. Percent targeting in the total population was calculated as the ratio of HEX to FAM signal. For all targeting experiments, genomic DNA was derived from male donors. Quantification of IL2RG cDNA targeted integration frequencies in SCID-X1 patient cells was assessed by agarose gel quantification as the IL2RG cDNA signal intensity ratio. Methylcellulose CFU assay Two days post genome targeting, single cells were sorted onto 96-well plates coated with MethoCult Optimum (StemCell Technologies, cat no. H4034). Fourteen days later, colonies derived from targeted and mock-treated cells were counted and scored based on morphological features pertaining to colony-forming unit-erythroid (CFU-E), burst-forming unit-erythroid (BFU-E), colony-forming unit-granulocyte, monocyte (CFU-GM), and CFU-GEMM. Genotyping analysis was performed to quantify the percentage of mono-allelic targeting. A three-primer IL2RG-specific genotyping PCR protocol was established and optimized as follows: IL2RG WT-F1: 5′-GGGTGACCAAGTCAAGGAAG-3′; int-IL2RG-R1: 5′-GATGGTGGTATTCAAGCCGACCCCGA-3′; IL2RG WT-R2: 5′-AATGTCCCACAGTATCCCTGG-3′. The PCR reaction contained 0.5 μM of each of the three primers, 1× Phusion High-Fidelity Master Mix, 150–200 ng of genomic DNA, and water to a final volume of 25 μl. The following PCR program generated an integration band of 543 bp from the F1 and R1 primer set and an endogenous band of 1502 bp from the F1 and R2 primer set: step 1—98 °C for 30 s; step 2—98 °C for 10 s; step 3—66 °C for 30 s; step 4—72 °C for 30 s; step 5—repeat steps 2–4 for a total of 30 cycles; step 6—72 °C for 7 min; step 7—4 °C. OP9-iDll1 system OP9 cells were generously provided by Dr. Irving Weissman's lab and generated as previously described 40. Briefly, OP9 stromal cells were infected with two lentiviral constructs, the first containing a TET-ON tetracycline transactivator (rtTA3) under the control of a constitutive promoter (EF1a) and linked to turboRFP, and the second containing the Dll1 gene under the control of a tet-responsive element (TRE) promoter and linked to turboRFP. In the presence of tetracycline or doxycycline (DOX), rtTA3 rapidly activates expression of Dll1 and turboRFP. Lymphoid differentiation of patient-derived CD34+ HSPCs SCID-X1 patient-derived CD34+ HSPCs were targeted with the IL2RG cDNA corrective donor. Forty-eight hours post targeting, 300 cells derived from either untargeted or IL2RG cDNA targeted samples were sorted onto a well of a 96-well plate seeded 48 h in advance with 50,000 OP9-iDll1 cells. Cells were incubated at 37 °C, 5% CO2, 10% O2 for 1 week in activation media containing: alpha-MEM base media (ThermoFisher, cat no. 32561102) supplemented with 10% fetal bovine serum (FBS; GemCell, cat no. 100-500), monothioglycerol (MTG) (100 μM), ascorbic acid (50 μg/ml), 1× penicillin/streptomycin, SCF (10 ng/ml, PeproTech, cat no. AF-300-07), Flt-3L (5 ng/ml, PeproTech, cat no. AF-300-19), IL-7 (5 ng/ml, PeproTech, cat no. 200-07), IL-3 (3.3 ng/ml, PeproTech, cat no. AF-200-03), granulocyte-macrophage colony-stimulating factor (10 ng/ml, PeproTech, cat no. AF-300-03), TPO (10 ng/ml, PeproTech, cat no. AF-300-18), EPO (2 U/ml, PeproTech, cat no. 100-64), IL-15 (10 ng/ml, PeproTech, cat no. AF-200-15), and IL-6 (10 ng/ml, PeproTech, cat no. 200-06).
After 7 days, half the medium was exchanged and DOX was added at a final concentration of 1 μg/ml. In vitro multi-lineage differentiation analysis Lymphoid, myeloid, and erythroid differentiation potential was determined by FACS analysis at 1 week post DOX induction. In all, 100% growth was obtained from all wells seeded with 300 targeted or mock-treated cells. Media were removed from all positive wells and cells were washed in 1× PBS. Cells were re-suspended in 50 μl MACS buffer (1× PBS, 2% FBS, 2 mM EDTA), blocked for nonspecific binding (5% vol/vol human FcR blocking reagent, Miltenyi, cat no. 130-059-901), stained for live/dead discrimination using the LIVE/DEAD Blue dead cell staining kit for UV (ThermoFisher Scientific, cat no. L23105), and stained (30 min, 4 °C, dark) using CD3 PerCP/Cy5.5 (HiT3a, BioLegend), CD4 BV650 (OKT4, BioLegend), CD8 APC (HiT8a, BioLegend), CD11c BV605 (3.9, BioLegend), CD14 BV510 (M5E2, BioLegend), CD19 FITC (HIB19, BioLegend), CD33 AF-300 (WM53, BD Pharmingen), CD45 BV786 (BD Pharmingen), CD56 PE (MEM-188, BioLegend), CD235a PE-Cy7 (HI264, BioLegend), and CD271 (tNGFR) CF-594 (C40-1457, BD Horizon). Phosphorylated STAT5 in vitro assay To assess STAT5 phosphorylation in response to cytokine stimulation, purified human T cells were cultured for 7 days post electroporation and starved overnight in medium lacking serum and cytokines. Samples were split and either stimulated with IL-2 (100 U/ml) and IL-7 (10 ng/ml) or left unstimulated. Cells were split again, fixed and permeabilized using 4% PFA and methanol, and stained with CD3 PE (UCHT1, BioLegend) and CD271 (tNGFR) APC (ME20.4, BioLegend). Intracellular antigens were stained with pSTAT5 AF-488 (pY694, BD Biosciences) or isotype control (BD Biosciences). FACS analysis was performed on an Accuri C6 (BD Biosciences) or CytoFLEX (Beckman Coulter), and data analysis was performed using FlowJo. CFSE cellular proliferation of IL2RG targeted human T cells Purified human T cells were nucleofected alone (mock treated) or in the presence of the long corrective IL2RG cDNA-tNGFR DNA donor vector. NGFR-bright T cells were sorted. NGFR-bright or mock-treated cells were labeled with CFSE (BioLegend) according to the manufacturer's protocol and either re-stimulated with anti-CD3/anti-CD28/IL-2/IL-7 as described in the previous section or left unstimulated (IL-7 only). Targeting levels were monitored and quantified based on tNGFR expression and on absolute quantification of the integrated IL2RG cDNA by ddPCR. Xenotransplantation of genome targeted CD34+ HSPCs into mice For all human engraftment studies, we used freshly purified CB-derived CD34+ HSPCs from healthy male donors, under informed consent. Human engraftment studies designed to rescue the disease phenotype were carried out using frozen mPB CD34+ HSPCs derived from SCID-X1 patients 1–3. SCID-X1 patients were given subcutaneous injections of granulocyte colony-stimulating factor (G-CSF) (filgrastim, Neupogen®; Amgen, Thousand Oaks, CA) for 5 consecutive days at 10–16 mcg/kg/day and one dose of plerixafor for mobilization and apheresis (National Institutes of Allergy and Infectious Disease IRB-approved protocol 94-I-0073). PB CD34+ HSPCs were selected from the leukapheresis product using the Miltenyi CliniMACS. Human engraftment experimental design and mouse handling followed a protocol approved by the Stanford University Administrative Panel on Laboratory Animal Care (APLAC). Cells used for engraftment studies were exposed to a maximum of 4 days of ex vivo culture.
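Before turning to the engraftment studies, the ddPCR targeting quantification described earlier reduces to a simple ratio of target (HEX, integrated cDNA) to reference (FAM, endogenous locus) concentrations. A minimal sketch follows; the droplet-reader concentrations are invented purely for illustration:

```python
# Illustrative %HR calculation from ddPCR output, following the scheme above:
# percent targeting = HEX (integrated IL2RG cDNA) / FAM (reference IL2RG locus).
# Concentrations (copies/ul) are invented for illustration; real values come
# from the QuantaSoft absolute quantification.

def percent_targeting(hex_copies_per_ul: float, fam_copies_per_ul: float) -> float:
    return 100.0 * hex_copies_per_ul / fam_copies_per_ul

print(f"{percent_targeting(450.0, 1000.0):.1f}% HR")  # -> 45.0% HR
```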
Intrahepatic (IH) primary (1°) human engraftment In all, 1.0 × 10^5 to 2.5 × 10^5 cells derived from IL2RG cDNA targeted cells or mock-treated cells (electroporated in the absence of RNP and never exposed to AAV6) were re-suspended in 25–30 μl of freshly prepared CD34+ complete media with the addition of UM171 and SR1. Three- to four-day-old NSG pups were irradiated with 100 cGy and immediately engrafted IH using an insulin syringe with a 27 gauge × ½″ needle. A total of 2.15 × 10^6 cells from each condition was injected into 11 pups per condition. In all, 18/22 engrafted pups were analyzed at week 16 post engraftment. The level of human engraftment was assessed at weeks 8 and 12 using BM aspirates and PB samples. At week 16 or later, end point analysis was done on total BM, spleen (SP), liver, and PB. For total BM analysis, bones were harvested from the tibiae, femurs, sternum, and spine of each mouse and ground using a mortar and pestle. MNCs were purified by Ficoll gradient centrifugation (Ficoll-Paque Plus, GE Healthcare, Sunnyvale, CA, USA) for 25 min at 2000 × g at room temperature. SP and liver samples were ground against a 40 μm mesh, transferred to a FACS tube, and spun down at 300 × g for 5 min at 4 °C. Red blood cells were lysed during a 10- to 12-min incubation on ice with 500 μl of 1× ACK lysis buffer (ThermoScientific, cat no. A1049201). The reaction was quenched and cells were washed with MACS buffer (2–5% FBS, 2 mM EDTA, and 1× PBS). PB samples were treated with 500 μl of 2% dextran and incubated at 37 °C for 30 min to 1 h. In all, 800 μl to 1 ml of the top layer was transferred to a FACS tube, spun down at 300 × g for 5 min, and red blood cells were lysed as already described. Cells purified from all four sources were re-suspended in 50 μl MACS buffer, blocked, stained with LIVE/DEAD staining solution, and stained for 30 min at 4 °C in the dark with the following antibody panel: CD3 PerCP/Cy5.5 (HiT3a, BioLegend), CD19 FITC (HIB19, BioLegend), mCD45.1 PE-Cy7 (A20, BioLegend), CD16 PE-Cy5 (3G8, BD Pharmingen), CD235a PE (HI264, BioLegend), HLA A-B-C APC-Cy7 (W6/32, BioLegend), CD33 AF-300 (WM53, BD Pharmingen), CD8 APC (HiT8a, BioLegend), CD45 BV786 (HI3a, BD Horizon), CD4 BV650 (OKT4, BioLegend), CD11c BV605 (BioLegend), CD14 BV510 (M5E2, BioLegend), and CD56 Pacific Blue (MEM-188, BioLegend). Intrafemoral (IF) primary (1°) human engraftment In all, 5.0 × 10^5 cells derived from WT, mock-treated, RNP-treated, or IL2RG cDNA targeted cells were injected IF into 6–8-week-old NSG mice. Mice were irradiated with 200 cGy 2–4 h prior to engraftment. Cells were prepared in the same fashion as described in the IH section. A total of 2.0 × 10^6 WT cells was injected into four mice, 3.5 × 10^6 mock-treated cells into seven mice, 2.0 × 10^6 RNP-treated cells into four mice, and 7.5 × 10^6 IL2RG cDNA targeted cells into 15 mice. In all, 29/30 injected mice were analyzed at week 16 post engraftment, as described in the IH engraftment section. Secondary (2°) human engraftment Secondary engraftment experiments were derived from both IH- and IF-engrafted human cells. From the IH mock and IL2RG cDNA targeted engrafted mice, total BM was collected at week 16 post primary engraftment, MNCs were purified using Ficoll gradient centrifugation, and CD34+ cells were enriched using CD34+ microbeads (Miltenyi).
Enriched cells were pooled from five mice engrafted with mock-treated cells and from seven mice engrafted with IL2RG cDNA targeted cells, and cultured overnight in complete CD34+ media containing UM171 and SR1. Following overnight incubation, cell count and viability were determined: 2.47 × 10^6 cells at 85.5% viability for mock-treated cells, and 4.8 × 10^6 cells at 84% viability for IL2RG cDNA targeted cells. In all, 3.5 × 10^5 mock-treated cells and 5.0 × 10^5 IL2RG cDNA targeted cells were engrafted IF into eight 6–8-week-old, irradiated NSG mice (four males and four females). Secondary engraftment experiments derived from IF primary engraftments were carried out as described above with the following modification: 5.0 × 10^5 CD34+ enriched cells derived from the WT, mock, and RNP primary engraftment assays were injected IF into four 6–8-week-old NSG mice, and 5.0 × 10^5 CD34+ enriched cells derived from IL2RG cDNA targeted cells were injected IF into twelve 6–8-week-old NSG mice. Equal numbers of male and female mice were used. IH primary (1°) engraftment of patient-derived cells Frozen mPB CD34+ HSPCs derived from SCID-X1 patients were thawed and genome targeted as described in the previous section. In all, 2.5 × 10^5 cells were injected IH into 3–4-day-old, irradiated NSG pups. GUIDE-Seq sgRNAs were generated by cloning annealed oligos containing the IL2RG target sequence into pX330 (gift from Feng Zhang, Addgene #42230) 56. In all, 200,000 U2OS cells (ATCC #HTB-96) were nucleofected with 1 μg of pX330 Cas9 and gRNA plasmid and 100 pmol dsODN using SE cell line nucleofection solution and the CA-138 program on a Lonza 4D-Nucleofector. The nucleofected cells were seeded in 500 μl of McCoy's 5a Medium Modified (ATCC) in a 24-well plate. Genomic DNA (gDNA) was extracted 3 days post nucleofection using a Quick-DNA Miniprep Plus kit (Zymo Research). Successful integration of the dsODN was confirmed by RFLP assay with NdeI. In all, 400 ng of gDNA was sheared using a Covaris LE220 ultrasonicator to an average length of 500 bp. Samples were prepared for GUIDE-Seq 41 and sequenced on the Illumina MiSeq. Briefly, solid-phase reversible immobilization magnetic beads were used to isolate genomic DNA, which was further sheared to an average of 500 bp (Covaris S200), end-repaired, and ligated to adaptors containing an 8-nt random molecular index. Target enrichment was achieved through two rounds of nested PCR using primers complementary to the oligo tag. We analyzed GUIDE-Seq data using the standard pipeline 41 with a reduced gap penalty for better detection of off-target sites containing DNA or RNA bulges. Bioinformatic off-target identification Potential off-target sites for the IL2RG gRNA in the human genome (hg19) were identified using the web tool COSMID 42 with up to three mismatches allowed in the 19 PAM (protospacer adjacent motif)-proximal bases. After off-target site ranking, 45 sites were selected for off-target screening. Off-target validation Frozen mPB CD34+ cells (AllCells) were electroporated with 300 μg/ml of Cas9 and 160 μg/ml of sgRNA. Genomic DNA was extracted 48 h after RNP delivery. Off-target sites were amplified by locus-specific PCR. PCR primers contained adapter sequences to facilitate amplicon barcoding via a second round of PCR as previously described 57. All amplicons were pooled at an equimolar ratio and sequenced on the Illumina MiSeq according to the manufacturer's instructions using custom sequencing primers for Read 2 and the Read Index.
Sequencing data were analyzed using a custom INDEL quantification pipeline 58. Karyotype analysis Fresh CB CD34+ HSPCs were purified and genome edited or targeted as previously described. Four days post ex vivo culturing and manipulation, 5 × 10^5 cells from WT untreated, mock, RNP only, RNP plus AAV6, or AAV6 only treated conditions were processed by the Stanford Cytology Labs at Stanford University. Karyotyping analysis was performed on 20 cells derived from each condition. IL2RG-specific genotoxicity assays in human cell lines Levels of γH2AX induced by different classes of engineered nucleases were quantified by measuring the phosphorylation of histone H2AX, a marker of DSB formation. K562 cells were nucleofected with the indicated doses of each nuclease expression plasmid, and the percentage of γH2AX+ cells was measured by FACS at 48 h post nucleofection. 293T cells were co-transfected with plasmids expressing GFP and nuclease. GFP-positive cells were analyzed at day 2 and again at day 6 by FACS. Percent survival relative to the I-SceI control was calculated as follows: $$\frac{\text{Nuclease day 6}/\text{Nuclease day 2}}{\text{I-SceI day 6}/\text{I-SceI day 2}} \times 100$$ A value of 100 denotes no toxicity, while values <100 indicate toxicity. FACS analysis All FACS analyses pertaining to the OP9-iDll1 and human engraftment studies were performed on a FACS Aria II SORT instrument in the FACS core facility of the Stanford University Institute for Stem Cell Biology and Regenerative Medicine. Statistical analysis Statistical analysis was done with Prism 7 (GraphPad Software). Ethics and animal approval statement All of the studies performed in this work comply with all relevant ethical regulations. The animal studies were reviewed, approved, and monitored by the Stanford University IACUC. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data availability Sequencing data have been deposited in NCBI BioProject under accession code PRJNA526905. The authors declare that the data supporting the findings of this study are available within the paper and its supplementary information files or from the authors upon reasonable request. Change history 04 December 2019 An amendment to this paper has been published and can be accessed via a link at the top of the paper. 26 April 2019 The original version of this Article omitted the following from the Acknowledgements: "G.B. acknowledges the support from the Cancer Prevention and Research Institute of Texas (RR140081 and RR170721)." This has now been corrected in both the PDF and HTML versions of the Article.
In preclinical trials, Stanford scientists and their collaborators harnessed the gene-editing system CRISPR-Cas9 to replace the mutated gene underpinning the devastating immune disease. Very rarely, a boy is born with a mutation that renders his immune system barren—devoid of any and all immune cells. The disease, X-linked severe combined immunodeficiency, or SCID-X1, often is referred to as the bubble boy disease. It affects only males and is lethal if not treated in the first year of life. Now, scientists at the School of Medicine and their collaborators have used the gene-editing system CRISPR-Cas9 to devise a new treatment to replenish immune cells in mouse models of SCID-X1. The results are promising, the scientists said, because they believe the treatment could potentially work in humans, as well. SCID-X1 affects about 1 in 50,000 male births. Those with the disease suffer from a debilitating mutation in a single gene, IL2R gamma. When this gene is defective, the immune system never develops. The standard treatment for patients with SCID-X1 is a bone marrow transplant, which supplies them with stem cells that will give rise to a working immune system. But the transfer process is tricky and not guaranteed to work. So, Matthew Porteus, MD, Ph.D., professor of pediatrics, came up with a new idea: correct the genes in the patients' own cells. Through CRISPR-Cas9, Porteus and his team have done just that. Using cell samples that came from people with SCID-X1, the researchers genetically altered the class of stem cells that give rise to blood and immune cells. Their approach got the gene working again. Each mouse that received the edited cells began generating new immune cells and displayed no detectable adverse side effects. "To our knowledge, it's the first time that human SCID-X1 cells edited with CRISPR-Cas9 have been successfully used to make human immune cells in an animal model," said postdoctoral scholar Mara Pavel-Dinu, Ph.D. A paper describing the work was published online April 9 in Nature Communications. Porteus is the senior author, and Pavel-Dinu is the first author. Editing in a solution Gene-based therapy for SCID is not new. In the 1990s, scientists began to dabble in gene therapies that used a virus to deliver a new, functional IL2R gamma gene. "It was very effective, but about 25 percent of the patients developed a leukemia because the virus integrated into an erroneous gene," Porteus said. "It showed both the promise of what gene therapy could do and highlighted the area that needed to be improved." Porteus' approach uses CRISPR-Cas9 to create a double-stranded break in DNA to insert a healthy copy of the IL2R gamma gene in the stem cells that create immune cells. Using the gene-editing system, scientists tweaked cells from six people with SCID-X1 and then transplanted those cells into mouse models of SCID-X1. Those mice were then not only able to make their own immune cells, but many of the edited cells retained something called "stemness," meaning that they maintained their ability to continually create new cells. "The idea is that these modified stem cells will give rise to the blood system and the immune system for the entirety of the patient's life, which we hope is 90 or more years," Porteus said. "And we see evidence for that in our study." Popping the bubble "We've showed that this is a novel and effective strategy to potentially treat this disease, but the other big thing here is safety," Porteus said. 
"We don't see any abnormalities in the mice that receive the treatment. More specifically, we also performed genetic analysis to see if the CRISPR-Cas9 system made DNA breaks at places that it's not supposed to, and we see no evidence of that." That's crucial, Porteus said, because it ensures that other healthy genes aren't being erroneously tampered with. Translating lab research to a patient population takes time, Porteus said, but he's optimistic that if larger mouse studies are successful, the CRISPR-Cas9 gene therapy could be piloted in human patients in the next year or two through the Stanford Center for Definitive and Curative Medicine.
10.1038/s41467-019-09614-y
Medicine
Take a stand and be active to reduce chronic disease, make aging easier, research finds
www.biomedcentral.com/1471-2458/13/1071 Journal information: International Journal of Behavioral Nutrition and Physical Activity, BMC Public Health
http://www.biomedcentral.com/1471-2458/13/1071
https://medicalxpress.com/news/2014-01-chronic-disease-aging-easier.html
Abstract Background Physical activity and sitting time independently contribute to chronic disease risk, though little work has focused on aspirational health outcomes. The purpose of this study was to examine associations between physical activity, sitting time, and excellent overall health (ExH) and quality of life (ExQoL) in Australian adults. Methods The 45 and Up Study is a large Australian prospective cohort study (n = 267,153). Present analyses are from 194,545 participants (48% male; mean age = 61.6 ± 10.7 yrs) with complete baseline questionnaire data on exposures, outcomes, and potential confounders (age, income, education, smoking, marital status, weight status, sex, residential remoteness and economic advantage, functional limitation and chronic disease). The Active Australia survey was used to assess walking, moderate, and vigorous physical activity. Sitting time was determined by asking participants to indicate the number of hours per day usually spent sitting. Participants reported overall health and quality of life, using a five-point scale (excellent—poor). Binary logistic regression models were used to analyze associations, controlling for potential confounders. Results Approximately 16.5% of participants reported ExH, and 25.7% reported ExQoL. In fully adjusted models, physical activity was positively associated with ExH (adjusted odds ratio (AOR) for most versus least active = 2.22, 95% CI = 2.00, 2.47; P trend < 0.001) and ExQoL (AOR for most versus least active = 2.30, 95% CI = 2.12, 2.49; P trend < 0.001). In fully adjusted models, sitting time was inversely associated with ExH (AOR for least versus most sitting group = 1.13, 95% CI = 1.09, 1.18; P trend < 0.001) and ExQoL (AOR for least versus most sitting group = 1.13, 95% CI = 1.10, 1.17; P trend < 0.001). In fully adjusted models, interactions between physical activity and sitting time were not significant for ExH (P = 0.118) or ExQoL (P = 0.296). Conclusions Physical activity and sitting time are independently associated with excellent health and quality of life in this large diverse sample of Australian middle-aged and older adults. These findings bolster evidence informing health promotion efforts to increase physical activity and decrease sitting time toward the achievement of better population health and the pursuit of successful aging. Background Worldwide, nations are preparing for the demands of an aging population, and this entails dealing with challenges of maintaining health, functional capacity, and wellbeing [1]. Focusing on relevant lifestyle behaviors is an important consideration for preventing or delaying chronic disease and improving health [1–3]. An emerging body of literature indicates that the lifestyle behaviors of physical activity and time spent sitting independently contribute to health outcomes such as chronic disease morbidity and mortality risk [4]. Regularly engaging in moderate-to-vigorous physical activity has been shown to reduce the risk of all-cause mortality, cardiovascular mortality, cancer mortality, stroke, heart disease, breast cancer, colon cancer, and other undesirable health outcomes [5]. Over the past decade, however, research on the health impacts of sedentary behavior (time spent at low levels of energy expenditure while in a sitting posture) has expanded rapidly [4].
High volumes of time spent sitting are associated with an increased risk of all-cause mortality [6–10], cardiovascular disease mortality [8], type 2 diabetes mellitus [11–15], and other diseases or conditions [15–17] when adjusting for participation in moderate-to-vigorous intensity physical activity. Therefore, insufficient moderate-to-vigorous physical activity and sitting time may be distinct influences on poor health. Compared to the abundant literature on risk factors for disease and poor health, research focusing on the influence of physical activity and sitting time on more aspirational health-related outcomes is much less common [5, 18–20]. Successful aging has been described as a multidimensional intersection, where not only is the avoidance of disease and disability found, but also where high cognitive and physical function and engagement with life conjoin [21]. The focus on such aspirational outcomes represents a "salutogenic" approach to health promotion [22], rather than the traditional disease prevention approach. This salutogenic orientation is instructive for determining influences on aspirational levels of health and well-being. Aspirational, positively framed messages may be more effective for motivating healthful behavior in some segments of the population, compared to focusing on the avoidance of chronic disease, which is often an abstract possibility many years away [23]. Despite the aging population and widespread problem of physical inactivity, there has been limited use of successful aging or salutogenic approaches to frame positive health messages toward motivating active lifestyles. This study examines both self-reported health and quality of life status, as they are useful health outcomes and are predictive of more objective health indicators [24]. To investigate whether higher levels of physical activity and lower levels of sitting time were positively associated with excellent health and quality of life, we utilized self-reported data from a large sample of middle-aged and older Australian men and women, and we statistically adjusted for a range of associated covariates and potential confounding variables in the analyses. Methods The 45 and Up Study The 45 and Up Study is a large ongoing Australian prospective cohort study that began with a baseline sample of 267,153 men and women from New South Wales (NSW), the most populous state in Australia. A detailed description of The 45 and Up Study has been published previously [25]. The 45 and Up Study baseline [26] data provide information on a wide range of health-related variables. Participants were randomly sampled from the Medicare Australia (national health insurance) database between February 2006 and December 2008. All adults who were aged 45 years and over and who were currently residing in NSW at the time of recruitment were eligible for inclusion in the Study. Participants who completed a mailed baseline questionnaire and provided their signed consent for participation in the baseline questionnaire and long-term follow-up were included in the Study [25]. The University of NSW Human Research Ethics Committee provided approval for The 45 and Up Study and analysis of the baseline questionnaire data (approval number 05035). The University of Western Sydney Human Research Ethics Committee granted reciprocal institutional ethics approval for use of the baseline questionnaire data in the current study (UWS Protocol number H8793).
Participants Participants were a subgroup (n = 194,545) of the total baseline sample of 267,153 men and women enrolled in The 45 and Up Study as of December 2009 (18% response rate). The 45 and Up Study sample was intended to be a large heterogeneous sample of Australian adults, though not necessarily a true representation of the Australian adult population. The present study's sample included all participants aged 45–106 years with non-missing data on self-rated overall health, quality of life, physical activity, sitting time, and covariates and potential confounding variables (age, household income, educational qualification, smoking status, marital status, weight status, sex, residential remoteness and economic advantage, functional limitation, and number of chronic diseases). Thus, the final sample included 194,545 residents (48% male) of New South Wales, aged 45–106 years (mean ± SD = 61.6 ± 10.7 yrs), from The 45 and Up Study baseline dataset. All participant data, except region of residence (Medicare records), originated from responses to a self-administered paper questionnaire that was completed and returned by postal mail. Physical activity and sitting time The Active Australia Survey (AAS) [27] was used to measure physical activity in The 45 and Up Study baseline questionnaire. This instrument has previously demonstrated acceptable test-retest reliability [28] and validity [29]. On the questionnaire, participants were asked to indicate their participation in three types of physical activity over the previous week – "walking continuously, for at least 10 minutes (for recreation or exercise or to get to or from places)"; "vigorous physical activity (that made you breathe harder or puff and pant, like jogging, cycling, aerobics, competitive tennis, but not household chores or gardening)"; and "moderate physical activity (like gentle swimming, social tennis, vigorous gardening or work around the house)" – by recording the total duration and the total number of times they participated in each [27]. For this study, total minutes spent in the queried physical activities were used to determine physical activity levels, with vigorous physical activity time multiplied by two, for double weighting [27]. In accordance with previous research using this dataset [15], physical activity time was divided into five categories of total minutes per week, as follows: zero mins; low active (1–149 mins); sufficiently active (150–299 mins); highly active (300–539 mins); and very highly active (540+ mins). Total sitting time was determined by asking participants to report total hours per day usually spent sitting. In accordance with previous research arising from the 45 and Up baseline dataset [15], sitting time was divided into four categories of 0 to <4 hours; 4 to <6 hours; 6 to <8 hours; and 8 hours or more of sitting time per day for analysis. Although the reliability and validity of this sitting time questionnaire item have not been formally assessed, the item is analogous to the sitting time item used in the International Physical Activity Questionnaire (IPAQ), shown to have acceptable reliability and validity [30]. Atkin et al. [31] support the use of single-item questionnaires in epidemiological research, when the primary requirements of such research consist of the ability to rank levels of health-related variables within the sample.
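The weighting and categorization just described are easy to make concrete. A minimal sketch, assuming self-reported weekly minutes for each activity type as inputs (function and variable names are ours):

```python
# Active Australia-style weekly physical activity score, as described above:
# total minutes = walking + moderate + 2 x vigorous, then banded into the
# five categories used in this study's analyses.

def weekly_pa_minutes(walk_min: float, moderate_min: float, vigorous_min: float) -> float:
    return walk_min + moderate_min + 2 * vigorous_min  # vigorous double-weighted

def pa_category(total_min: float) -> str:
    if total_min == 0:
        return "zero"
    if total_min < 150:
        return "low active"
    if total_min < 300:
        return "sufficiently active"
    if total_min < 540:
        return "highly active"
    return "very highly active"

total = weekly_pa_minutes(walk_min=90, moderate_min=60, vigorous_min=45)
print(total, pa_category(total))  # 240.0 -> sufficiently active
```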
Self-rated health and quality of life Self-rated overall health was assessed with the following question, "In general, how would you rate your overall health?" Five response options included: excellent; very good; good; fair; or poor. For the current study, self-rated health was dichotomized as excellent or not excellent (including very good; good; fair; and poor). Self-rated quality of life was assessed with the following question, "In general, how would you rate your quality of life?" Five response options included: excellent, very good, good, fair, or poor. Self-rated quality of life was dichotomized as excellent or not excellent. Covariates and potential confounding variables To control potential confounding in analyses, covariates included age group, household income, educational qualification, residential remoteness, residential economic advantage, marital status, smoking status, weight status, number of chronic diseases, and level of functional limitation. Participants indicated their age in years, categorized into five age groups: 45 to 54; 55 to 64; 65 to 74; 75 to 84; 85 and up. Highest educational qualification was categorically self-reported, including: no school certificate; school certificate; high school certificate; trade or apprenticeship; certificate or diploma; or university degree. Participants indicated whether they had ever been a regular smoker; smoking status was dichotomously categorized as "ever" or "never." Marital status was self-reported according to six categories, reduced to a dichotomous variable for analysis as married (including married; de facto/living with a partner) or not married (single; widowed; divorced; separated). Residential remoteness and residential economic advantage were determined based on the mean Accessibility/Remoteness Index of Australia Plus score for the participant's home address postcode. Five residential remoteness categories included: major city, inner regional, outer regional, remote, or very remote area. Four residential economic advantage categories included: least, mid to low, mid to high, and most economic advantage. Annual household income was categorized for analysis as less than $10,000, $10,000-$29,999, $30,000-$49,999, $50,000-$69,999, or $70,000 or more. Weight status was determined from self-reported height and weight to calculate body mass index (kg/m2), using WHO classifications [32] to determine underweight (<18.50 kg/m2), normal weight (18.50–24.99 kg/m2), overweight (25.00–29.99 kg/m2) and obese (≥30.00 kg/m2) categories. Functional limitation status was determined using the Medical Outcomes Study Physical Functioning (MOS-PF) scale, which assesses the extent to which an individual's health limits their ability to perform daily functional activities [33]. The MOS-PF has demonstrated good test-retest reliability and content validity as a measure of physical functioning [34]. Based on a 100-point scale, functional limitation scores were categorized as: no functional limitation (100), minor limitation (95–99), moderate limitation (85–94), or severe limitation (0–84). Participants reported whether they had ever been told by a doctor that they have skin cancer, melanoma, breast cancer, other cancer, heart disease, prostate cancer, enlarged prostate, high blood pressure, stroke, diabetes, blood clot, asthma, hay fever, depression, anxiety, or Parkinson's disease. Chronic diseases were categorized for analysis as: none, one, or two or more chronic diseases.
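As an illustration of the weight-status derivation above, BMI is weight in kilograms divided by height in metres squared, banded into the WHO classes. A short sketch (function and variable names are ours):

```python
# WHO weight-status classification from self-reported height and weight,
# mirroring the categorization described above (BMI in kg/m^2).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m**2

def weight_status(bmi_value: float) -> str:
    if bmi_value < 18.50:
        return "underweight"
    if bmi_value < 25.00:
        return "normal weight"
    if bmi_value < 30.00:
        return "overweight"
    return "obese"

b = bmi(weight_kg=82.0, height_m=1.75)
print(f"BMI {b:.1f} -> {weight_status(b)}")  # BMI 26.8 -> overweight
```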
Statistical methods Data from The 45 and Up Study baseline dataset were analyzed using SPSS 19.0 software (SPSS Inc., Chicago, IL, USA) for both descriptive and inferential statistics. Crude odds ratios (OR) and adjusted odds ratios (AOR) with 95% confidence intervals (CI) were calculated to assess the association between exposures and the outcome variables of excellent health and excellent quality of life using separate binary logistic regression models. Potential confounders were added to the model in groups of demographic and physical health variables, and interaction between physical activity and sitting time was examined via an interaction term. Logistic regression models were mutually adjusted for categories of physical activity or sitting time (model 1); followed by additional adjustment for categories of age, household income, educational qualification, smoking status, marital status, weight status, sex, and remoteness and economic advantage of residential area (model 2); and lastly, a fully adjusted model included additional adjustment for categories of physical limitation and chronic diseases (model 3). To examine the consistency of relationships between active lifestyle variables and health-related outcomes, follow-up logistic regression analyses were used, stratified by age group, sex, household income, weight status, and self-reported ancestry (Australian or not). A final fully adjusted binary logistic regression (model 3) was used to examine excellent self-rated health and quality of life by 20 combination categories of five physical activity and four sitting levels. A significance level of alpha = 0.05 was used for all analyses. Results Approximately 16.5% of the sample reported excellent overall health and 25.7% reported excellent quality of life (Table 1). The unadjusted associations between socio-demographic and lifestyle factors and the outcomes are provided in Table 2. For both excellent quality of life and excellent health, significant bivariate relationships were found for sitting time (inverse), physical activity, sex, marital status, age (inverse), income, education, residential economic advantage, residential remoteness (inverse), smoking (inverse), chronic disease (inverse), functional limitation (inverse), and weight status (inverse relative to normal weight). Table 1 Self-rated health and quality of life status prevalence and active lifestyles in the 45 and Up Study baseline sample (N = 194,545) Full size table Table 2 Bivariate associations between participant characteristics and self-rated health and quality of life status in the 45 and Up Study baseline sample (N = 194,545) Full size table Sitting time Table 3 presents the odds of excellent overall self-rated health and excellent quality of life by categories of sitting time. In model 1, the lowest sitting time category showed significantly higher odds of excellent health (AOR for lowest versus highest category = 1.09; 95% CI = 1.05, 1.13; P trend < 0.001) and excellent quality of life (AOR for lowest versus highest category = 1.07; 95% CI = 1.04, 1.10; P trend < 0.001), relative to the highest sitting time category of ≥8 hours per day. Table 3 Odds of excellent overall health and quality of life by sitting time and physical activity (N = 194,545) Full size table In the fully adjusted model 3, all categories of sitting time displayed significantly higher odds of excellent self-rated health, relative to the category sitting ≥8 hours per day.
The category reporting the lowest amount of sitting was 13% more likely to report excellent health (AOR for lowest versus highest category = 1.13; 95% CI = 1.09, 1.18; P trend < 0.001) compared to those sitting ≥8 hours per day. All categories of sitting time showed higher odds of excellent quality of life, compared with the highest sitting category. The category reporting the least sitting time was 13% more likely to report excellent quality of life (AOR for lowest versus highest category = 1.13; 95% CI = 1.10, 1.17; P trend < 0.001) compared to those sitting ≥8 hours per day. Physical activity Table 3 presents the odds of excellent overall health and excellent quality of life by five categories of physical activity. In model 1, all categories of physical activity above zero minutes showed significantly higher odds of excellent health and quality of life, relative to those reporting zero minutes. The most physically active group was more than four times as likely to report excellent health (AOR for highest versus lowest category = 4.51; 95% CI = 4.08, 4.98; P trend < 0.001) and excellent quality of life (AOR for highest versus lowest category = 4.05; 95% CI = 3.75, 4.38; P trend < 0.001), compared to the least physically active. In the fully adjusted model 3, all categories of physical activity above zero minutes displayed significantly higher odds of excellent health and quality of life, relative to the lowest physical activity category. The most physically active category was twice as likely to report excellent health (AOR for highest versus lowest category = 2.22; 95% CI = 2.00, 2.47; P trend < 0.001) and twice as likely to report excellent quality of life (AOR for highest versus lowest category = 2.30; 95% CI = 2.12, 2.47; P trend < 0.001), compared to the least physically active. Stratified analyses A series of fully adjusted binary logistic regressions (model 3), used to examine variations across demographic variables, is presented in Additional file 1: Tables S1–S4. In age-stratified analyses, the relationship between physical activity and excellent health was strongest for the oldest age group (AOR for highest versus lowest category = 4.54; 95% CI = 1.78, 11.56). For the weight status-stratified analyses, the relationship between physical activity and excellent health was strongest for the underweight group (AOR for highest versus lowest category = 6.60; 95% CI = 1.56, 28.01). For both health and quality of life, across all other strata of age, sex, household income, ancestry, and weight status, adjusted odds ratios for the most physically active versus least active category centered just over 2.0, ranging from 1.58–2.80. For sitting time, adjusted odds ratios for lowest sitting time versus highest centered just over 1.0, ranging from 0.88–1.31 for both health and quality of life outcomes, across all other strata. Interaction of physical activity and sitting time In logistic regression model 1, examining relationships between physical activity, sitting time, and excellent health and quality of life, the physical activity and sitting time interaction terms were significant (P = 0.001 for health; P = 0.003 for quality of life). These interactions were not significant, however, in the fully adjusted models (P = 0.118 for health; P = 0.296 for quality of life). Figure 1 graphically displays the fully adjusted (model 3) odds of excellent health by 20 combinations of physical activity and sitting time.
Figure 1 graphically displays the fully adjusted (model 3) odds of excellent health by 20 combinations of physical activity and sitting time. In this figure, the reference category is the most inactive group: those reporting zero minutes of physical activity and eight or more hours per day of sitting. The most physically active group was nearly three times as likely to report excellent health as the least active group (AOR for very highly active and sitting 0 to <4 hours per day versus zero minutes and sitting ≥8 hours per day = 2.81; 95% CI = 2.33, 3.38; P trend < 0.001). Figure 1 Odds of excellent self-rated health by category of sitting time and physical activity level. N = 194,545. †Reference category is those with the lowest physical activity level who sit 8 or more hours per day. #Model adjusted for categories of age, household income, educational qualification, smoking status, marital status, weight status, sex, remoteness and economic advantage of residential area, functional limitation, and number of chronic diseases. *p < 0.05; all AOR > 1.25 significantly different from 1.00. Figure 2 depicts the fully adjusted (model 3) odds of excellent quality of life by combinations of physical activity and sitting time. The most physically active group was nearly three times as likely to report excellent quality of life as the least active group (AOR for very highly active and sitting 0 to <4 hours per day versus zero minutes of physical activity and sitting ≥8 hours per day = 2.90; 95% CI = 2.52, 3.34; P trend < 0.001). Figure 2 Odds of excellent quality of life by category of sitting time and physical activity level. N = 194,545. †Reference category is those with the lowest physical activity level who sit 8 or more hours per day. #Model adjusted for categories of age, household income, educational qualification, smoking status, marital status, weight status, sex, remoteness and economic advantage of residential area, functional limitation, and number of chronic diseases. *p < 0.05; all AOR > 1.25 significantly different from 1.00. Discussion Stemming from a salutogenic approach and a positive health message framework, we sought to investigate whether higher levels of physical activity and lower levels of sitting time were positively associated with excellent health and quality of life. This study's main finding was that both physical activity and sitting time were independently associated with excellent health and excellent quality of life, with physical activity the stronger influence of the two. These associations were attenuated when controlling for the key demographic influences of age, household income, education, and weight status, but remained statistically significant and likely meaningful for public health. The associations were further attenuated when controlling for the key health-related influences of chronic disease and physical limitation, but remained significant and likely meaningful with regard to the health of this population. Although there was some indication of interaction between physical activity and sitting time, the interactions were not statistically significant in the fully adjusted models. Therefore, the final models for both excellent health and quality of life appear to reflect two independent main effects of physical activity and sitting time. The likelihood of reporting excellent health and quality of life was substantially higher for individuals with an active lifestyle than for their less active counterparts.
Physical activity and sitting time both showed dose-response patterns of association with the odds of excellent health and quality of life when controlling for other key behavioral and environmental influences. In our stratified analyses, the robustness of the relationships between the health-related outcomes and physical activity and sitting time was remarkable. Across most demographic strata, lower levels of sitting time generally showed small positive relationships with the aspirational outcomes of excellent health and quality of life. Physical activity, too, was mostly consistent across strata, emerging as a strong salutogenic factor in this study. These patterns did not hold, however, for adults aged 85 and older (n = 4,230) or for underweight adults (n = 2,404), who constituted small minorities of this sample and who have different profiles of health, quality of life, and chronic disease risk than the majority of the population. Conceptually, self-rated health reflects a global assessment related to functioning and the presence or absence of diseases or symptoms, while health-related quality of life reflects the discrepancy between actual and desired functional status, and the overall impact of health on well-being [24]. Although the proportion of our sample reporting excellent quality of life was about 9% higher than the proportion reporting excellent health, the associations between lifestyle behaviors and these two health-related outcomes were strikingly similar. In most analyses, participants who were highly active or very highly active (300 minutes or more per week) were about twice as likely to report excellent health and excellent quality of life as their least active counterparts. Within the highly active and very highly active categories, the likelihood of excellent health or quality of life was higher for those reporting less sitting time per day, on the order of a 20-30% difference between those sitting most and those sitting least each day at this level of physical activity. Sitting time, however, was clearly a weaker influence than physical activity on the likelihood of excellent health and quality of life. Integration with previous studies The present study's results are similar to those of Davies et al. [18], who examined the associations of physical activity and screen time with health-related quality of life. Their study found that the combination of no physical activity and high screen time (e.g., watching television, a typically sedentary behavior) was related to poorer quality of life in their sample of Australian adults. Our study differed from theirs, however, by focusing on excellent health and quality of life in a larger sample of middle-aged and older adults, and by examining sitting time rather than screen time. We were also able to statistically adjust for additional influences such as functional limitation and residential economic advantage and remoteness. Kerr et al. [35] found that higher physical activity levels were significantly related to greater self-rated health in a sample of American adults aged 66 years and older, but these researchers did not include sitting time in their analyses. Vallance et al. [19], however, recently showed a relationship between sitting and health-related quality of life in men aged 55 years and older when adjusting for physical activity.
In their study, those who reported the least sitting time had better physical, mental, and global health than those who sat the most, but these relationships held only for weekend sitting time. The present study aligns with previous studies showing that sitting time and physical activity are independently related to health-related outcomes when both lifestyle components are studied concurrently [8-10, 15]. Similar to previous studies, our results showed some indication that the influences of physical activity and sitting time may interact [7, 10, 18], but the interaction was not robust and was attenuated to non-significance in the fully adjusted model. The present findings suggest that physical activity and sitting time may each present a potential avenue for increasing the likelihood of excellent health and quality of life, but that the combination of more physical activity and less sitting time may offer the greatest potential for attaining excellent health and quality of life. If such findings are borne out by additional research, positive health messages could be used to motivate populations to adopt more active lifestyles. Although our findings align with much of the previous literature showing better health associated with more physical activity and less sitting time, not all studies have supported such relationships. In particular, Herber-Gast et al. [36] recently reported no association between sitting time and incidence of cardiovascular disease when controlling for physical activity and other relevant demographic and lifestyle factors in a longitudinal analysis of middle-aged Australian women. These authors found no interaction between physical activity and sitting time in relation to cardiovascular disease. The divergence from our findings may be due to their focus on disease rather than health, their focus solely on women, or possibly differing measures of physical activity and sitting time. In work more relevant to the salutogenic model, Vallance et al. [19] similarly did not show an association between sitting time and physical health, mental health, or global health when examining weekday sitting separately from weekend sitting. Trinh and colleagues [37] also reported few significant associations between sitting time and quality of life in cancer survivors. Södergren and colleagues [3] reported that although leisure-time physical activity was associated with good health in adults aged 55-65 years, sitting time was not significantly related to measures of good health. The lack of association in these previous studies may represent unknown moderation by demographic variables, or may stem from residual confounding by physical activity level, age, work type and status, or other relevant influences on sitting time and health. Particularly relevant to our study, the impact of physical activity was stronger than that of sitting time, and controlling for this key lifestyle variable, along with other important potential confounders, in smaller datasets with more limited power or greater variability may partially explain the null findings of previous studies [3]. Our observed fully adjusted odds ratios for sitting time were small in magnitude for the lowest versus highest sitting categories, and associations of this magnitude are unlikely to reach statistical significance in much smaller datasets.
Areas for further study Despite the wealth of research on physical activity and health-related outcomes, the investigation of sitting time has proliferated only recently, and most of this work has focused on associations with chronic disease and mortality. Both physical activity and sitting time are linked with a wide range of health-related outcomes [5, 38]. Both of these active lifestyle components can change energy expenditure, which is related to reduced risk of morbidity and mortality in much of the literature [5, 38]. Yet, as with the wide variety of biological and psychological mechanisms thought to underlie the effects of physical activity, there is likely more to sitting's influence on health and quality of life than energy expenditure alone. Spending long periods in occupational sitting is associated with overall fatigue, musculoskeletal pain, and poor health in interview data from office workers [39]. In the ergonomics literature, sitting is linked to one of the most prevalent chronic conditions, low back pain [40], which is frequently associated with disability [41]. Thus, prolonged daily bouts of sitting may feature prominently in a downward spiral of decreased mobility, physical function, physical fitness, engagement with life, and physical activity, and eventually greater risk of chronic disease [42], but much more work is required to examine these possibilities and their temporal sequence. Beyond the focus on chronic diseases [15], indices of cardiometabolic health [12], and mortality [9], research on active lifestyles built on being physically active and limiting prolonged sitting has been moving into studies of mental wellbeing [43-45], cognitive function [38] and health-related quality of life [3, 19]. More work is required in this area to determine associations with mood, energy level, sexual and neurological functioning, sleep, and activities of daily living, among many other important health-related outcomes. Further prospective studies should investigate active lifestyles and successful aging from a salutogenic approach [22], with a deliberate focus on the components of physical and cognitive functioning and engagement with life, plus avoidance of disease [21]. More evidence from longitudinal prospective research is needed, including the continued use of The 45 and Up Study data as further time points become available. Armed with epidemiological evidence, further work on the etiology and lifestyle determinants of successful aging can include experimental studies, and public health intervention work can examine positive message framing to promote excellent health and quality of life through more active lifestyles. Public health efforts should also address the need for guidelines or recommendations on limiting prolonged sitting, or reducing this sedentary behavior, to accompany physical activity guidelines. Strengths and limitations A major strength of the current study is the use of a very large, heterogeneous sample of Australian adults, with diversity in age, socioeconomic status, residential characteristics, and other demographic and lifestyle influences on health and quality of life. An additional strength of this study was our ability to stratify analyses and statistically control for potential confounding variables that may limit or distort findings from such observational studies.
Lastly, our study was novel in its use of a salutogenic approach to frame positive health messages toward motivating healthful active lifestyle behaviors and successful aging. Against these strengths stands the major limitation of cross-sectional analysis of these baseline data from the ongoing longitudinal 45 and Up Study. Cross-sectional analysis precludes any causal interpretation of the observed relationships. Further studies within the 45 and Up Study and others will be better able to establish the temporality of these associations, particularly as follow-up data become available in the near future. Self-report measures of sitting time and physical activity may be susceptible to various types of bias, but our measures of physical activity and sitting time were sufficient for ranking individuals within an epidemiological dataset for analysis [31]. Also, our measures of overall health and quality of life were self-reported, and therefore inherently subjective. Despite this, patient-reported health status has been shown to be an independent predictor of subsequent mortality, cardiovascular events, hospitalization, and costs of care [24]. Such measures are therefore highly relevant in public health research. Conclusions Physical activity and sitting time are independently associated with excellent health and quality of life in this large, diverse sample of Australian middle-aged and older adults. The present study's findings bolster the evidence informing efforts to increase moderate-to-vigorous physical activity and decrease time spent sitting, toward the achievement of better population health and the pursuit of successful aging. Public health efforts can use the accumulating body of evidence on active lifestyles to develop guidelines or recommendations for sitting, in addition to those for physical activity. Public health efforts could also frame positive health messages and health promotion interventions around achieving higher levels of health and quality of life through moving more and sitting less, to motivate middle-aged and older adults to improve their lifestyles. Abbreviations OR: Odds ratio AOR: Adjusted odds ratio CI: Confidence interval ExH: Excellent self-rated health ExQoL: Excellent self-rated quality of life BMI: Body mass index NSW: New South Wales UWS: University of Western Sydney AAS: Active Australia Survey WHO: World Health Organization MOS-PF: Medical Outcomes Study physical functioning scale IPAQ: International Physical Activity Questionnaire SD: Standard deviation.
People who decrease sitting time and increase physical activity have a lower risk of chronic disease, according to Kansas State University research. Even standing throughout the day—instead of sitting for hours at a time—can improve health and quality of life while reducing the risk for chronic diseases such as cardiovascular disease, diabetes, heart disease, stroke, breast cancer and colon cancer, among others. The researchers—Sara Rosenkranz and Richard Rosenkranz, both assistant professors of human nutrition—studied a sample of 194,545 men and women ages 45 to 106. The data came from the 45 and Up Study, a large Australian study of health and aging. "Not only do people need to be more physically active by walking or doing moderate-to-vigorous physical activity, but they should also be looking at ways to reduce their sitting time," Richard Rosenkranz said. The twofold approach—sitting less and moving more—is key to improving health, the researchers said. People often spend the majority of the day being sedentary and might devote 30 to 60 minutes a day to exercise or physical activity, Sara Rosenkranz said. Taking breaks to stand up or move around can make a difference during long periods of sitting. Sitting for prolonged periods of time—with little muscular contraction occurring—shuts off a molecule called lipoprotein lipase, or LPL, Sara Rosenkranz said. Lipoprotein lipase helps to take in fat or triglycerides and use them for energy. Increasing physical activity and decreasing sitting time can reduce chronic disease and make aging easier, according to Kansas State University research. Credit: Kansas State University "We're basically telling our bodies to shut down the processes that help to stimulate metabolism throughout the day and that is not good," Sara Rosenkranz said. "Just by breaking up your sedentary time, we can actually upregulate that process in the body." In a previous study published in the International Journal of Behavioral Nutrition and Physical Activity, the researchers found that the more people sit, the greater their chances of obesity, diabetes, cardiovascular disease and overall mortality. For the more recent study, the researchers wanted to take a positive approach and see if increasing physical activity helped to increase health and quality of life. The researchers want to motivate people—especially younger people—to sit less and move more so they can age more easily with less chronic disease. "There is only so far that messages about avoiding diseases can go, especially when talking about chronic disease because it is so far removed and in the future," Richard Rosenkranz said. "For young people, being motivated by avoiding diseases is probably not the most pressing matter in their lives. We wanted to look at excellent health and excellent quality of life as things to aspire to in health." To help office workers and employees who often sit for long periods of time, the researchers suggest trying a sit/stand desk as a way to decrease sedentary time and add physical activity into the day. A sit/stand desk or workstation can adjust up and down so employees can add more standing time to their days. There are even sit/stand desks for children to stand and do homework or projects. The research appears in the journal BMC Public Health.
Collaborators included Gregory Kolt of the School of Science and Health at the University of Western Sydney in Sydney, Australia, and Mitch Duncan of the Institute for Health and Social Science Research with the Centre for Physical Activity Studies at Central Queensland University in Rockhampton, Australia. While the researchers have used existing data for this latest study, the Rosenkranzes are now conducting experiments to manipulate sitting time in already active people. They want to understand how increased sitting time affects physiological risk factors such as blood pressure, body composition, triglyceride and cholesterol levels, inflammation and oxidative stress.
www.biomedcentral.com/1471-2458/13/1071
Biology
Bees grooming each other can boost colony immunity
Scientific Reports (2020). DOI: 10.1038/s41598-020-65780-w Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-65780-w
https://phys.org/news/2020-06-bees-grooming-boost-colony-immunity.html
Abstract The significant risk of disease transmission has selected for effective immune-defense strategies in insect societies. Division of labour, with individuals specialized in immunity-related tasks, strongly contributes to preventing the spread of diseases. A trade-off, however, may exist between phenotypic specialization to increase task efficiency and maintenance of plasticity to cope with variable colony demands. We investigated the extent of phenotypic specialization associated with a specific task using allogrooming in the honeybee, Apis mellifera, a worker behaviour that might lower ectoparasite load. We adopted an integrated approach to characterize the behavioural and physiological phenotype of allogroomers, analyzing their behaviour (at both the individual and the social network level), their immunocompetence (bacterial clearance tests) and their chemosensory specialization (proteomics of olfactory organs). We found that allogroomers have higher immune capacity than control bees, while they do not differ in chemosensory proteomic profiles. Behaviourally, they do not differ in the tasks performed (other than allogrooming), while they clearly differ in connectivity within the colonial social network, having higher centrality than control bees. This demonstrates the presence of an immune-specific physiological and social behavioural specialization in individuals involved in a social immunity-related task, thus linking individual to social immunity, and it shows how phenotypes may be specialized for the task performed while maintaining an overall plasticity. Introduction Division of labour is defined as the pattern of specialization by cooperative individuals of a social group, which perform different tasks or assume specific roles depending on their morphology (polyphenism) or behaviour (polyethism) 1, 2. These task specializations often come with a suite of behavioural and physiological correlates, some of which are specific phenotypic specializations that increase aptitude for, and efficiency of, the task performed and/or decrease its costs 3, 4, 5, 6. However, the phenotypic specialization associated with division of labour is expected to be under contrasting selective forces: a colony might benefit from having sets of workers with highly specialized phenotypes, highly efficient and apt to perform the appointed task; at the same time, specializing may limit task flexibility, thereby reducing performance of other tasks when needed 7, 8. Thus, division of labour should retain an adequate degree of flexibility to allow the colony to rapidly reallocate its resources in response to environmental demands 9, 10. Understanding the degree of phenotypic specialization in groups of workers performing specific tasks has the potential to unravel the trade-off between phenotypic plasticity and specialization. Allogrooming 11 is a behaviour in which a worker uses its mouthparts to remove debris from the body of other colony members. This behaviour, observed in several species of eusocial insects, plays a role in defence against parasites and pathogens 12, 13, 14, 15. In honeybees, allogrooming represents an important resistance mechanism that seems to limit ectoparasite load, especially mites, within colonies 16, 17, 18, 19, 20, and its expression depends on genetic and environmental factors 21, 22.
In Apis cerana, allogrooming is performed at a high rate and appears to be a particularly effective counter-adaptation against the parasitic mite Varroa destructor, the major worldwide threat to honey bee colonies and apiculture 16, 17, 20, 23. The relatively recent spread of this parasite in many countries has heavily impacted Apis mellifera colonies 23, but effective strategies to deal with this emergency are still lacking 23, 24, 25. Because allogrooming is expressed effectively against V. destructor in A. cerana, its characterization in A. mellifera may clarify whether it could be effective in controlling parasite load at the colony level. Despite the potential value of allogrooming in maximizing colony resistance to parasites and disease transmission, very little attention has been given to the behavioural and physiological specializations of allogrooming at the individual level. Thus, the degree of specialization of individuals performing this task is still unclear, and the behavioural and physiological correlates of allogroomers are largely unknown. Here, we investigated the degree of phenotypic specialization of allogroomers in the honeybee A. mellifera. Empirical evidence reported in the literature about the temporal expression of allogrooming and the degree of behavioural specialization in individuals performing this task is conflicting. According to some authors, in A. mellifera this behaviour is temporally restricted to the 1st to the 20th day post-emergence 26, while other researchers observed workers performing allogrooming throughout their entire life 27, 28. We thus characterized the temporal and spatial dynamics of allogrooming occurrence. To assess the timing of allogrooming expression, we performed detailed behavioural observations on a large cohort of workers across their lifespan inside the hive (until the onset of foraging activities). We then characterized the spatial occurrence of allogrooming events on the comb surface, in order to test whether they are randomly distributed or clustered in specific areas. Moreover, we investigated the degree of behavioural specialization of allogroomers, focusing on their individual behavioural profile (i.e., the array of tasks performed) and on the role they play in the colony social network. Indeed, it is not clear to what extent specialization in allogrooming implies a different behavioural repertoire in allogroomers compared with non-grooming nestmates of the same age range. Previous studies showed that allogroomers are specialized individuals performing the behaviour at a consistently higher frequency than other tasks typical of same age range workers 29, while others reported that this behaviour is infrequently expressed even by the few individuals performing it, which also carry out all the tasks typical of their age 30. We thus compared the behavioural repertoire of allogroomers with that of same age range non-grooming bees, predicting that, in case of specialization, allogroomer workers would show reduced performance of the other in-hive tasks (prediction 1). In A. mellifera colonies, individuals have been shown to occupy different positions (more or less central, i.e., more or less connected) within the colony social network, according to their caste, age and task 31, producing a compartmentalized structure that likely reduces disease transmission 32. If allogroomers are behaviourally specialized, we expect their position within the colony social network to differ from that of same age range non-grooming bees.
In particular, since this behaviour is expected to be most advantageous if directed towards many bees within the hive, we might predict allogroomers to be more central in the colony network than same age range non-grooming bees (prediction 2). It would be advantageous for allogroomers to be able to detect nestmates needing to be groomed. Since many stressors, including pathogens and parasites, alter the odour of workers in A. mellifera 33, 34, 35, an enhanced perception of such chemical cues could contribute to a possibly specialized phenotype of allogroomers. Indeed, antennae play a key role in the expression of hygienic behaviours, as recently demonstrated by a transcriptomic study of Varroa-sensitive hygienic bees 36. Proteomic investigation has also shown that honeybee individuals performing different tasks differ in the antennal profile of proteins involved in olfaction, with soluble olfactory proteins, which play a crucial role in the first steps of odour recognition, being differentially expressed 37. Among these proteins, two Odorant Binding Proteins (OBPs) have been reported as biomarkers linked to social immunity and shown to have good affinity for ligands released by decaying insect corpses 38. A further soluble olfactory protein, belonging to the Chemosensory Protein family, has been found to be associated with grooming behaviour 39. In these two studies the authors examined the antennal proteomic profiles of honeybees selected and tested for hygienic behaviours, while, to the best of our knowledge, no study so far has directly compared the expression of proteins involved in olfaction in bees performing allogrooming. We thus investigated the expression of olfactory proteins in the antennae, predicting that any degree of chemosensory specialization in allogroomers would be reflected in a different antennal proteomic profile (prediction 3). Performing allogrooming is likely to increase the risk for allogroomers of coming into contact with pathogens and parasites. Allogroomers would therefore benefit from increased immunocompetence compared with same age range workers performing different tasks, in order to cope with a higher risk of infection. We tested this prediction by comparing immunocompetence between allogroomers and non-grooming workers of the same age range, predicting a higher level of immunocompetence in the former (prediction 4). Overall, our work characterizes for the first time the allogroomer phenotype through an integrated approach encompassing behavioural observations, proteomics and immunoassays (Fig. 1). Figure 1 The experimental design of the study, illustrating sample collection and the predictions tested (g = allogroomers, n-g = non-allogroomers). Results Temporal and spatial dynamics of allogrooming expression Occurrence of allogrooming clearly varies with bee age (Fig. 2a), occurring within an age range of 3 to 15 days and being especially common in the range of 6 to 11 days, within which 76% of allogrooming events were observed. The age dependency of this temporal trend is supported by a Runs test, which shows a significant departure from randomness (Runs test, number of runs = 3, p = 0.013). Furthermore, the fraction of allogroomers seen grooming at least once, out of the total number of marked bees, varies with bee age, showing the same 3-to-15-day range of grooming expression, with most allogroomers (78%) aged between 6 and 11 days. The percentage of marked bees that performed allogrooming at least once was 1.50% over the entire observation period and rose to 4.30% in the peak range (6 to 11 days). The majority of allogroomers (57.1%) were observed performing allogrooming only once.
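The departure from temporal randomness reported above can be checked with a Wald-Wolfowitz runs test. The following is a minimal sketch under the assumption that the per-age-interval allogrooming counts are dichotomized around their median; the example counts are hypothetical, not the study's data.

# Minimal Wald-Wolfowitz runs test; the cutoff (upper median) and the
# toy input sequence are assumptions for illustration.
import math

def runs_test(values):
    cutoff = sorted(values)[len(values) // 2]          # upper median as cutoff
    signs = [v > cutoff for v in values]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1                   # expected runs
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - mu) / math.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return runs, z, p

# Hypothetical allogrooming counts per observation day:
print(runs_test([0, 1, 5, 9, 12, 10, 8, 3, 1, 0, 0, 0]))
# -> (3 runs, z ~ -2.4, p ~ 0.02) for this toy sequence

A single peak of activity in mid-life produces few runs relative to chance, which is what drives the significant departure from randomness.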
Figure 2 Temporal and spatial dynamics of allogrooming. (a) Expression of allogrooming is age-dependent; on average, 105 marked bees were observed for each age interval. (b) Allogrooming is not randomly expressed over the whole comb surface; the plot reports the Monte Carlo estimates (MCE) of observed K(r) versus expected values of Ripley's K-function as a function of the distance among allogrooming acts (r); solid black line = estimated K(r), dashed line = theoretical K(r) under complete spatial randomness for the same number of observations, grey-shaded area = estimates of the potential variability in K(r) generated by MCE with n = 999 simulations; 1065 allogrooming events were observed in total. We observed 1065 allogrooming events in total (466, 305 and 294 on the three combs, respectively). The spatial distribution of allogrooming strongly deviated from complete randomness, being more clustered than expected (Fig. 2b). In particular, allogrooming events preferentially took place in the area of the comb where brood was present, opposite the hive entrance and the dance area. Prediction 1. Allogroomers do not show a specific behavioural repertoire Repertoire size (the number of different tasks a bee performed at least once) did not differ between allogroomers and same age range non-grooming bees (F = 0.102, df = 1, 113, p = 0.750; 53 allogroomers vs 62 non-grooming bees) (Fig. 3a). Allogroomers and same age range non-grooming bees did not differ in performance rate (the number of observation scans in which the focal bee performed a given behaviour) for any of the behavioural tasks considered (Table 1, Fig. 3b), except, unsurprisingly, for the defining behaviour of allogrooming, which was expressed at a significantly higher, albeit still rather low, frequency by allogroomers (Table 1; median and interquartile range: allogroomers, 0.03, 0; same age range non-grooming bees, 0, 0). Moreover, overall activity rates (the number of scans during which the bee was performing any behaviour other than inactivity) did not differ between allogroomers and same age range non-grooming bees (F = 0.003, df = 1, 113, p = 0.958). Figure 3 Allogroomers do not show a specific behavioural repertoire. Comparisons between groomers (g, n = 53) and non-groomers (n-g, n = 62) in repertoire size (a) and in the performance rate of five behaviours (b) (medians and quartiles are represented); for abbreviations and statistical results see Table 1. Only behaviours with an average performance rate higher than 0.05 are reported. ns = non-significant difference. Table 1 Behaviours considered in describing the behavioural repertoire of same age range allogrooming and non-grooming bees. Prediction 2. Allogroomers are more central in the colonial social network Allogroomers (n = 140) showed higher degree centrality (defined as the sum of the strengths of all ties connected to a node) than same age range non-grooming bees (n = 711) (Chi-square = 12.452, df = 1, p < 0.001), while they did not differ in betweenness (the total number of shortest paths between pairs of nodes that pass through the considered node) (Chi-square = 2.719, df = 1, p = 0.099) (Fig. 4).
While colony of origin had a significant effect on both degree (Chi-square = 22.132, df = 1, p < 0.001) and betweenness (Chi-square = 316.401, df = 1, p < 0.001), the interaction between colony of origin and bee category had a significant effect on neither degree (Chi-square = 0.557, df = 1, p = 0.456) nor betweenness centrality (Chi-square = 0.032, df = 1, p = 0.858). Figure 4 Allogroomers are more central in the colonial social network. Mean and standard error values of centrality (weighted outdegree, the strength of all ties connected to a node; left) and betweenness (the total number of shortest paths between pairs of nodes that pass through the considered node; right) for allogroomers (g, n = 140) and non-groomer bees (n-g, n = 711). ns = non-significant difference. Prediction 3. Allogroomers do not have differential expression of antennal olfactory proteins compared to same age range non-grooming bees Overall, the "shotgun" approach (i.e., direct digestion of the entire protein extract without a prior separative step) applied to the crude antennal extracts of same age range non-grooming bees and allogroomers identified 482 and 484 proteins, respectively. The relatively low number of identified proteins reflects the sensitivity of the available equipment compared with that used in recent studies of insect antennal proteomics. The global distribution of the identified proteins and their expression levels are very similar between the two groups (Fig. S1; five biological replicates per group). This high degree of overlap is also reflected in the numbers of proteins belonging to each gene ontology (GO) category, both for molecular function and for biological process, as well as to each Pfam and InterPro annotation. Beyond the global expression pattern of antennal proteins, our primary aim was to understand whether allogroomers could have a chemosensory specialization based on a different profile of olfactory proteins with respect to same age range non-grooming workers. Among olfactory proteins we identified 12 Odorant Binding Proteins (OBPs), two Chemosensory Proteins (CSPs), and two Niemann-Pick type C2 (NPC-2) proteins. OBPs and CSPs are two broad families of insect proteins involved in the first steps of odour recognition 40, 41, transporting hydrophobic odorants through the sensillar lymph to the odorant receptors (ORs) borne on the olfactory neurons 42. A similar role, although less documented, has recently been suggested for NPC-2 proteins. None of the soluble olfactory proteins was significantly more abundant in allogroomers or in same age range non-grooming workers, including CSP3, which had been reported to be associated with grooming by Guarna and coworkers 38, 39 in experimental colonies selected for hygienic behaviour. Therefore, we found no evidence that allogroomers are specialized in the perireceptor events of olfaction (Fig. 5, Fig. S1). Prophenoloxidase, an important component of the insect innate immune response also reported to be associated with grooming behaviour 39, was identified, but no differences in abundance were found in our specimens. Figure 5 Allogroomers do not have differential expression of antennal olfactory proteins compared to same age range non-grooming bees.
Bar charts showing Log2 LFQ (label-free quantification) intensities of the identified soluble olfactory proteins (Odorant Binding Proteins, OBP; Chemosensory Proteins, CSP; Niemann-Pick type C2, NPC-2), averaged among the biological replicates of groomers (dark grey bars) and non-groomer bees (light grey bars), respectively. For each sample the standard error is also reported. No significant differences were found. No quantitative differences in antennal protein expression between allogroomers and same age range non-grooming workers were found using the Mann-Whitney test with a Benjamini-Hochberg correction (FDR ≤ 0.01). Prediction 4. Allogroomers have higher immunocompetence compared to same age range non-grooming workers Category (allogroomers vs same age range non-grooming workers; n = 108 and n = 127, respectively) had a significant effect on immunocompetence (Wald chi-square = 7.752, df = 3, p = 0.005), with allogroomers being better able to clear viable bacterial cells from their haemolymph than same age range non-grooming workers (means ± SE; allogroomers: 103.19 ± 10.47; non-grooming workers: 158.56 ± 18.43) (Fig. 6). Hive of origin did not have a significant effect (Wald chi-square = 2.649, df = 3, p = 0.449), and there was no significant interaction between hive of origin and category (Wald chi-square = 0.941, df = 3, p = 0.816). No colony-forming units (CFUs) were detected in workers of either category injected with PBS only. Figure 6 Allogroomers have higher immunocompetence compared to same age range non-grooming workers. Raw numbers of colony-forming units (medians and quartiles are represented) detected on the agar surface after plating 100 µl of homogenate dilutions, in allogroomers (g) and non-groomer bees (n-g). Dots represent individual values. Discussion Through an integrated approach involving behavioural observations and physiological assays, we here demonstrate that allogrooming in the honeybee Apis mellifera is a transient behavioural specialization, mirrored by increased immunocompetence but without a chemosensory specialization in terms of differential abundance of chemosensory proteins in the antennae. Our observations clearly showed that allogrooming is age-dependent, with a peak of expression between 6 and 11 days of worker age, and that its expression is not randomly distributed over the comb surface but rather clustered in a restricted area opposite the hive entrance and the dance area. Moreover, our results suggest that allogrooming is a weak specialization. The behavioural repertoire of allogroomers and age-matched non-groomers is quite similar: apart from the defining activity of allogrooming, allogroomers perform the same set of tasks expected on the basis of their age polyethism, and at similar frequencies, compared with same age range non-grooming bees. Our results differ from some previous findings 29, 43, 44 in which individual bees performed allogrooming at significant rates for their entire life, without following the typical polyethism path 45, 46. A possible explanation is that variation in allogrooming expression also exists within the category of allogroomers themselves, with a small percentage of individuals showing hyper-specialization in this task, as already suggested 27. Our results suggest, however, that if this is the case, such bees nonetheless represent a tiny percentage of the allogroomers in the colony.
Overall, our behavioural investigation, which for the first time specifically compared a large sample of allogroomers and age-matched non-grooming bees, allows us to characterize allogrooming as a transient and weak behavioural specialization. The finding that allogroomers maintain behavioural plasticity despite their specific task is indeed not puzzling in view of colony task organization 47. In fact, previous work has shown that worker temporal polyethism is highly flexible, responding to both genetic influences and colony social requirements 45, 47, 48, 49, 50, 51. Although the individual behaviour of allogroomers in terms of task performance is similar to that of same age range non-grooming workers, the two categories show significant differences in their position within the colony social network, suggesting a spatial specialization in allogroomers. Allogroomers are more connected, i.e., have higher network centrality, than same age range non-grooming workers, which translates into contacting a higher number of colony mates. From a proximate perspective, this might be due to a different use of space by groomers. Individual network position has been shown to depend on spatial behaviour in bees 31. While our results show that allogroomers are no more active than non-groomers, they might have lower fidelity to specific parts of the comb and/or move faster across the comb. From an ultimate perspective, we might speculate that higher centrality could be beneficial as it would allow allogroomers to screen a higher number of colony members for parasites. Interestingly, the possible increased costs of such higher network centrality (increased exposure to pathogens 32) might be reduced in allogroomers thanks to their increased immune ability (our results). The peculiar task of allogroomers might be supported by differential chemosensory abilities that could help in identifying unhealthy individuals, allowing allogrooming to be directed towards the most suitable targets inside the colony (prediction 3). Indeed, Guarna and colleagues 39 reported that Chemosensory Protein 3 is linked to grooming behaviour in bees from colonies selected and tested for hygienic behaviours. Our results, however, do not support this hypothesis, since at least for soluble olfactory proteins no differences were found when directly comparing allogroomers with same age range non-grooming workers. There are two possible explanations for this finding. First, allogroomers might not need a chemosensory specialization. Bees in need of being groomed can be recognized through other channels, such as the tactile one, or by their behaviour. Indeed, grooming is sometimes solicited through the so-called "grooming invitation dance", whereby bees shake their whole body from side to side, producing specific vibrations that increase the probability of being groomed by a nestmate 52. Moreover, allogrooming might also be performed on individuals of specific age classes rather than on parasitized individuals, and recognition of these individuals may not require specialized olfaction. Alternatively, we should also recognize that chemosensory abilities do not depend only on the expression of olfactory proteins in the antennae, as the process of olfactory perception extends well beyond the binding of odorant molecules at the level of the chemosensillar lymph, up to integration in the central nervous system.
The absence of substantial differences at the antennal proteome level is thus not, per se, definitive proof of the absence of chemosensory specialization. Future studies are needed to assess the perceptive abilities of allogroomers at different levels, using both different (such as proboscis extension reflex and electroantennography) and more sensitive techniques. The picture is clearer in bee colonies selected for hygienic behaviour 38, in which seven proteins were more expressed in antennae and were considered by the authors as protein biomarkers for this specific selective breeding. While the antennal proteome is not peculiar in allogroomers, a clear and striking difference in immunocompetence was found between allogroomers and same age range non-grooming workers, with the former being more efficient at clearing bacterial cells from their haemolymph. Even if the bacterial clearance bioassay represents just one of the possible methods for assessing immunocompetence, this technique provides an integrative view of the activation of the organism's immune system 53, 54 (and see the Material and Methods section), and the result is thus of great significance. Interestingly, the difference in immunocompetence is even larger than that previously found when nurse bees and foragers were compared 48, 54. This is particularly intriguing, as allogroomers and non-groomers are of the same age and differ only slightly in their behaviour, while nurse bees and foragers differ greatly in age, physiology and behaviour. Among honeybee workers, social interactions appear to increase towards nestmates showing signs of potential infection 55, 56. Grooming parasitized or sick nestmates to free them from parasites, debris, spores or other infective agents might increase the risk of becoming infected. Therefore, the enhanced individual immunity shown by allogroomers might help them carry out their immunity-related task inside the hive. Our study did not clarify whether the higher immune ability of groomers is part of their specialization toolkit, as is the case for other physiological specializations in bees, such as increased juvenile hormone levels in guard bees 57, 58, or rather a consequence of the increased exposure to pathogens due to their increased network centrality. Future studies should address this issue, possibly by directly manipulating individual exposure to pathogens and through non-lethal sampling of haemolymph, in order to follow the ontogenetic development of immune ability in allogroomers and non-groomers. Moreover, future studies should also investigate the identity of the bees receiving allogrooming. Our finding that the area where allogrooming occurs is opposite the hive entrance and the dance area suggests that allogrooming is not directed preferentially towards foragers. However, ad hoc studies are needed to clarify the identity of the receivers of allogrooming acts. A possible limitation of our study is that, in order to obtain a relatively large number of allogroomers, we were not able to continuously follow individual worker behaviour for a long period. This could have led us to include in the sampling a mix of bees with variable levels of specialization in allogrooming. For example, we cannot exclude that proteomic differences might emerge if only hyper-specialized groomers were compared with groups of bees that had never shown allogrooming in their whole life (a possibility precluded by our non-continuous behavioural sampling).
However, our approach allowed us to show that, under our working definition of allogroomers and non-groomers, and given our sampling scheme, striking differences between the two groups emerge in some phenotypic aspects of both behaviour (position within the social network) and physiology (immunocompetence), but not in others (task performance and antennal proteomic profile). The implications of our study are twofold. First, we provide evidence to better understand the degree of specialization of allogrooming in A. mellifera, which appears to be a weak and transient behavioural specialization characterized by markedly enhanced immunity. Since the expression rate of the behaviour also appears limited at the colony level, in terms of the number of individuals performing it relative to the total number of individuals, we may suppose that allogrooming is expressed in A. mellifera but not to an extent that could effectively counteract the parasite pressure of Varroa destructor 59, 60, 61. Although this evidence might be discouraging, strategies to increase the overall colony rate of allogrooming in order to cope with this highly damaging pest might still be considered reasonable. The second insight from our results is that the physiological specialization of allogroomers is specifically related to the immune phenotype. Our results indicate an enhanced immune response in allogroomers compared with same age range non-grooming nestmates, as well as a different position of allogroomers in the colonial social network, in agreement with predictions from organizational immunity 31. Overall, our results demonstrate the presence of an immune-specific physiological and social behavioural specialization in individuals involved in social immunity-related tasks, thus linking individual to social immunity. They also suggest that division of labour might lead to physiological specialization narrowly tailored to the task performed, while maintaining an overall plasticity. Material and Methods Insect collection, rearing and general procedures Experiments were conducted between June and August, when expression of allogrooming is higher 22 (and pers. obs.), in 2014 (observations for individual behaviour characterization), 2015 (antennal proteomics and bacterial challenge) and 2016 (behavioural observations for social network analysis). All studies were performed using standard one-frame observation hives maintained in the laboratory (Department of Biology, University of Florence), where bees were free to forage outside. Observation combs were taken from colonies belonging to a local beekeeper (Apicoltura Cristofori Mauro). We screened 15 colonies each year in order to identify colonies with significant allogrooming rates (at least 30 allogrooming acts per comb during 30 min of observation). From each selected colony we took a comb containing stored honey and pollen, open and sealed brood cells, the queen and circa 2000 workers, and transferred it to the observation frame. Experiments were performed on four observation hives for the behavioural experiments (two for prediction 1 and two for prediction 2), three hives for antennal proteomics (prediction 3) and four observation hives for the bacterial challenge (prediction 4). Overall, observation combs came from a total of 11 colonies. Individual bee marking procedure To obtain house bees of known age for the experiments, we collected newly emerged bees directly from the comb by gently removing them with forceps.
Bees were (a) individually marked with plastic coloured numbered tags on the thorax (predictions 1 and 2, for which we needed to follow individual behaviour over several days) or (b) marked with a spot of UniPosca® paint marker on the thorax, using different colours according to the day of collection and hive of origin (predictions 3 and 4, where individual age and colony of origin were sufficient information, as identified bees were removed from the comb as soon as they performed a grooming act). Marked bees were gathered in plastic cylindrical containers (Ø 10 cm × h 10 cm) and lightly dusted with icing sugar before being gently reintroduced into their natal colonies within a few hours, to favour acceptance by older nestmates. More than 200 newly emerged workers per observation comb were marked for testing predictions 1 and 2, and more than 100 per comb for testing predictions 3 and 4 (approximately 1500 bees were marked in total). Temporal and spatial dynamics of allogrooming expression We evaluated the temporal dynamics of allogrooming expression along individual worker lifespans using an all-occurrences sampling method focused on allogrooming 62. Starting the day after reintroduction of the newly emerged marked bees, and until day 25 post-emergence (the average life expectancy during summer 63), we counted all allogrooming events (Supplementary Video 1) performed by marked bees, observing combs every odd day for 30 min on each side. A total of 12 h of observation was performed on each hive. Departure from randomness of the temporal expression of allogrooming (the number of bees performing an act of allogrooming out of the total number of marked bees) was tested with a Runs test. On average, 104.60 ± 19.31 marked bees were observed for each age interval. We evaluated the spatial occurrence of allogrooming expression on the comb surface using an all-occurrences sampling method, as above 62. We recorded the spatial coordinates of all allogrooming events performed by any bee on three observation combs, alternating 30-min observations of each comb side. A total of about 15 h of observation was performed (5.16 h on average per hive). For each allogrooming act, the position was precisely marked on a transparent acetate sheet overlaid on the glass surface of the observation frame. Once observations were collected, their coordinates were pooled across the three combs, and departure from complete spatial randomness in the distribution of allogrooming expression was tested by computing the Kest routine (Ripley's K-function) and simulating 99% significance envelopes using the envelope function of the spatstat package in R 64. Observations took place during the central hours of the day (between 11 am and 4 pm) 21.
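As a rough Python analogue of this spatstat analysis (an assumed re-implementation without spatstat's edge corrections, using toy data in place of the observed coordinates and assumed comb dimensions), Ripley's K can be estimated and compared against Monte Carlo envelopes simulated under complete spatial randomness (CSR):

# Assumed re-implementation of the Ripley's K / envelope analysis;
# not the authors' R code. No edge correction, for brevity.
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K: K(r) = A/(n(n-1)) * sum over i != j of 1[d_ij <= r]."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-pairs
    return np.array([area * (d <= r).sum() / (n * (n - 1)) for r in radii])

def csr_envelope(n, radii, width, height, n_sim=999, alpha=0.01, seed=0):
    """Monte Carlo envelope of K(r) under CSR on a width x height rectangle."""
    rng = np.random.default_rng(seed)
    sims = np.array([
        ripley_k(rng.uniform((0.0, 0.0), (width, height), size=(n, 2)),
                 radii, width * height)
        for _ in range(n_sim)
    ])
    return (np.quantile(sims, alpha / 2, axis=0),
            np.quantile(sims, 1 - alpha / 2, axis=0))

# Toy clustered pattern standing in for the observed comb coordinates;
# the comb dimensions (in metres) are assumptions, not from the paper.
rng = np.random.default_rng(1)
pts = np.clip(rng.normal((0.30, 0.13), 0.03, size=(200, 2)), 0, (0.43, 0.26))
radii = np.linspace(0.005, 0.10, 20)
k_obs = ripley_k(pts, radii, area=0.43 * 0.26)
low, high = csr_envelope(len(pts), radii, 0.43, 0.26, n_sim=99)
print("clustering detected at radii:", radii[k_obs > high])

An observed K(r) rising above the upper envelope indicates clustering at that spatial scale, which is the pattern reported for allogrooming acts in Fig. 2b.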
Prediction 1. Allogroomers show a specific behavioural repertoire We determined the behavioural repertoire of allogroomers and same age range non-grooming bees using focal animal sampling 62. Every marked bee observed performing an act of allogrooming (Supplementary Video 1) continuously for at least 30 s during the morning observation was followed continuously for 10 min during the afternoon of the same day (between 2 and 5 pm). Each ten-minute observation period was divided into 20 intervals of 30 s, and for each interval the main behaviour performed by the focal bee was recorded, thus yielding 20 observation scans for each focal groomer. During each ten-minute period only a single (focal) bee was observed. The same procedure was applied to non-grooming bees, i.e., marked bees of the same age range as the focal allogroomers that had never been observed performing acts of allogrooming on the same or previous days. If bees chosen as same age range non-groomers were later observed performing allogrooming, they were removed from the dataset. This procedure allowed comparison of behavioural specialization between allogroomers and same age range non-grooming bees at each age. Overall, we obtained 53 focal allogroomers (first hive, n = 26; second hive, n = 27) and 62 focal same age range non-grooming bees (first hive, n = 26; second hive, n = 36), ranging in age between 4 and 15 days post-emergence. The age distribution did not differ between the two groups (Mann-Whitney test, U = 1414, p = 0.201, N = 53 vs 62). For each bee we calculated: (a) the behavioural repertoire size, as the number of different tasks (listed in Table 1) performed at least once; this measure has been used as a proxy of behavioural plasticity, i.e., the opposite of behavioural specialization 65, 66; and (b) the performance rate of each behavioural item, as the number of observation scans in which the focal bee performed that behaviour. As many behavioural items were performed very rarely, we pooled them into broader categories: social interactions included antennation, trophallaxis and waggle dance; brood care included larval inspection, larval care and ventilation; and the category "other" included comb building and related activities. Moreover, we controlled for possible differences in the behavioural repertoire due to different activity rates by comparing activity rate (the number of scans during which the bee was performing any behaviour other than inactivity) between allogroomers and same age range non-grooming bees. Behavioural repertoire size and performance rates for each behavioural item were compared between the two groups using a GLMM with bee category (allogroomers or non-allogroomers) as a fixed factor, hive of origin as a random factor, and their interaction. We used a negative binomial distribution for all comparisons except for three behavioural categories (allogrooming, external activity and other activities), which were performed so rarely that we dichotomized the data and used a binomial logistic distribution. Statistical analyses were performed with SPSS 16.0 (SPSS Inc., Chicago, IL) and Past v3.20 67. Prediction 2. Allogroomers are more central in the colonial social network An association network was built on the basis of the spatial proximity of bees, following the protocol in Baracchi and Cini 31. We considered two bees to be interacting when they were at a distance shorter than the length of a bee body (approximately less than 3 cells). Interactions among marked bees were recorded from each observation colony by taking photos over a period of four days. 25 photos were taken on both sides every hour during the central hours of the day (approximately 11 am-4 pm). To define the association network, each individual bee was considered a node and an interaction between two individuals was taken as an edge between the two corresponding nodes, resulting in a weighted, undirected network. The following measures of centrality were calculated for each node: degree and betweenness 68. Degree is defined as the sum of the strengths of all ties connected to a node; betweenness is a centrality measure defined as the total number of shortest paths between pairs of nodes that pass through the considered node. It is thus an index of how much a node links otherwise separate parts of the network. We focused on these two measures as they capture different aspects of network centrality: the size of a bee's social neighbourhood, i.e., its potential to groom many bees (degree), and its potential to influence the passage of pathogens through the network (betweenness). Both measures have been shown to differ according to age and task in A. mellifera 31. Centrality measures were computed using Ucinet 6, and differences in centrality measures between allogroomers and same age range non-grooming bees were assessed using a generalized linear model (GLZ) with a negative binomial distribution and log-link function, with 10000 permutations to take into account the non-independence of network data. The full factorial model included hive of origin as a random factor and category (allogroomers, n = 140; same age range non-grooming bees, n = 711) as a fixed factor, as well as their interaction. Statistical analyses were performed with Ucinet 6 69.
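The original centrality analysis used Ucinet 6; purely for illustration, a sketch of the same pipeline (proximity-based weighted network, then degree strength and betweenness) in Python/networkx, with hypothetical toy positions, might look like this:

# Assumed networkx sketch of the association-network construction;
# bee IDs and positions are toy data, not the study's observations.
import itertools
import math
import networkx as nx

BODY_LENGTH = 3  # proximity threshold (~3 comb cells), as in the protocol

def build_network(photos):
    """photos: one {bee_id: (x, y)} dict of cell positions per photo."""
    g = nx.Graph()
    for positions in photos:
        for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2):
            if math.dist(pa, pb) <= BODY_LENGTH:
                if g.has_edge(a, b):
                    g[a][b]["weight"] += 1  # repeated proximity strengthens tie
                else:
                    g.add_edge(a, b, weight=1)
    return g

# Toy input: three photos with hypothetical bee positions.
photos = [
    {"g01": (10, 4), "n07": (11, 5), "n12": (30, 22)},
    {"g01": (12, 6), "n07": (13, 5), "n12": (12, 7)},
    {"g01": (20, 9), "n07": (21, 9), "n12": (40, 2)},
]
g = build_network(photos)

degree = dict(g.degree(weight="weight"))  # strength of all ties per node
# networkx treats 'weight' as a distance in shortest paths, so invert
# tie strengths before computing betweenness on the weighted graph.
for u, v, d in g.edges(data=True):
    d["dist"] = 1.0 / d["weight"]
betweenness = nx.betweenness_centrality(g, weight="dist")
print(degree, betweenness)

Note the inversion of edge weights before the betweenness computation: strong ties should act as short paths, a standard convention when shortest-path measures are applied to tie-strength networks.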
It is thus an index of the extent to which an individual links otherwise separate parts of the network. We focused on these two measures as they capture different aspects of network centrality: the size of the social neighbourhood of a bee, i.e. the individual's potential to groom many bees (degree), and the potential to influence the passage of pathogens through the network (betweenness). Both measures have been shown to differ according to age and task in A. mellifera 31 . Centrality measures were computed using Ucinet 6, and differences in centrality between allogroomers and same age range non-grooming bees were assessed using a generalized linear model (GLZ) with a negative binomial distribution and log-link function, with 10000 permutations to take into account the non-independence of network data. The full factorial model included hive of origin as a random factor and category (allogroomers, n = 140; same age range non-grooming bees, n = 711) as a fixed factor, as well as their interaction. Statistical analyses were performed with Ucinet 6 69 . Prediction 3. Allogroomers have a differential expression of antennal olfactory proteins compared to same age range non-grooming bees Allogroomers and same age range non-grooming bees (defined as above) were collected from 3 observation hives. As this study shows that allogrooming is mainly performed by bees within a well-defined age interval (Fig. 2a ), we sampled marked bees within this range. Once identified as allogroomers, bees were gently removed from the comb with forceps and immediately stored at −20 °C. For each allogroomer sampled, a non-allogroomer of the same age range was collected from the same hive. Non-allogroomers were defined as workers marked on the same day as the collected allogroomer that had never been observed performing allogrooming during the same or previous days. Antennal flagella were then dissected immediately before protein extraction. Extracts were prepared from pools of 5 individuals randomly sampled from the three colonies, with five biological replicates prepared for each group (allogroomers and non-grooming bees). Reagents and procedures used for protein extraction, digestion, purification and shotgun analysis (i.e. direct digestion of the entire protein extract without a prior separation step), as well as for protein identification and quantification, are described in Iovinella et al. 37 (see also Supplementary Material). Protein extracts were reduced, alkylated and digested prior to HPLC-MS analysis. The peptide mixture of each sample was submitted to nanoLC-nanoESI-MS/MS analysis on an Ultimate 3000 HPLC (Dionex, San Donato Milanese, Milano, Italy) coupled to an LTQ-Orbitrap mass spectrometer (Thermo Fisher, Bremen, Germany). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE 70 partner repository with the dataset identifier PXD017651. Data were searched using MaxQuant software (version 1.5.2.6) against databases downloaded from Uniprot containing all Apis mellifera proteins as well as those from common honeybee viruses. The identification and quantification data are contained in the MaxQuant output file proteinGroups.txt and are reported in the Supplementary Dataset. Search results were analyzed with the Perseus software platform (version 1.5.1.6) and IBM SPSS v20. Differential protein abundance was evaluated after filtering the data for proteins quantified in at least 3 replicates (out of the 10).
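For readers reproducing this step outside Perseus, the replicate filter can be expressed in a few lines of Python/pandas. This is a sketch, not the authors' pipeline: the "LFQ intensity" column prefix is an assumption based on typical MaxQuant output, and zeros are taken to mean "not quantified".

```python
# Sketch of the "quantified in >= 3 of 10 replicates" filter on the MaxQuant
# proteinGroups.txt table. Column naming is assumed, not taken from the study.
import pandas as pd

pg = pd.read_csv("proteinGroups.txt", sep="\t")

# Intensity columns for the 5 allogroomer and 5 non-groomer pools
intensity_cols = [c for c in pg.columns if c.startswith("LFQ intensity ")]

# MaxQuant reports 0 where a protein was not quantified in a sample
n_quantified = (pg[intensity_cols] > 0).sum(axis=1)
pg_filtered = pg[n_quantified >= 3]

print(f"{len(pg_filtered)} of {len(pg)} protein groups retained")
```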
Hierarchical clustering analyses were performed using average Euclidean distance and the default parameters of Perseus (300 clusters, maximum 10 iterations). Missing values were imputed (width = 0.3, downshift = 1.8) by drawing random numbers from a normal distribution to simulate signals from low-abundance proteins, using the default parameters. Analysis of the differential abundance of single proteins was performed using the Mann–Whitney test with a Monte Carlo resampling procedure (100000 samples), followed by Benjamini–Hochberg correction (FDR ≤ 0.01), as a substantial fraction of protein abundance values were not normally distributed. Prediction 4. Allogroomers have higher immunocompetence compared to same age range non-grooming workers Allogroomers and same age range non-grooming bees were collected from 4 observation hives following the same criteria used to sample bees for the antennal proteome analysis (see prediction 3). We compared the ability to clear bacterial cells from the haemolymph (i.e. bacterial clearance) between allogroomers and same age range non-grooming bees by injecting bees with the Gram-negative bacterium Escherichia coli , an immune elicitor commonly used to test immunocompetence in insects 71 , 72 , 73 , 74 . We measured bacterial clearance as a proxy of workers' immune strength, since injection of live bacteria provides an integrative view of the activation of the organism's immune system 53 , 54 , and different parameters used to measure antimicrobial immune responses in insects are positively correlated 75 , 76 . Moreover, E. coli is not naturally found in A. mellifera , so we could exclude its presence in our workers prior to artificial infection. Bacterial culture and injection were carried out following the same procedure used by Cappa et al. 54 (see also supplementary material). Each worker (allogroomers, n = 108; same age range non-grooming bees, n = 127) was infected by injecting 1 µl of inoculum, containing approximately 1.5 × 10⁵ cells, with a Hamilton™ (Bonaduz, Switzerland) microsyringe between the second and third tergites. After injection, bees were introduced in groups of about 10–20, separated by category (allogroomers and same age range non-grooming bees), into plastic cylindrical containers (Ø 10 cm × h 10 cm) provided with ad libitum honey as food and maintained under controlled conditions (~30 °C; 55% RH). All containers used to house infected bees were carefully washed with ethanol and dried before use to prevent any contamination, and were then randomly allocated to allogroomers or same age range non-grooming bees. Twenty-four hours later, each worker was quickly beheaded with scissors, and the whole body was then inserted into a sterile plastic bag with 10 mL of PBS, after removal of the sting and venom sac to prevent a reduction in bacterial cell viability due to venom antimicrobial peptides 77 , and processed with a Stomacher® 400 Circulator (230 rpm × 10 min) to homogenize the bee body. Afterwards, 0.1 mL of undiluted and serially diluted PBS suspensions (dilutions 10⁻¹, 10⁻²) of each sample were plated on LB solid medium supplemented with tetracycline (10 μg/mL) and incubated overnight at 37 °C. The following day, the colonies grown on the plate surface were counted and the viable bacterial count was expressed as Colony Forming Units (CFUs) per bee.
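The viable-count arithmetic implied by this plating scheme (0.1 mL plated from a 10 mL homogenate, at dilutions of 1, 10⁻¹ and 10⁻²) can be made explicit. The following Python helper is illustrative only; the function name and the worked example are not from the study.

```python
# Convert a plate count into CFU per bee, following the plating scheme above.
def cfu_per_bee(colonies, dilution, plated_ml=0.1, homogenate_ml=10.0):
    """colonies: colonies counted on one LB + tetracycline plate
    dilution: dilution factor of the plated suspension (1.0, 0.1 or 0.01)
    plated_ml: volume spread per plate (0.1 mL in the protocol)
    homogenate_ml: PBS volume used to homogenize one bee (10 mL)"""
    cfu_per_ml = colonies / (plated_ml * dilution)  # back-calculate the titre
    return cfu_per_ml * homogenate_ml               # scale to the whole bee

# Hypothetical example: 42 colonies on the 10^-1 plate
print(cfu_per_bee(42, 0.1))  # -> 42000.0 CFU per bee
```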
At least 8 same age range bees per colony for each category (4 allogroomers and 4 same age range non-grooming bees) were injected with 1 µL of PBS, homogenized and plated following the same procedure as for the E. coli-infected workers, to ensure the absence of other bacterial strains capable of growing on our tetracycline-supplemented LB agar plates. A total of 235 bees were infected with E. coli and plated: (i) allogroomers, n = 108; (ii) same age range non-grooming bees, n = 127. Bacterial challenge data (raw numbers of CFUs) were analyzed with a generalized linear model (GLZ) with a negative binomial distribution and log-link function. The full factorial model included hive of origin as a random factor and category (allogroomers vs non-grooming bees) as a fixed factor. Statistical analyses were performed with SPSS 16.0 (SPSS Inc., Chicago, IL) and Past v3.20 67 . Data availability Proteomic data are available via ProteomeXchange with identifier PXD017651. The other datasets generated and/or analysed during the current study are available in the Supplementary materials and/or from the corresponding author on reasonable request.
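For completeness, the GLZ specification used for the bacterial-challenge data can be reproduced in open-source tooling. The sketch below uses Python/statsmodels with invented CFU values; note that hive enters here as a fixed covariate for simplicity, whereas the authors fitted hive of origin as a random factor in SPSS.

```python
# Minimal sketch of a negative binomial GLM (log link) for CFU counts.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data, for illustration only
df = pd.DataFrame({
    "cfu":      [12000, 35000, 8000, 61000, 15000, 72000, 9000, 54000],
    "category": ["allogroomer", "non_groomer"] * 4,
    "hive":     ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# NegativeBinomial() uses a log link by default
fit = smf.glm("cfu ~ category + hive", data=df,
              family=sm.families.NegativeBinomial()).fit()
print(fit.summary())
```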
Honeybees that specialize in grooming their nestmates (allogroomers) to ward off pests play a central role in the colony, finds a new UCL and University of Florence study. Allogroomer bees also appear to have stronger immune systems, possibly enabling them to withstand their higher risk of infection, according to the findings published in Scientific Reports. Ectoparasites (parasites that live on the outside of a host's body, such as mites) are a growing threat to honeybees worldwide, so the researchers say that supporting allogrooming behavior might be an effective pest control strategy. Lead author Dr. Alessandro Cini, who began the project at the University of Florence before moving to the UCL Centre for Biodiversity & Environment Research, said: "An ectoparasitic mite, Varroa destructor, represents a major global threat to bee colonies. By understanding how allogrooming practices are used to ward off parasites, we may be able to develop strategies to promote allogrooming behavior and increase resilience to the parasites. "Here, we found worker bees that specialize in allogrooming are highly connected within their colonies, and have developed stronger immune systems. "We suspect that if more bees engaged in these allogrooming behaviors that ward off parasites, the colony as a whole could have greater immunity." Among bees, allogrooming consists of a worker using its mouth to remove debris, which may include parasites and other pathogens, from the body of another member of its colony. In bee colonies, different groups of worker bees conduct different activities; one such specialization is allogrooming, although it was not previously known how specialized the groomer bees are, or how their physiology may differ. The current study focused on Apis mellifera, commonly known as the western honeybee, which is the most common species of honeybee and the world's most-used bee for both honey production and pollination in agriculture. As allogrooming would likely put the grooming bees at an elevated risk of contracting pathogens and parasites, the researchers tested their immune systems, and found that their hemolymph (like blood, but for insects) could clear out potentially harmful bacteria more effectively than that of other bees in the colony. Co-author Dr. Rita Cervo from the University of Florence said: "By identifying a striking difference in the immune systems of the allogrooming bees, which are involved in tasks important to colony-wide immunity from pathogens, we have found a link between individual and social immunity." Honeybees engaging in allogrooming behavior. Credit: Dr Adele Bordoni, University of Florence The researchers found that allogroomer bees occupy a central position in the colony's social network, as they are more connected to bees across the colony than the average bee, enabling their grooming habits to benefit a large number of bees and keep the colony as pest-free as possible. The researchers also found that allogrooming is a relatively weak, transient specialty, as the groomer bees still devoted a similar amount of time to other tasks as the rest of the colony's worker bees.
The researchers say this shows that bees can develop physiological differences narrowly tailored to specific tasks, while still maintaining a degree of plasticity that enables them to switch to other tasks as needed. The researchers did not detect any differences in how well the allogroomer bees could detect when other bees needed grooming, as their antennae were not more finely tuned to relevant odors. It is possible they can detect who needs grooming in other ways, such as by noticing the 'grooming invitation dance', whereby bees shake their whole body from side to side.
10.1038/s41598-020-65780-w
Biology
High-voltage cryo-electron microscopy reveals tiny secrets of 'giant' viruses
Akane Chihara et al, A novel capsid protein network allows the characteristic internal membrane structure of Marseilleviridae giant viruses, Scientific Reports (2022). DOI: 10.1038/s41598-022-24651-2 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-24651-2
https://phys.org/news/2022-12-high-voltage-cryo-electron-microscopy-reveals-tiny.html
Abstract Marseilleviridae is a family of giant viruses showing a characteristic internal membrane with extrusions underneath the icosahedral vertices. However, such large objects, with a maximum diameter of 250 nm, are technically difficult to examine at sub-nanometre resolution by cryo-electron microscopy. Here, we tested the utility of 1 MV high-voltage cryo-EM (cryo-HVEM) for single particle structural analysis (SPA) of giant viruses using tokyovirus, a species of Marseilleviridae , and revealed the capsid structure at 7.7 Å resolution. The capsid enclosing the viral DNA consisted primarily of four layers: (1) major capsid proteins (MCPs) and penton proteins, (2) minor capsid proteins (mCPs), (3) scaffold protein components (ScPCs), and (4) internal membrane. The mCPs showed a novel capsid lattice consisting of eight protein components. ScPCs connecting the icosahedral vertices supported the formation of the membrane extrusions and possibly act like the tape-measure proteins reported in other giant viruses. The density on top of the MCP trimer was suggested to include glycoproteins. This is the first attempt at cryo-HVEM SPA. We found the primary limitations to be the lack of automated data acquisition and of software support for collection and processing, which together limited the achievable resolution. However, the results pave the way for using cryo-HVEM for structural analysis of larger biological specimens. Introduction The “giant viruses” are viruses of exceptionally large physical size, larger than small bacteria 1 . They also have a much larger genome (> 100 kilobases (kb)) than other viruses and contain many genes (> 50 genes) not found in other viruses 2 . One characteristic of these viruses is that they possess double-stranded DNA encapsulated in a lipid bilayer 3 . These large DNA viruses are now taxonomically classified into the phylum Nucleocytoviricota 4 , but have historically been referred to as nucleo-cytoplasmic large DNA viruses (NCLDVs). NCLDVs are an expansive clade of large viruses that possess double-stranded DNA and target a variety of eukaryotic hosts 5 . NCLDVs comprise several families, including the Asfarviridae , Ascoviridae , Iridoviridae , Marseilleviridae , Mimiviridae , Phycodnaviridae , and Poxviridae , and unclassified viruses such as cedratviruses, faustoviruses, medusaviruses, Mininucleoviridae , molliviruses, orpheoviruses, pacmanviruses, pandoraviruses, and pithoviruses 6 . New NCLDVs continue to be isolated and studied around the world. A new order, Megavirales , has been proposed based on the shared characteristics of these viruses 7 . NCLDVs exhibit several shapes depending on the species 2 . Asfarviridae , Ascoviridae , Iridoviridae , Marseilleviridae , Mimiviridae , Phycodnaviridae , faustoviruses, medusaviruses, pacmanviruses, and Mininucleoviridae exhibit an icosahedral shape. Poxviridae exhibits a brick shape. Cedratviruses, molliviruses, orpheoviruses, pandoraviruses, and pithoviruses exhibit an amphora shape. Sizes also vary: the amphora-shaped pithoviruses are the largest of the giant viruses, exceeding 2 μm in size, though there is significant variation in the actual dimensions 8 . On the other hand, the icosahedral mimivirus is ~ 500 nm in diameter (not including the fibrous filaments extending from the capsid) 9 , and its closest known relative, cafeteriavirus, is ~ 300 nm 10 . The brick-shaped poxviruses are approximately 350 × 270 nm, although precise dimensions are variable 11 .
Other icosahedral viruses are ~ 260 nm for medusavirus 12 , 13 , ~ 180 nm for iridovirus 14 , and ~ 190 nm for PBCV-1 15 . ASFV is ~ 250 nm, but its case is complicated because it has an external membrane 16 . Marseilleviridae is a family within this proposed new order of NCLDVs 17 , with a highly complex ~ 360 kb genome and a particle size of ~ 250 nm. The first member, Marseillevirus, was isolated in 2007 by culturing a water sample from a cooling tower in Paris, France 18 . Currently, Cannes 8 virus, Melbournevirus, Marseillevirus, Tokyovirus, Port-Miou virus, Lausannevirus, Noumeavirus, Insectminevirus, Tunisvirus, Brazilian marseillevirus, Golden mussel marseillevirus, and other species belong to this family, and these are classified into five lineages, A to E 19 , 20 . Several studies have also reported the presence of Marseilleviridae in humans 21 , 22 , 23 , 24 . However, there is controversy surrounding marseillevirus infection of humans, as other studies have found no evidence of it 25 , 26 . Melbournevirus, a member of the Marseilleviridae belonging to lineage A 27 , has previously been analysed by cryo-electron microscopy (cryo-EM) single particle analysis (SPA) at 26 Å resolution 28 . It shows an icosahedral capsid with a triangulation number of T = 309, a characteristic internal membrane inside the capsid that extrudes just underneath the vertices, and a unique large density body inside the nucleoid. Tokyovirus, the first member of the Marseilleviridae isolated in Asia (in 2016), is classified in lineage A 29 and was used in this study to investigate the characteristic capsid structure of the icosahedral Marseilleviridae . Few studies have revealed the capsid structure of large icosahedral viruses in detail. An electron microscopy study half a century ago showed that the large icosahedral virus capsid is composed of 20 trisymmetrons and 12 pentasymmetrons 30 , 31 , although those viruses are not classified as NCLDVs. Each of these is a cluster of pseudo-hexameric capsomers. In recent years, the structures of many viruses have been revealed at high resolution using cryo-EM SPA for determination of overall structure 32 , dynamics 33 , 34 , and assembly 35 . However, large icosahedral viruses present special challenges for high-resolution cryo-EM simply because of their size, which imposes hard limits on sample preparation, data acquisition and image reconstruction. A hard limit is one which cannot be overcome under the same imaging conditions (e.g., the particle count per micrograph, or the resolution limit imposed by the Nyquist frequency), whereas a soft limit can be overcome, e.g., by collecting more micrographs. Therefore, several different methods, including cryo-electron tomography, scanning electron microscopy, and atomic force microscopy, were initially combined with cryo-EM to investigate the structure of giant viruses 9 , 36 ; the different methods used to investigate giant viruses have been reviewed previously 37 . However, in 2019, cryo-EM SPA structures of Paramecium bursaria chlorella virus 1 (PBCV-1) at 3.5 Å resolution 15 and of African swine fever virus (ASFV) at 4.1 Å resolution 16 were reported. These high-resolution cryo-EM SPA studies of large icosahedral viruses have commonly used microscopes with an accelerating voltage of 300 kV.
However, these reports achieve such resolutions using a method called “block-based reconstruction” 38 , which focusses on sub-sections (“blocks”) of the virus to permit localised defocus refinement, resulting in higher-resolution reconstructions. These reports showed that the capsids of PBCV-1 and ASFV are composed of one “major” capsid protein (MCP) together with fourteen kinds (PBCV-1) or four kinds (ASFV) of “minor” capsid protein (mCP), respectively. The MCPs of PBCV-1 and ASFV include double “jelly roll” motifs, each of which consists of eight β strands 39 . The connecting loops in the motifs and their glycosylation sites provide these viruses with unique functions. The mCPs form a hexagonal network filling the space between the pseudo-hexagonal MCP trimers, which contributes to keeping the capsid structure stable. Further, PBCV-1 and ASFV possess an mCP called a “tape-measure” protein, a long filamentous protein that extends from a pentasymmetron to an adjacent pentasymmetron along the trisymmetron edge. It was proposed that the length of this tape-measure protein determines the capsid size 40 . Herein we focus on the single particle reconstruction of the entire tokyovirus using 1 MV cryo-HVEM (high-voltage electron microscopy). HVEM was originally developed to extend attainable resolution using shorter electron wavelengths 41 . Its major current usage is for thick specimens, which lower acceleration voltages are unable to penetrate 42 . Cryo-HVEM on biological samples has not previously been reported for single particle analysis, and only a few examples using tomography have been reported 8 , 43 , 44 . For thick samples (e.g., tokyovirus, with a maximum diameter of 250 nm), the influence of depth of field causes an internal focus shift, imposing a hard limit on attainable resolution 45 . However, increasing the accelerating voltage (shortening the wavelength of the emitted electrons) can increase the depth of field and improve the (electron) optical conditions in thick samples. The resolution achievable at a given acceleration voltage can be simply calculated with the following formula, assuming all points within a given thickness can be considered equally focussed: $$d = \sqrt{2\lambda t}$$ (1) where d is the resolution, t is the sample thickness (in this case, particle diameter), and λ is the electron wavelength 45 . For high-symmetry specimens, such as icosahedral viruses, sample thickness (or depth) is functionally equivalent to particle diameter. Using Eq. ( 1 ), for a 250 nm thick sample at an acceleration voltage of 300 kV, the electron wavelength is 1.97 pm, so the theoretical resolution limit is ~ 9.9 Å with the entire width of the particle at the same focal depth (black curves in Fig. 1 ). Extending this, at an accelerating voltage of 1 MV the electron wavelength is 0.87 pm, so the theoretical resolution limit improves to 6.6 Å. Figure 1 The depth of field effect in cryo-EM. Theoretical resolution limits caused by the size of particles (Eq. 1 ) are plotted at accelerating voltages of 200, 300 and 1000 kV, assuming uniform defocus (black curves). The resolutions at which phase error limits the theoretical maximum resolution of a cryo-EM reconstruction (Eq. 2 ) are plotted at accelerating voltages of 200 kV, 300 kV and 1000 kV (red curves). The dashed vertical line is drawn at the maximum diameter (250 nm) of tokyovirus.
Theoretical resolution limits at the maximum diameter of tokyovirus are shown to one decimal place. Full size image Considering the phase error in the contrast transfer function (CTF) of the signal 46 gives another estimate of the maximum attainable resolution (red curves in Fig. 1 ), calculated by: $$d = \sqrt{\frac{\lambda t}{2 \times 0.7}}$$ (2) In this case, the attainable resolutions improve to 5.9 Å at 300 kV and 4.0 Å at 1 MV, respectively (red curves in Fig. 1 ). However, the value of 0.7 in Eq. ( 2 ) 46 can vary based on the shape of the object and whether it is full or empty, and it has not been tested on real objects such as tokyovirus (maximum diameter of 250 nm). Previous discussions 47 , 48 , 49 cover the equations for calculating electron wavelengths and accounting for relativistic effects at given acceleration voltages, if one wishes to explore the mathematics further. Of course, many other factors affect the maximum resolution achieved with experimental data. Based on these estimates, we attempted to clarify the structure of an entire giant virus particle with a maximum diameter of 250 nm at higher resolution using 1 MV cryo-HVEM rather than the block-based approach. In this study, we first evaluated 1 MV cryo-HVEM for biological SPA. We then carried out SPA of a 250 nm maximum diameter icosahedral giant virus, tokyovirus. This resulted in a 7.7 Å three-dimensional (3D) reconstruction without using the block-based reconstruction technique. The cryo-EM map revealed a novel capsid protein network (Fig. 4 ). This consisted of MCP trimers and penton proteins covering the surface, an mCP layer consisting of eight protein components, and a scaffold protein component (ScPC) network. The penton protein is a unique pentameric protein located in the same layer as the MCP, but only at the fivefold vertices. The ScPCs, lying underneath the mCP layer and connecting the vertices, allow the internal membrane extrusions. As such, they may play a role similar to that of the tape-measure proteins that determine particle size in NCLDVs. An additional density on top of the MCP trimer is suggested to include a glycoprotein, which may function in host-cell interaction. Our results pave the way for a greater understanding of this family of NCLDVs and, in addition, provide the first evaluation of a new tool for the SPA of gigantic biological specimens. Results Performance of the 1 MV cryo-HVEM While the theoretical resolution limit described above is relaxed by moving to higher acceleration voltages (Fig. 1 ) 45 , 46 , this is of little practical use if other factors impose hard limits on attainable resolution. To this end, we examined the performance of the 1 MV cryo-HVEM (JEOL JEM-1000EES) using a Pt-Ir film and performed rotational averaging on a calculated power spectrum (Fig. 2 A). In the 1D intensity plot, Thon rings are clearly visible to 1.81 Å (Fig. 2 B). Comparing the performance of the 1 MV cryo-HVEM, equipped with a LaB 6 electron gun, against a high-performance 300 kV microscope with a thermal field emission electron gun (in this case a Titan Krios G2, Thermo Fisher Scientific) is also of great interest (Fig. S1 ). Both use a Gatan K2 Summit direct detector as the camera.
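Before turning to detector performance, the estimates of Eqs. (1) and (2) can be checked numerically. The short Python sketch below (not the authors' code) reproduces the wavelengths and resolution limits quoted above, using the standard relativistic electron-wavelength formula covered in the discussions cited earlier.

```python
# Reproduce the depth-of-field (Eq. 1) and phase-error (Eq. 2) limits quoted
# in the text for a 250 nm particle at 300 kV and 1 MV.
import math

H, M0 = 6.62607015e-34, 9.1093837015e-31   # Planck constant, electron rest mass
E, C = 1.602176634e-19, 2.99792458e8       # elementary charge, speed of light

def electron_wavelength(volts):
    """Relativistic de Broglie wavelength (m) at accelerating voltage V."""
    return H / math.sqrt(2 * M0 * E * volts * (1 + E * volts / (2 * M0 * C**2)))

t = 250e-9  # maximum diameter of tokyovirus (m)
for volts in (300e3, 1e6):
    lam = electron_wavelength(volts)
    d1 = math.sqrt(2 * lam * t)          # Eq. (1), uniform-defocus limit
    d2 = math.sqrt(lam * t / (2 * 0.7))  # Eq. (2), phase-error limit
    print(f"{volts/1e3:6.0f} kV: lambda = {lam*1e12:.2f} pm, "
          f"Eq.1 = {d1*1e10:.1f} A, Eq.2 = {d2*1e10:.1f} A")
#  300 kV: lambda = 1.97 pm, Eq.1 = 9.9 A, Eq.2 = 5.9 A
# 1000 kV: lambda = 0.87 pm, Eq.1 = 6.6 A, Eq.2 = 4.0 A
```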
The Modulation Transfer Function (MTF) of the 1 MV microscope is superior in counting mode across the entire spatial frequency range compared to that of the 300 kV microscope, although in super-resolution mode it drops significantly below 0.25 of the Nyquist frequency (Fig. S1 A). Detective Quantum Efficiency (DQE) is similarly superior for the 1 MV microscope in counting mode, but in super-resolution mode it shows a significant reduction for both the 1 MV and 300 kV microscopes (Fig. S1 B). Furthermore, the DQE of the 1 MV cryo-HVEM shows as much as a 40% drop in performance at lower spatial frequencies compared to the 300 kV cryo-EM. Beyond 0.6 of the Nyquist frequency, the two are comparable. In this study, however, super-resolution mode had to be used to increase the manual sampling efficiency for the large virus particles (Table 1 ). Figure 2 Performance of the cryo-HVEM (JEOL JEM-1000EES) equipped with a Gatan K2 Summit direct electron detector, using a Pt-Ir standard sample. ( A ) Power spectrum showing clear Thon rings. ( B ) A focussed view of the Thon rings. ( C ) 1D plot of the power spectrum calculated with the "Radial Profile Extended" plugin of Fiji 75 . Full size image Table 1 Cryo-EM data information. Full size table Overall structure of tokyovirus Figure 3 shows the overall structure of tokyovirus reconstructed using RELION 3.1 50 with icosahedral symmetry imposed. Tokyovirus has an outer capsid shell composed of MCP trimers arranged in an icosahedral form with T = 309 (h = 7, k = 13) (Fig. 3 A). Immediately below the outer shell (MCP) are layers of mCPs and ScPCs, in that order, and the internal membrane inside these capsid layers encloses the viral nucleoid (Fig. 3 B–D). In the extrusion of the internal membrane, the membrane forms multiple layers (asterisk in Fig. 3 C). The extrusion approaches the capsid vertex via the relatively large mCPs and low-density materials (arrow in Fig. 3 C). Compared to the MCP and mCPs, the densities of the ScPCs are smeared, indicating structural flexibility of this component (Fig. 3 C,D). Figure 3 E shows the estimated local resolution of the capsid layer of tokyovirus, sliced for internal visualisation. The highest resolution was found at the interface between the MCP and mCPs. Each part has a characteristic structure, which can be better viewed by careful filtering and segmentation of the reconstruction (Fig. 4 ). MCP trimers cover the whole surface of the viral capsid except for the fivefold vertices (light blue in Fig. 4 A), where pentons plug the holes in the vertices (penton in Fig. 4 A). The mCPs can roughly be classified into eight distinct components, which we have provisionally named glue, zipper, lattice, cement, support, and pentasymmetron components (PC)-α, β, and γ (Fig. 4 B). Further, the ScPCs are located underneath the mCPs (yellow in Fig. 4 ). Each of these protein components may consist of several mCP proteins; at the current resolution, and given the current annotation of the genome, it is not possible to segment each protein or to identify each protein according to its mass. We therefore identified these proteinaceous components by their shapes and locations. Unlike the other mCPs, the ScPCs form an anti-parallel chained array between pentasymmetrons along the trisymmetron interface (yellow in Fig. 4 ). Possibly as a result of the ScPC frames, the internal membrane has a characteristic structure that extrudes outwards below the fivefold vertices (Fig. 3 B,C, grey in Fig. 4 ). Figure 3 SPA 3D reconstruction of tokyovirus at 7.7 Å resolution.
( A ) An isosurface view, coloured by radius, showing the fivefold (black pentagon), threefold (black triangle) and twofold (black double-teardrop) symmetry axes. H, K indexes indicating the T = 309 icosahedron are included. ( B ) A cross-section view, with symmetry axes shown with arrows. ( A , B ) are coloured by radius in UCSF Chimera with the following parameters: blue, 910 Å; turquoise, 1010 Å; green, 1080 Å; yellow, 1125 Å; red, 1200 Å, and are shown at 2 σ. ( C ) A central slice of the 3D reconstruction. The asterisk marks a multilayer structure in the extrusion of the internal membrane. The arrow indicates weak densities connecting the vertex of the capsid and the internal membrane extrusion. ( D ) Focussed view of the marked box in ( C ), showing the delineation between major capsid protein (MCP), minor capsid protein (mCP), scaffold protein component (ScPC), internal membrane (IM), and nucleocapsid (NC). ( E ) Local resolution of the capsid, estimated by the blocres module of Bsoft 71 , focussing on the upper capsid edge, sliced to permit visualisation of internal density and shown at 3 σ. Scale bars: ( A – C ) 50 nm, ( D , E ) 10 nm. Full size image Figure 4 Segmentation of the tokyovirus structure. ( A ) The complete tokyovirus virion cut away to show each component. Individual components of the virion, the major capsid protein (MCP) layer (light blue), minor capsid protein (mCP) layer (blue), scaffold protein component (ScPC) array (yellow), and internal membrane (IM) (grey), are indicated. Individual segments are low-pass-filtered to improve visualisation clarity. ( B ) Focussed segmentation of the mCPs, ScPCs (yellow), and penton (red purple) of tokyovirus. The mCPs, coloured blue in ( A ), were classified into 8 components based on their structures, consisting of the lattice component (LtC) (orange to red), support component (SuC) (royal blue), cement component (CmC) (sky blue), zipper component (ZpC) (pink), glue component (GlC) (emerald green), and 3 pentasymmetron components (PC-α, β, and γ) (purple, light green, and cyan). The isosurface is shown at 2 σ. Full size image Minor capsid proteins (mCPs) We segmented the tokyovirus reconstruction, and then extracted and classified the mCPs into eight types of protein components based on their structural features and arrangements (Fig. 4 B). They are referred to as the Lattice component (LtC) (orange to red in Fig. 4 B), Support component (SuC) (royal blue in Fig. 4 B), Cement component (CmC) (sky blue in Fig. 4 B), Zipper component (ZpC) (pink in Fig. 4 B), and Glue component (GlC) (emerald green in Fig. 4 B), and three pentasymmetron components α, β, and γ (PC-α, β, and γ) (purple, light green, and cyan in Fig. 4 B), respectively. The mesh-shaped triangle formed by the mCPs is composed of three trapezoidal units consisting of five components, LtC, SuC, ZpC, CmC, and GlC, and these trapezoids are connected by rotating 120° around the threefold rotation axes. These protein components are further connected to PC-β and PC-γ at the edge of the triangle. The LtC (orange to red in Fig. 4 B) forms a wavy structure along the gaps between MCP trimers. This waveform repeats according to the number of MCP trimers present above it, and forms the base of the trapezoidal unit. Under the network structure formed by the LtCs, the SuC (royal blue in Fig. 4 B) creates a bridge between several LtCs (Fig. 4 B). It also connects the two ScPC pairs (yellow in Fig. 5 C) around the membrane extrusions, forming a three-dimensional ScPC framework surrounding the internal membrane (Fig. 5 C). Within the trisymmetron, the CmCs (sky blue in Fig. 4 B) connect the three trapezoidal units by interfacing with the ends of the LtCs, extending from the threefold axis toward the associated pentasymmetron and terminating at the ScPC edge. Two protein components, ZpC and GlC (pink and emerald green in Fig. 4 B), are involved in connecting adjacent trisymmetrons and run directly above the ScPC array (Fig. 5 A,B). The GlCs are present at the boundary between adjacent trisymmetrons and appear to glue them together. The ZpCs fill the gaps between LtCs at the trisymmetron interfaces, interlocking with the GlCs like the teeth of a zipper. Figure 5 Examination of the scaffold protein component network between the internal membrane and the capsid shell. ( A ) Slab density of the tokyovirus capsid along the trisymmetron interface, showing the innermost layer of the mCP network. Arrows indicate weak densities connecting the vertex of the capsid and the internal membrane extrusion. ( B ) Isosurface view of the mCP components coloured as in Fig. 4 (GlC, green; ZpC, hot pink; ScPC, yellow; SuC, royal blue), with distances from the internal membrane to the components indicated by arrows. ( C ) The tokyovirus internal membrane with the ScPC (yellow) and SuC (royal blue) network overlaid. ( D ) Focus on the pentasymmetron at the fivefold axis, showing distances from the internal membrane extrusion to PC-α, β, γ (purple, light green, and cyan) and the penton (red purple). The isosurface is shown at 2 σ. Full size image The ScPCs are present further inside, beneath the mCP layer (yellow in Figs. 4 , 5 ), forming a framework surrounding the internal membrane. The ScPC has a large head and a long tail, arranged along the edge of the trisymmetron (Figs. 4 and 5 B,C). Two ScPCs are connected head-to-tail in an anti-parallel manner, and both heads are located at the apex of the pentagon of the pentasymmetron, surrounding the membrane extrusion (Fig. 5 C). Adjacent ScPC terminal pairs are connected by the SuC (royal blue in Fig. 5 C). Penton and pentasymmetron components The penton density (central region in Fig. S2 A) is similar to that of PBCV-1 (Fig. S2 B), in that it has a “cap” region which aligns with the MCP layer and a lower region which aligns with the mCP layer. ASFV, by comparison, is missing this lower region (Fig. S2 C). We fitted the PDB model of the PBCV-1 penton (Fig. S2 E) to the upper region of the tokyovirus penton (Fig. S2 D), showing that the tokyovirus penton has a similar volume and structure to the PBCV-1 penton, though a homologue of the PBCV-1 penton protein has not been identified in tokyovirus. In ASFV, the insertion domain lies outside the MCP interface (bracket in Fig. S2 F), while neither tokyovirus nor PBCV-1 exhibits clear density for this domain. In ASFV, this domain may function to assist insertion of DNA into the capsid, as ASFV also has an external membrane to penetrate 16 . The domain is also present in Cafeteriavirus-dependent mavirus (a virus which infects a specific NCLDV) 51 . The penton density, including the lower region, lies ~ 150 Å from the internal membrane extrusion (Fig. 5 D). This gap is supported by the three pentasymmetron components (PC-α, β, and γ) and low-density materials (arrow in Fig. 3 C).
The three pentasymmetron components of PC-α, β, and γ keep a similar distance from the internal membrane extrusion (54–60 Å) (Fig. 5 D), which is also similar to that of the ScPC array along the trisymmetron interface (42 Å) (yellow in Fig. 5 B). PC-α (purple in Figs. 4 B and 5 D) is the largest, interacting with each other and the penton itself (Figs. 4 B and 5 D). Pairs of PC-β and PC-γ (light green and cyan in Figs. 4 B and 5 D) surround this group of PC-α. PC-β interacts primarily with a single PC-α, while also acting as terminal for the GlC (Fig. 4 B). PC- γ interacts with two neighbouring copies of PC-α. These protein components function to support the MCP array in the pentasymmetron. They also maintain contact with the mCP network under the trisymmetron and interact with the extrusion of the internal membrane via poorly resolved low density materials under the fivefold vertices. Internal membrane and its interaction with ScPCs Melbournevirus, one of Marseilleviridae , was first reported to possess a very characteristic extrusion of the internal membrane at the fivefold axes 28 . Tokyovirus also shows this extrusion of the internal membrane, and further, permitted identification of multiple layers in the extrusion (asterisk in Figs. 3 C and 5 A,B,D). Interestingly, the large heads of the ScPC surround the membrane extrusion together with SuCs (Fig. 5 C). The ScPC array and SuCs may play a role in distorting the membrane into this extruded shape at the fivefold axes. Major capsid protein A BLAST search 52 was performed using the proposed MCP sequence from the draft genome of tokyovirus 29 (Table 2 ). The MCP of tokyovirus shows the highest scores against iridovirus (PDBID: 6OJN) 14 and PBCV-1 (PDBID: 5TIP) 53 , with high coverage and moderate homology, although both comparisons of conserved sequences fall below the "identities" that allows for direct and safe comparisons 54 . The three respective MCP amino acid sequences were aligned by PROMALS3D 55 . The secondary structure of tokyovirus MCP was independently predicted by PSIPRED 56 , which identified two sets of eight β strands (B1 to I2 in Fig. S3 ) that form a (double) jelly roll motif like the MCP of other NCLDVs 7 . Table 2 Results of BLAST comparison of tokyovirus MCP versus other NCLDVs. Full size table In the MCP amino acid sequence of each virus, there is a large difference in loop length between β strands (Fig. S4 ). To investigate how the difference in loop length appears in the difference in MCP structure, a homology model of tokyovirus MCP was generated based on the MCP model of PBCV-1 using the SWISSMODEL server 57 . The MCP trimer from a threefold axis of the tokyovirus cryo-EM map was extracted (Fig. 6 A). Then, the homology model was manually optimised to fit the MCP density using COOT 58 and energetically minimized by PHENIX 59 to ensure that the structure is valid (Fig. 6 B). The resultant homology model was compared to both MCP models of PBCV-1 and iridovirus (Fig. S4 ). In tokyovirus MCP, the loops connecting the external parts of jelly roll 1 (JR1) or JR2 are longer than those of PBCV-1. The DE1 and FG1 loops are longer than those of PBCV-1 and of similar length to those of iridovirus. The HI1 loop is particularly extended (Figs. S3 and S4 ), with the 23-residue sequence of tokyovirus being longer than that of PBCV-1. Conversely, the DE2 loop in tokyovirus is truncated compared to those of both PBCV-1 and iridovirus. Other loops are of comparable length. 
These loops constituting the external part of the MCP model coincide with the external part of the density of the tokyovirus MCP trimer, but do not fill the entire cap density of the tokyovirus MCP trimer (Fig. 6 B). The density in the tokyovirus MCP which is not filled by the fitted homology model has been highlighted in red (Fig. 6 C). Figure 6 Cryo-EM map and fitted homology model of the tokyovirus MCP trimer. The homology model was generated by SWISS-MODEL 57 and adjusted to the volume with COOT 58 and refined with PHENIX 59 . ( A ) Side and top views of the extracted cryo-EM map of the MCP trimer. ( B ) Side and top views of the extracted MCP trimer cryo-EM map with the homology model fitted. ( C ) Side and top views of the extracted MCP cryo-EM map of the MCP trimer with the additional cap region coloured in red. Scale bar equals 2 nm. The isosurface of the extracted MCP segment is shown at 2 σ. Full size image The MCP density has a visible cap (red in Fig. 6 C) which does not correspond to any part of the fitted homology model. To check this, we extracted three MCPs from different areas of the capsid—one from the threefold axis itself, one from two MCP trimers away from the threefold axis, and one from an intermediate point between the two-, three- and fivefold axes. Direct cross correlation between these ranged from 0.99 to 0.91 when calculated in UCSF Chimera 60 . All MCP trimers possess this cap density. A previous report 28 for Marseilleviruses showed that a small protein indicated by proteomic analysis was present at approximately the same levels as the MCP protein. However, due to the nature of SPA (that of averaging many particles) and imposition of icosahedral symmetry, it is possible that not all MCPs are fully occupied. The correlation of the cap density ranges from 0.98 (the threefold cap density against the symmetrised threefold cap density) to 0.87 and 0.84 for the other two MCP positions, respectively. This lower correlation for the cap density may be caused by higher flexibility, which is likely if the protein is highly glycosylated. The symmetric nature of this cap implies the presence of protein rather than disordered post-translational glycosylation of the MCP. This is further supported by Periodic acid Schiff (PAS) staining 61 of SDS-PAGE-separated tokyovirus proteins, which do not show any glycosylated proteins at the molecular weight of the MCP. However, a band is evident at ~ 14 kDa (Fig. S5 ), indicating that the cap density is likely to include a protein which has been glycosylated interacting with the top of the MCP. Discussion Here we applied 1 MV cryo-HVEM to the structural analysis of tokyovirus as a complete viral particle, overcoming some of the resolution limits imposed by the effect of depth of field on exceptionally large particles. The 3D reconstruction showed the highest resolution of a giant virus larger than 200 nm without utilising the block-based reconstruction technique 38 or similar methods 16 . However, the current resolution of 7.7 Å of tokyovirus has not reached the theoretical maximum of 4.0 Å for ~ 250 nm particles (red curve in Fig. 1 ). In practice, many factors affect the ability to achieve a given resolution beyond the accelerating voltage. Therefore, before studying the biological specimen, the performance of the microscope system was confirmed with Thon rings of a Pt-Ir standard film extending beyond 2 Å (Fig. 2 ). 
This demonstrates that the microscope and electron source would not be limiting factors in the resolutions achieved with the giant virus particles, even though the instrument does not have a field emission electron source and uses a less coherent LaB 6 source. Although many factors limited the resolution of tokyovirus to 7.7 Å, the main limiting factor is likely the amount of data that can be collected. Cryo-HVEM requires the manual collection of micrographs. The 1182 particles from 160 micrographs used in total are an extremely small dataset compared to other reported SPA studies of giant viruses using 300 kV microscopes: PBCV-1, with a maximum diameter of 190 nm, was reconstructed at 4.4 Å resolution from ~ 13,000 particles in 5624 images 15 , and ASFV, with a 250 nm maximum outer-capsid diameter, was reconstructed at 14.1 Å resolution from 16,266 particles in 17,135 micrographs 62 and at 8.8 Å resolution from 63,348 particles in 64,852 micrographs 16 . The size of the virus imposes limits which require a careful balance between a lower magnification, to increase the number of particles imaged per micrograph, and a magnification high enough to achieve worthwhile resolutions in reconstructions. Tokyovirus, with a maximum diameter of 250 nm, is so large that even at relatively low magnifications there were at best around 12 virus particles per micrograph (often ~ 6, e.g., in Fig. S6 A) usable for 2D classification. In this case, the “super-resolution” mode (7420 × 6780 detector dimensions) of the K2 Summit camera was used for data acquisition, keeping the pixel spacing below 1.5 Å/pixel (Table 1 ). Automated acquisition of micrographs, now near-ubiquitous in mainstream cryo-EM, would greatly increase the quantity of data that can be acquired in a given timeframe. However, we selected “super-resolution” mode for the manually operated cryo-HVEM to maintain a better particle count per micrograph while keeping a higher sampling frequency, even though the detector performance suffers slightly (Fig. S1 ). This probably imposed a further limitation on the attainable resolution in this experiment. Another limitation on the attainable resolution lies in the software. Some elements of general-purpose cryo-EM SPA image processing do not support HVEM accelerating voltages. Gctf 63 was not used as it does not support 1 MV data; likewise, dose weighting of 1 MV data is unsupported in both MotionCor2 64 and the RELION implementation 65 , as there are currently no parameters for radiation damage at 1 MV. Fortunately, CTFFIND 66 does support 1 MV data for contrast transfer function (CTF) estimation, so we used it in this study. We were unable to use dose weighting, although this is somewhat offset by the higher acceleration voltage, which reduces radical damage 67 . Particle polishing in RELION 65 also provided little benefit, possibly because of the large size of the viruses on the micrograph, where distortions across the micrograph potentially cause per-frame variation in apparent virus deformation. Ewald sphere correction of whole particles, recently implemented in the RELION software suite 65 , has also been shown to improve reconstructions from 300 kV microscopes across a range of particle diameters. We tested Ewald sphere correction for our tokyovirus reconstruction taken with the 1 MV cryo-HVEM and found no improvement. This may be because the 7.7 Å reconstruction of tokyovirus is not of high enough resolution to benefit.
The software support mentioned above, particularly dose weighting, will be needed for high-resolution SPA of giant viruses using 1 MV microscopes in the future. Ultimately, cryo-HVEM provides benefits for larger particles. The decreased defocus gradient across a particle aids high-resolution whole-particle reconstruction. Sample damage from the electron beam is also decreased, which is of particular importance with biological samples. Furthermore, increased beam penetration permits improved visualisation of internal structure, as used in cryo-electron tomography of pithovirus, an amphora-shaped giant virus ~ 800 nm thick 8 . Beyond the more widely available 300 kV cryo-EM, it permits study of the finer details of even larger viruses, which lower acceleration voltages are unable to penetrate. Finally, the reconstruction methodology is simpler than that of block-based reconstruction. Block-based reconstruction is a powerful technique, but, as the name implies, it breaks the whole structure down into blocks. Stitching these back together produces a composite map, and while such maps are gaining in popularity, they have some inherent dangers. For example, FSC calculations cannot be performed on composite maps without artefacting, and sharpening can reveal overlap artefacts between maps. We tested block-based reconstruction on data sets from cryo-HVEM, but it did not improve the attained resolution. Although the exact reasons are unknown, our working hypothesis is that the relatively low number of particles in the final reconstruction and the lower curvature of the Ewald sphere at 1 MV (compared to 300 kV) decrease the effectiveness of block-based reconstruction. However, block-based reconstruction methods may have some advantages in cryo-HVEM given sufficient particle numbers, because they can overcome minor distortions of very large symmetric particles, yielding improved resolutions. For objects with low symmetry, the direct method using higher-energy electrons is still needed. The 7.7 Å capsid structure of tokyovirus represents a new capsid network among NCLDVs. The trapezoidal arrays of LtCs in tokyovirus do not form a single underlayer beneath the MCP like the mCPs of PBCV-1 15 or ASFV 16 ; instead they rely on an additional component, the CmC, which connects the three rotated trapezoids within the trisymmetron (Figs. 5 , S7 ). Two protein components, GlC and ZpC, are present at the trisymmetron interfaces rather than a single protein (P11) as in PBCV-1. The SuC also interacts with each trapezoidal lattice on the interior surface and further forms a connection with the ScPCs, a feature which does not appear to be present in the reconstructions of PBCV-1 and ASFV. The T = 309 ( h = 7, k = 13) capsid is clearly displayed in the 3D reconstruction (Fig. 3 ). With this structure we can newly identify an intermediate “scaffold” protein component (ScPC) array between the capsid and the internal membrane (yellow in Figs. 4 and 5 ). Anti-parallel pairs of ScPCs run along the trisymmetron interface and connect with each pentasymmetron via the pentasymmetron components. These features are functionally analogous to those of the “tape-measure” proteins 40 reported in the PBCV-1 and ASFV cryo-EM maps 15 , 16 , namely the long filamentous mCPs P2 in PBCV-1 and M1249L in ASFV. However, because tokyovirus relies on an array of anti-parallel ScPC pairs rather than a single filamentous protein, its capsid construction may occur through a different mechanism. The presence of this scaffold network may indicate that the scaffold is responsible for imposing restraints on the capsid dimensions.
Greater clarity on these flexible ScPCs will be required to elucidate their interactions in more detail and perhaps shed more light on their potential role in capsid construction. To build the MCP homology model, we extracted the central MCP trimer from a trisymmetron, and the homology model was rigid-body-fitted into the density. Regions of the homology model which fit the density poorly were manually fixed using COOT 58 and energetically refined with PHENIX 59 (Fig. 6 B). An empty “cap” region was present on each MCP trimer density when the MCP homology model was fitted to it, although thus far we have been unable to clarify this region sufficiently for model fitting. As such, we were initially unsure whether this density arises from smaller proteins interacting with the MCP or from post-translational modification of the MCP. We identified a candidate for the small “cap” protein, as Periodic acid Schiff (PAS) staining 61 of SDS-PAGE-separated proteins from purified tokyovirus particles was used to identify potential glycoproteins. This revealed a protein running at ~ 14 kDa (Fig. S5 B). However, the band is weak compared to that of the MCP in the Coomassie-stained gel (Fig. S5 A). The cap density may not be present on all MCP trimers (so-called “partial occupancy”), and icosahedral averaging will consequently affect its clarity. In the case of PBCV-1, sugar chains are directly bound to the MCP 53 , while in the case of ASFV there is a second membrane external to the capsid itself 16 , so additional capsid glycosylation is less important. We previously found that some species in the Marseilleviridae family induce bunch formation in host amoebae 20 . Simultaneously, the newly born viruses adhered to and aligned on the host cell surface in the process. A member of the Mimiviridae , tupanvirus, also shows evidence suggesting a similar mechanism, where a mannose-binding protein (MBP) expressed on the amoeba cell membrane was suggested to be involved in intercellular adhesion 68 . In marseilleviruses, 10 of the 49 identified virion proteins were reported to be glycosylated 18 . The glycoprotein may play a role in this bunch formation and in adhesion to the cell, potentially increasing the speed of transmission, but further study is needed to clarify its precise function. The tokyovirus and PBCV-1 pentons are similar in depth (Fig. S2 A,B). When the tokyovirus penton density is fitted with the cryo-EM-derived PDB model of PBCV-1 (PDBID: 6NCL) (Fig. S2 D), density is accounted for by the penton base protein while an unmodelled lower region remains, level with the mCP layer. The same is evident in PBCV-1 (Fig. S2 E). The ASFV penton protein (Fig. S2 F) fits a single jelly roll in the same position and orientation as the PBCV-1 penton, with the insertion domain on the outer face of the capsid in density, a domain for which neither PBCV-1 nor tokyovirus possesses clear density. A BLAST search of the tokyovirus genome, using both the PBCV-1 penton and the Cafeteriavirus-dependent mavirus penton which was modelled into the ASFV penton structure, did not yield any matches, so identification and homology modelling of the penton protein has not been possible. Given the low BLAST metrics for the tokyovirus MCP against both PBCV-1 and mavirus (Table 2 ), this is not necessarily surprising. To clarify the nature of the protein occupying the lower region of the penton, we must achieve a higher-resolution reconstruction.
Immediately around this small pentamer is a symmetric array of further protein components, PC-α, β and γ, within the mCP layer; these support the MCP array in the pentasymmetron in place of the lattice proteins and interact with the internal membrane extrusion via low-density materials (arrows in Figs. 3 C and 5 A). They may be roughly analogous to the “lantern” proteins of ASFV 16 . They further extend to the edge of the pentasymmetron and interact with the ScPC array (Fig. 4 B). This is a novel structure among NCLDVs and is likely to play a role in the formation of the internal membrane extrusion. In conclusion, based on the theoretical resolution estimation, we applied 1 MV cryo-HVEM to single particle analysis of tokyovirus, a large virus particle with a maximum diameter of 250 nm. This revealed the capsid structure at 7.7 Å from a limited quantity of data, without using a block-based reconstruction technique. The cryo-EM map showed a novel capsid network allowing the characteristic internal membrane extrusions under the icosahedral vertices. The ScPC array represents a framework that supports the extruded structure of the internal membrane and possibly functions like the tape-measure proteins reported in other NCLDVs. The unique PC-α, β, and γ are also likely to support the membrane extrusions. While tokyovirus shares many similarities with PBCV-1 and ASFV regarding MCP structure, we found a cap density on top of the MCP that was suggested to contain a glycoprotein. The glycoprotein identified in Marseilleviridae possibly acts to mediate interaction with the host cell surface; this in turn may cause the bunch formation of host cells previously demonstrated in some species 20 . The 1 MV cryo-HVEM installed at Osaka University had sufficient optical performance to reach the theoretical resolution limit, but the current resolution of the tokyovirus capsid was limited to 7.7 Å. One of the major problems was the limited quantity of data. This will be solved in the future by installing automated data acquisition software. Using a larger-format detector, or improving the super-resolution performance of the Gatan K2 detector when paired with HVEM, would also remedy the problem, because the significant drop in low-frequency signal in super-resolution mode at 1 MV may be a limitation of the super-resolution estimation algorithm for higher-energy electrons in the current K2 detector (Fig. S1 ). Another major problem is the limited software support for 1 MV micrographs. If advanced techniques such as dose weighting during motion correction and Bayesian particle polishing can be used, and full CTF parameter refinement can be optimized, their combination should allow the resolution to approach the theoretical value. Further optimization and detailed analytical evaluation to solve the problems identified in this study will be necessary for cryo-HVEM with a LaB 6 electron source to be used for structural analysis of large biological samples. To characterise the resolution limitations of cryo-HVEM, we should also test SPA using well-characterised specimens, such as apoferritin, TMV, and/or GroEL. In addition, careful comparison with mainstream 300 kV cryo-EM would help demonstrate the potential of cryo-HVEM in the future. Methods Tokyovirus growth, purification, and sample preparation Tokyovirus 29 was originally provided by Professor Masaharu Takemura, Tokyo University of Science.
It was propagated in Acanthamoeba castellanii cells cultured in PYG medium (2% w/v proteose peptone, 0.1% w/v yeast extract, 4 mM MgSO 4 , 0.4 mM CaCl 2 , 0.05 mM Fe(NH 4 ) 2 (SO 4 ) 2 , 2.5 mM Na 2 HPO 4 , 2.5 mM KH 2 PO 4 , 100 mM sucrose, pH 6.5). Viruses were purified as described previously 27 , 28 . Briefly, the infected culture fluid was collected and centrifuged for 10 min at 1,500 g , 4 °C to remove dead cells, before the supernatant was centrifuged for 35 min at 10,000 g , 4 °C. The pellet was suspended in 1 ml of PBS buffer and loaded onto a 10–60% sucrose gradient, before further centrifugation for 90 min at 8,000 g , 4 °C. The concentrated band was extracted and dialysed in PBS before a further round of centrifugation under the same conditions. The pellet was suspended in PBS before cryo-EM grids were prepared. SDS-PAGE and periodic acid Schiff staining A sample of the purified tokyovirus was treated with SDS-PAGE sample buffer (2332330/AE-1430 EzApply, ATTO) and boiled for 5 min. 10 μg of each denatured sample was loaded into a well (2331635/CHR12.5L c, ATTO), and 7 μl of a standard molecular weight marker (2332340/AE-1440, ATTO) was run alongside to estimate the molecular weights of the resulting protein bands. Electrophoresis was performed at a constant current of 10.5 mA with an electrophoresis buffer (2332323/WSE-7055, ATTO) using an electrophoresis system (2322240/WSE-1010, ATTO). The resulting gel was stained with a reagent (2332370/AE-1340, ATTO). Periodic acid Schiff staining was carried out according to the protocol described in the PAS stain kit (GlycoGel Stain Kit 24693-1, PSI). Performance test of 1 MV cryo-HVEM A JEOL JEM-1000EES cryo-HVEM (JEOL Inc.) installed at the Research Center for Ultra-HVEM of Osaka University was used. The microscope was equipped with a LaB 6 filament electron gun operating at an accelerating voltage of 1 MV, an autoloader stage that can store up to 12 frozen-hydrated EM grids under cryogenic conditions, and a K2 IS direct detector camera (Gatan Inc.). The stage and the sample storage were kept cooled with liquid nitrogen that is automatically replenished. Image data were collected manually. For the resolution limit test of the 1 MV cryo-HVEM, Pt-Ir film (JEOL Inc.) was imaged at a nominal magnification of 100,000× (0.22 Å/pixel on the specimen) and 0.6–2 μm defocus. Movie images were recorded on the K2 IS camera in counting mode at a dose rate of ∼ 8 e − /pixel/s with 0.2 s/frame for a 1 s exposure. Motion-corrected full frames were summed with DigitalMicrograph software (Gatan Inc.). For the image quality test, MTF and DQE curves were measured using the shadow of a beam-stopper metal blade in both counting mode and super resolution mode of the Gatan K2 IS in the cryo-HVEM. The data were processed with FindDQE 69 . For comparison, data were collected under the same conditions using a 300 kV Titan Krios G2 (Thermo Fisher Scientific) and K2 Summit camera installed in the institute. Cryo-HVEM data acquisition of tokyovirus An aliquot (2.5 μL) of the purified tokyovirus particles was placed onto R 1.2/1.3 Quantifoil grids (Quantifoil Micro Tools) that had been glow-discharged using a plasma ion bombarder (PIB-10, Vacuum Device Inc.) immediately beforehand. The grid was then blotted (blot time: 10 s, blot force: 10) and plunge-frozen using a Vitrobot Mark IV (Thermo Fisher Scientific) at settings of 95% humidity and 4 °C. A total of 304 micrographs were manually collected using a JEOL JEM-1000EES (JEOL Inc.) equipped with an autoloader stage and K2 Summit camera (Gatan Inc.)
optimised for 1 MV HVEM, over a total of seven sessions using three grids. Micrograph movie frames were collected in super resolution mode at a magnification equivalent to 1.456 Å/pixel with a target defocus of 2–4 μm. Each exposure was 32 s at a frame interval of 0.2 s, for a total of 160 frames per micrograph. The frames were stacked using EMAN2 70 . Image processing of tokyovirus Micrograph movies were imported into a beta build of RELION 3.1 before motion correction with the RELION implementation 65 of MotionCor2 64 . Motion correction was performed using 5 × 5 patches with B-factor blurring of 500 Å 2 . Contrast transfer function (CTF) estimation of the images was carried out using CTFFIND 4.1.10 66 with the following parameters: lower defocus limit, 2,500 Å; upper defocus limit, 80,000 Å; step size, 100; exhaustive search. After the first run using an FFT box size of 512, good CTF fits were selected and failed fits were re-run using a larger FFT box size; this was repeated up to an FFT box size of 2,048. Other parameters were left at default settings. After this point, micrographs that failed to yield a good fit were discarded. We define a “good” CTF fit as one with distinguishable Thon rings to which the simulated power spectrum aligns. This resulted in a total of 156 micrographs being carried forward. 1,529 particles were manually picked, extracted with 4× downsampling to 5.824 Å/pixel (2,400 × 2,400 pixel boxes become 600 × 600 pixel boxes) and 2D classified into 40 classes with a circular mask diameter of 2,600 Å, angular sampling of 2° (automatically increased to 3.75° due to use of GPU acceleration) and a search range of 7 pixels. 1,458 particles were carried over in good classes and 2D classified a second time with 1° angular sampling (automatically increased to 1.825°) and “ignore CTF until first peak” enabled, resulting in 1,419 particles in good classes. All 3D processing was carried out with icosahedral symmetry imposed. An initial model was generated with the stochastic gradient descent algorithm in RELION 3.1. The particles were passed to 3D classification into 5 classes, using 1.8° angular sampling for 25 iterations followed by 0.5° angular sampling for a further 25 iterations. The two best classes were selected, totalling 1,297 particles. 3D refinement was carried out using only a spherical mask. Particles were re-extracted and re-centred with 3× downsampling (4.368 Å/pixel) and refined further, then re-extracted and re-centred with 2× downsampling (2.912 Å/pixel). Magnification anisotropy refinement and CTF refinement were carried out, improving the attained resolution. 3D classification into 5 classes with alignment disabled was then performed using a mask blocking the disordered internal volume (viral DNA), and the best class, comprising 1,182 particles, was selected. Particle polishing had a negligible effect when used. The final post-process resolution depends heavily on the mask used: a capsid-only mask results in a 7.7 Å estimated resolution, while including the scaffold proteins and internal membrane lowers this to 8.7 Å, and a soft spherical mask gives 9.4 Å. Local resolution was calculated using the blocres module of Bsoft 71 with no mask applied. The full pathway through the SPA 3D reconstruction is summarized in Fig. S6 . Bioinformatics, model building and fitting of MCP The MCP protein of tokyovirus was identified from the previously reported genome.
The primary amino acid sequence of tokyovirus MCP was compared with those of other NCLDVs by BLASTP 72 , and the SWISS-MODEL 57 and I-TASSER 73 servers were used to generate a homology model using the MCP model of PBCV-1 (PDBID: 5TIP) 53 as a template. This homology model was rigid-body fitted into a tokyovirus MCP trimer extracted from the threefold symmetry axis, and sections of the model that lay outside the density were adjusted with COOT 58 and energetically refined with PHENIX 59 . The central point of the trisymmetron was identified visually and extracted using the Volume Eraser tool in UCSF Chimera 60 , then the volume was cropped to 100³ pixels. The resulting model was saved. The trisymmetron map was segmented using the SEGGER 74 module of UCSF Chimera, and segments that overlapped the model were saved into a separate volume. For the penton base protein, the PDB structure of the penton of PBCV-1 15 was fitted to the centre of the fivefold axis and SEGGER was used to extract the region. This also selected the majority of the five adjacent MCPs. Therefore, molmap in UCSF Chimera 60 was used to generate a 15 Å molecular map, and the map was passed to the relion_mask_create module of RELION 3.1 to generate a binary mask with an extension of 5 pixels and a soft edge of 5 pixels. The mask was then imposed on the extracted region. Visualisation of data The 3D reconstructions of tokyovirus were visualised using the relion_display module of RELION 3.1 50 or UCSF Chimera 60 , depending on dimensionality. Pt-Ir power spectra were visualised using Fiji 75 , and the rotational averages were calculated using the Radial Profile Extended plugin. Data availability The cryo-HVEM reconstructions have been uploaded to the EMDB and are available under the following accession codes: 30797, capsid-only post-process mask, 7.7 Å; 30798, capsid, scaffold and internal membrane post-process mask, 8.7 Å; 30799, soft spherical post-process mask, 9.4 Å.
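As a quick consistency check on the processing pipeline described in the methods above (not code from the paper), the Nyquist limit implied by each extraction pixel size can be tabulated; it confirms that the final 2× downsampled stage still samples finely enough for the reported 7.7 Å map.

```python
# Sanity check: Nyquist limit (2 x pixel size) at each downsampling stage
# used in the reconstruction. Pixel sizes are those quoted in the methods.
SUPER_RES_APIX = 1.456  # Å/pixel as collected (super resolution mode)

for factor in (4, 3, 2):
    apix = SUPER_RES_APIX * factor
    print(f"{factor}x downsampling: {apix:.3f} Å/px -> Nyquist {2 * apix:.3f} Å")

# 4x: 5.824 Å/px -> Nyquist 11.648 Å (2D classification stage)
# 3x: 4.368 Å/px -> Nyquist  8.736 Å
# 2x: 2.912 Å/px -> Nyquist  5.824 Å, beyond the final 7.7 Å resolution,
# so the downsampling itself was not the resolution-limiting factor.
```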
Despite their name, giant viruses are difficult to visualize in detail. They are too big for conventional electron microscopy, yet too small for the optical microscopy used to study larger specimens. Now, for the first time, an international collaboration has revealed the structure of tokyovirus, a giant virus named for the city in which it was discovered in 2016, with the help of cryo-high-voltage electron microscopy. They published their results on Dec. 11 in Scientific Reports. "'Giant viruses' are viruses of exceptionally large physical size, larger than small bacteria, with a much larger genome than other viruses," said co-corresponding author Kazuyoshi Murata, project professor, Exploratory Research Center on Life and Living Systems (ExCELLS) and National Institute for Physiological Sciences, the National Institutes of Natural Sciences in Japan. "Few studies have revealed in detail the capsid structure (the protein shell encapsulating the double-stranded viral DNA) of large icosahedral, or 20-sided, viruses. They present special challenges for high-resolution cryo-electron microscopy because of their size, which imposes hard limits on data acquisition." To overcome the challenge, the researchers used one of the few high-voltage electron microscopy (HVEM) facilities in the world that is equipped to image biological specimens. This type of electron microscope uses a higher accelerating voltage, which theoretically increases the penetrating power of the beam and allows thicker samples to be imaged at higher resolutions. A) Particle structure of tokyovirus. From the outside in: MCP, major capsid protein (light blue); mCP, minor capsid protein (blue); ScP, scaffold protein (yellow); IM, internal membrane (gray), covering the viral DNA inside. The asterisk (*) marks the protruding structure on the internal membrane. B) A novel network structure of mCP and ScP (yellow), in which 7 types of mCP structures are combined in a complex manner. The mCPs and ScP are viewed from inside the shell. A triangular mCP network extends between the 5-fold vertices (bottom left and right). Credit: Scientific Reports (2022). DOI: 10.1038/s41598-022-24651-2 At the Research Center for Ultra-High Voltage Electron Microscopy at Osaka University, the team imaged flash-frozen tokyovirus particles, with the goal of reconstructing a single particle in full detail for the first time. "Cryo-HVEM of biological samples has not previously been reported for single particle analysis," Murata said. "For thick samples, such as tokyovirus with a maximum diameter of 250 nanometers, the influence of the depth of field causes an internal focus shift, imposing a hard limit on attainable resolution. Accelerating the voltage, which shortens the wavelength of the emitted electrons, can increase the depth of field and improve the optical conditions in thick samples." Prepared with these adjustments, the researchers imaged tokyovirus in detail to clarify the structure of the full virus particle. They achieved a 3D reconstruction at a resolution of 7.7 angstroms, just slightly lower than the technology could theoretically attain. Murata said the resolution was limited by the amount of data the team could collect. "Cryo-HVEM currently requires the manual collection of micrographs taken with the microscope," Murata said. Micrographs are photographs taken with the microscope. "We identified 1,182 particles from 160 micrographs, which is an extremely small number compared to reports of other giant viruses imaged with less powerful microscopes."
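Murata's depth-of-field argument can be made quantitative. The sketch below is not from the paper: it combines the standard relativistic de Broglie wavelength formula with one commonly used depth-of-field criterion (particle thickness t ≤ 2d²/λ at resolution d, an assumption of this sketch), applied to the 250 nm particle diameter quoted above.

```python
import math

# Physical constants (SI units).
H = 6.62607015e-34       # Planck constant, J*s
M0 = 9.1093837015e-31    # electron rest mass, kg
E = 1.602176634e-19      # elementary charge, C
C = 2.99792458e8         # speed of light, m/s

def electron_wavelength_angstrom(kilovolts: float) -> float:
    """Relativistic de Broglie wavelength of an electron at a given voltage."""
    v = kilovolts * 1e3
    p = math.sqrt(2 * M0 * E * v * (1 + E * v / (2 * M0 * C ** 2)))
    return H / p * 1e10

THICKNESS = 2500.0  # Å, the ~250 nm maximum diameter of tokyovirus

for kv in (300, 1000):
    lam = electron_wavelength_angstrom(kv)
    # Depth-of-field criterion: the particle stays within focus at resolution
    # d when t <= 2*d**2/lam, i.e. d >= sqrt(lam*t/2).
    d = math.sqrt(lam * THICKNESS / 2)
    print(f"{kv:>4} kV: lambda = {lam:.4f} Å, focus-limited resolution ~ {d:.1f} Å")

# ~0.0197 Å wavelength and ~5.0 Å limit at 300 kV versus ~0.0087 Å and ~3.3 Å
# at 1 MV, illustrating why the higher voltage relaxes the thickness limit.
```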
According to Murata, a lower magnification increases the number of particles included in each micrograph, but the magnification must remain high enough to image the particles in detail. The automated acquisition of micrographs, routinely used in standard cryo-electron microscopy, has facilitated a significant increase in the number of images captured at high magnification; the manual mode, however, allowed the researchers to maintain a better particle count per micrograph while also maintaining a higher sampling frequency. A) The MCP structural model of another virus fitted to the MCP density. The presence of unknown proteins other than MCP is recognized in the upper part (the white area to which the structural model is not fitted). B) The unknown protein portion is shown in red. Credit: Scientific Reports (2022). DOI: 10.1038/s41598-022-24651-2 Even with limited samples and slightly lower resolution, Murata said, the researchers gathered enough information to understand the giant virus particle's structure with more clarity than ever before. "The cryo-HVEM map revealed a novel capsid protein network, which included a scaffold protein component network," Murata said, noting that this scaffolding network's connections between vertices in the icosahedral particle may determine the particle size. "Icosahedral giant viruses, including tokyovirus, have large, uniformly sized functional cages created with limited components to protect the viral genome and infect the host cell. We are beginning to learn how this works, including the advanced functions of the structures and how we might be able to apply this understanding." The researchers plan to implement automated acquisition software capable of maintaining their desired parameters, to image more giant virus structures and discover common architectures, in order to better understand how such a limited set of structural components can serve these multifunctional organisms, Murata said.
10.1038/s41598-022-24651-2
Physics
New detection method turns silicon cameras into mid-infrared detectors
David Knez et al, Infrared chemical imaging through non-degenerate two-photon absorption in silicon-based cameras, Light: Science & Applications (2020). DOI: 10.1038/s41377-020-00369-6 Journal information: Light: Science & Applications
http://dx.doi.org/10.1038/s41377-020-00369-6
https://phys.org/news/2020-07-method-silicon-cameras-mid-infrared-detectors.html
Abstract Chemical imaging based on mid-infrared (MIR) spectroscopic contrast is an important technique with a myriad of applications, including biomedical imaging and environmental monitoring. Current MIR cameras, however, offer limited performance and are much less affordable than mature Si-based devices, which operate in the visible and near-infrared regions. Here, we demonstrate fast MIR chemical imaging through non-degenerate two-photon absorption (NTA) in a standard Si-based charge-coupled device (CCD). We show that wide-field MIR images can be obtained at 100 ms exposure times using picosecond pulse energies of only a few femtojoules per pixel through NTA directly on the CCD chip. Because this on-chip approach does not rely on phase matching, it is alignment-free and does not necessitate complex postprocessing of the images. We emphasize the utility of this technique through chemically selective MIR imaging of polymers and biological samples, including MIR videos of moving targets, physical processes and live nematodes. Introduction Many fundamental molecular vibrations have energies in the mid-infrared (MIR) window, a wavelength region that stretches from approximately 2 to 10 μm. For this reason, the MIR range is of particular interest for spectroscopic imaging. The ability to generate images with chemical selectivity is of direct relevance to a myriad of fields, including the implementation of MIR-based imaging for biomedical mapping of tissues 1 , 2 , 3 , inspection of industrial ceramics 4 , stand-off detection of materials 5 , mineral sensing 6 , 7 , and environmental monitoring 8 , among others. Given its unique analytical capabilities, it is perhaps surprising that MIR-based imaging is not a more widely adopted technology for chemical mapping. The relatively scarce implementation of MIR imaging has been due in part to the lack of bright and affordable light sources in this range, although recent developments in MIR light source technology have largely overcome this problem 9 , 10 , 11 . Nonetheless, a remaining limitation is the performance and high cost of MIR cameras. Current cameras are based on low bandgap materials, such as HgCdTe (MCT) or InSb, which inherently suffer from thermally excited electronic noise 12 . Cryogenic cooling helps to suppress this noise, but it renders the MIR camera a much less practical and affordable detector than mature Si-based detectors for the visible and near-IR. Electronically cooled MCT detectors are a promising alternative, although the matrices of such detector arrays are still of low density and are not yet on par with high-definition Si-based CCD cameras. Recognizing the attractive features of Si-based cameras, researchers have developed several strategies that aim to convert information from the MIR range into the visible/NIR range, thus making it possible to indirectly capture MIR signatures with a Si detector. A very recent development is the use of an entangled MIR/visible photon pair, which allows MIR imaging and microscopy by utilizing nonlinear interferometry to detect, on a Si-based camera, visible photons entangled with their MIR counterparts 13 , 14 , 15 . Another strategy accomplishes the MIR-to-visible conversion by using a nonlinear optical (NLO) response of the sample, such as in third-order sum-frequency generation (TSFG) microscopy 16 . Photothermal imaging, which probes the MIR-induced changes in the sample with a secondary visible beam, is another example of this approach 17 , 18 , 19 , 20 , 21 .
An alternative but related method is the acoustic detection of the MIR photothermal effect, which has recently been demonstrated 22 . Another technique uses a nonlinear optical crystal placed after the sample to up-convert the MIR radiation with an additional pump beam through the process of sum-frequency generation (SFG) 23 , 24 , 25 , 26 , 27 , 28 , 29 . The visible/NIR radiation produced can be efficiently registered with a high bandgap semiconductor detector. Elegant video-rate MIR up-conversion imaging has recently been accomplished with a Si-based camera at room temperature, offering an attractive alternative to imaging with MCT focal plane arrays 30 . A possible downside of SFG up-conversion techniques is the requirement of phase matching between the MIR radiation and the pump beam in the NLO medium. This requirement implies that the crystal must be rotated to acquire the multiple projections needed to capture a single image, and that each measured frame must be postprocessed for image reconstruction. An alternative to utilizing an optical nonlinearity of the sample or a dedicated conversion crystal for indirect MIR detection (SFG up-conversion) is the use of the NLO properties of the detector itself. In particular, the process of non-degenerate two-photon absorption (NTA) in wide bandgap semiconductor materials has been shown to permit the detection of MIR radiation at room temperature with the help of an additional visible or NIR probe beam 31 , 32 , 33 , 34 . In NTA, the signal scales linearly with the MIR intensity, with detection sensitivities that rival those of cooled MCT detectors 31 . Compared with SFG-based up-conversion, NTA does not depend on phase matching and avoids the need for an NLO crystal altogether, offering a much simpler detection strategy. Moreover, the nonlinear absorption coefficient drastically increases with the energy ratio of the interacting photons 35 , 36 , 37 , 38 , 39 , allowing detection over multiple spectral octaves. Although NTA has been shown to enable efficient MIR detection with single pixel detectors, its advantages have not yet been translated to imaging with efficient Si-based cameras. Here, we report rapid, chemically selective MIR imaging using NTA in a standard CCD camera at room temperature. The nonlinear absorption enhancement for direct-band semiconductors has been modelled with allowed-forbidden transitions between two parabolic bands 37 , 38 , 39 , 40 . The nonlinear absorption coefficient α₂ for photon energies ħω_p and ħω_MIR can be written as 40 :

$$\alpha_2(\omega_{\mathrm{p}},\omega_{\mathrm{MIR}}) = K\,\frac{\sqrt{E_{\mathrm{p}}}}{n_{\mathrm{p}}\,n_{\mathrm{MIR}}\,E_{\mathrm{g}}^{3}}\,F(x_{\mathrm{p}},x_{\mathrm{MIR}})$$

$$F = \frac{\left(x_{\mathrm{p}} + x_{\mathrm{MIR}} - 1\right)^{3/2}}{2^{7}\,x_{\mathrm{p}}\,x_{\mathrm{MIR}}^{2}}\left(\frac{1}{x_{\mathrm{p}}} + \frac{1}{x_{\mathrm{MIR}}}\right)^{2},\qquad x_{\mathrm{p}} = \frac{\hbar\omega_{\mathrm{p}}}{E_{\mathrm{g}}},\quad x_{\mathrm{MIR}} = \frac{\hbar\omega_{\mathrm{MIR}}}{E_{\mathrm{g}}}$$ (1)

where E_p is the Kane energy parameter, n_p and n_MIR are the refractive indices, and K is a material-independent constant. The function F accounts for the change in the nonlinear absorption as the ratio between the pump and MIR photon energies is adjusted, with dramatic enhancements when the pump energy is tuned closer to the bandgap energy E_g.
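The enhancement factor F in Eq. (1) is easy to evaluate numerically. The snippet below transcribes F directly and plugs in the photon energies used later in the paper (1480 nm pump, 3388 nm MIR) with an assumed Si bandgap of ~1.1 eV; note that Eq. (1) was derived for direct-gap semiconductors, so for Si this serves only to illustrate the trend.

```python
E_G = 1.1  # eV, approximate Si bandgap (assumption; the text quotes ~1.1 eV)

def photon_ev(wavelength_nm: float) -> float:
    """Photon energy in eV from wavelength in nm (hc ~ 1239.84 eV*nm)."""
    return 1239.84 / wavelength_nm

def enhancement_F(x_p: float, x_mir: float) -> float:
    """F(x_p, x_MIR) from Eq. (1); zero if the photon pair is below bandgap."""
    if x_p + x_mir <= 1:
        return 0.0
    return ((x_p + x_mir - 1) ** 1.5 / (2 ** 7 * x_p * x_mir ** 2)
            * (1 / x_p + 1 / x_mir) ** 2)

x_pump = photon_ev(1480) / E_G  # 1480 nm pump
x_mir = photon_ev(3388) / E_G   # 3388 nm MIR (photon energy ratio ~2.2)

print("non-degenerate F:", enhancement_F(x_pump, x_mir))   # ~0.05
print("degenerate F:    ", enhancement_F(x_pump, x_pump))  # ~0.05, comparable
# Pushing the pump closer to the bandgap, e.g. x_p = 0.95 with x_MIR = 0.12,
# raises F by more than an order of magnitude - the non-degenerate enhancement.
print("strongly non-degenerate F:", enhancement_F(0.95, 0.12))  # ~0.9
```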
For an indirect bandgap semiconductor, such as Si, optical transitions can be understood as a nonlinear process that involves three interacting particles: two photons and a phonon. Several models have been considered to describe multiphoton absorption in Si, including earlier “forbidden-forbidden” models 41 and the more recently suggested “allowed-forbidden” and “allowed-allowed” pathways 42 . The latter two models agree well with degenerate absorption experiments 43 . For the case of NTA, experiments demonstrate enhancement behaviour similar to that seen in direct-bandgap semiconductors 44 , 45 , with the “allowed-allowed” pathways providing the best description 46 . The modest measured and derived nonlinear absorption coefficients of only a few cm/GW have classified Si as a rather inefficient material for NTA. For this reason, attempts to develop MIR detection strategies based on Si detectors have been scarce 46 . In this work, we show that despite previous concerns, detecting MIR radiation through NTA in silicon is not only feasible but readily provides a very practical approach for MIR imaging with standard cameras. Results MIR detection with a Si photodiode We first discuss the utility of Si as a MIR NTA detector using picosecond pulses of low peak intensities. In Fig. 1a , we compare the linear absorption of 9708 cm⁻¹ (1030 nm) photons by a standard Si photodiode with NTA for a 2952 cm⁻¹ (3388 nm) MIR and 6756 cm⁻¹ (1480 nm) pump pulse pair. Since the 1030 nm photon energy exceeds the Si bandgap energy (E_g ~1.1 eV (1100 nm)), strong one-photon absorption can be expected. Based on this measurement, the estimated responsivity is R = 0.2 A/W, close to the reported response of Si detectors at 1030 nm. In the NTA experiment, the MIR and pump photon energies add up to the same energy (9708 cm⁻¹) as in the one-photon experiment, and thus we may expect a response in Si, albeit a weaker one. The photon energy ratio here is ω_pump/ω_MIR = 2.2. The NTA response is shown in orange and is compared with the degenerate two-photon absorption of the pump pulse. As expected, the NTA signal scales linearly with the NIR pulse energy. Previously reported values of α_2d ~2 cm/GW 43 for the degenerate case and α_2n ~5 cm/GW 46 for the non-degenerate case with comparable photon ratios agree well with our observations. Note that, with a 0.65 nJ MIR pulse at 3388 nm, there is a regime where the NTA is stronger than the degenerate two-photon absorption of the pump. Fig. 1: Detection of weak infrared radiation via non-degenerate two-photon absorption in a Si photodiode. a Linear absorption (blue) as a function of the pulse energy at 1030 nm, non-degenerate two-photon absorption (orange) as a function of the pump pulse energy at 1480 nm and degenerate two-photon absorption (purple) as a function of the pump pulse energy at 1480 nm. For the non-degenerate curve, the MIR pulse energy at 3388 nm was set at 0.65 nJ. Inset: proposed scheme of photon absorption in Si. b Full dynamic range for MIR detection with a detection floor of 200 fJ picosecond pulse energy for the given detector parameters. Note that 1 V on the y-axis corresponds to 8.2 × 10⁴ electrons/pulse. We next studied the sensitivity of MIR detection through NTA in Si. In Fig. 1b , the detected NTA signal is plotted as a function of the MIR pulse energy (at 2952 cm⁻¹) for various energies of the pump pulse.
For these experiments, especially at higher NIR peak intensities, the degenerate contribution of the pump pulse was subtracted using modulation of the MIR beam and lock-in detection. We observe that the signal scales linearly with the MIR pulse energy for all settings. The minimum detectable MIR picosecond pulse energy is ~200 fJ using rather modest NIR pump peak intensities. In previous work with a direct large-bandgap GaN detector, a detection limit of 100 pJ was reported using femtosecond pulses and a photon energy ratio >10 31 . Here, we observe higher detection sensitivities in Si while using picosecond pulses and a much lower photon energy ratio. Such high detection sensitivities are remarkable and are due in part to the favourable pulse repetition rate (76 MHz) used in the current experiment, offering much better sampling than the kHz pulse repetition rates used previously. The strategy used here offers superior sensitivity, detecting MIR peak intensities of 20 W/cm² (with a 0.09 MW/cm² pump pulse at 1480 nm), four orders of magnitude smaller than the 0.2 MW/cm² (with a 1.9 GW/cm² pump pulse at 390 nm) reported previously 31 . Given that the enhancement scales with the photon energy ratio, we may expect even greater sensitivities for experiments with higher pump photon energies and lower MIR photon energies, with a projected detection floor as low as a few tens of femtojoules (1 W/cm²). MIR spectroscopy with a single pixel Si detector As an example of the utility of MIR detection with a Si photodetector, we performed an MIR absorption spectroscopy experiment on a dimethyl sulfoxide (DMSO) film a few tens of microns thick. For this purpose, we spectrally scanned the MIR energy in the 2750–3150 cm⁻¹ range and detected the MIR transmission via NTA on a Si photodiode. The spectral resolution is determined by the spectral width of the picosecond pulse (~5 cm⁻¹). For these experiments, the MIR beam was kept at 15 mW (~10 kW/cm² peak intensity), while the NIR pump beam was set to 100 mW (66 kW/cm²). Because the pump and MIR pulses are parametrically generated by the same source, there is no temporal walk-off on the picosecond timescale while performing the scan. The resulting DMSO absorption spectrum shows the characteristic lines associated with the symmetric and asymmetric C–H stretching modes 47 , corroborating the Fourier transform IR (FTIR) absorption spectrum (Fig. 2 , see the section “Materials and methods”). Fig. 2: Absorption spectrum of dimethyl sulfoxide (DMSO) using non-degenerate two-photon detection for measuring the transmitted MIR radiation. The results are in excellent agreement with the spectrum obtained with conventional ATR-FTIR of bulk DMSO. MIR imaging through on-chip NTA in a CCD camera Given the excellent NTA performance of a single pixel Si detector, we next explored the feasibility of MIR imaging through direct on-chip NTA in a Si-based CCD camera. Figure 3 provides a schematic representation of the MIR wide-field imaging system based on NTA. The pump and MIR beams are generated by a 4 ps, 76 MHz optical parametric oscillator (OPO) and are expanded to a beam diameter of ~3 mm. The MIR arm contains the sample and a 100 mm CaF₂ lens that maps the image in a 1:1 fashion onto the CCD sensor. The pump beam is spatially and temporally overlapped with the MIR beam with the aid of a dichroic mirror, so that both beams are coincident on the CCD chip.
Note that phase matching is not important for NTA, implying that the angle of incidence of the pump beam can be adjusted freely. Here, we use a conventional, Peltier-cooled CCD camera (Clara, Andor, Northern Ireland) featuring 6.45 × 6.45 μm² pixels in a 1392 × 1040 array. The current magnification and effective numerical aperture of the imaging lens (NA = 0.015) provide an image with ~100 μm resolution, corresponding to ~20 pixels on the camera. Though not the ultimate goal of the current experiments, better spatial resolution can easily be achieved using focusing systems with a higher numerical aperture. Fig. 3 Schematic of a wide-field MIR imaging system based on non-degenerate two-photon absorption in a Si-based CCD camera. In Fig. 4a , we show the NTA image of the MIR beam projected onto the CCD sensor using a 1 s exposure time. The degenerate background signal has been subtracted to reveal solely the MIR contribution. With the current experimental arrangement, the background has to be measured only once for a given NIR pump intensity and can be subtracted automatically during imaging, requiring no further postprocessing. See Fig. S2 for a direct comparison of the NIR degenerate background with the NTA signal contribution. Here, we used peak intensities of ~1.5 kW/cm² for the MIR beam and ~1.4 kW/cm² for the NIR pump beam. Under these conditions, each camera pixel receives pulse energies of only a few femtojoules. In Fig. 4b , we show the same MIR beam with a razor blade blocking half of the beam, emphasizing the attained MIR contrast. The fringing at the blade interface is a direct consequence of light diffraction at the step edge. More images of test targets are provided in the Supplementary Information (Fig. S3 ). Fig. 4: Visualizing the MIR beam profile on a CCD camera. a Image of the MIR (3394 nm) beam profile using a 1478 nm pump pulse. b Image of a razor blade covering half of the MIR region. The cross section is shown at the top of the panel. Error function analysis shows that the resolution is ~15 pixels (~100 μm) under the current conditions. For the current experimental conditions, we find that MIR intensity changes on the order of 10⁻² OD in the image are easily discernible, even with exposure times shorter than 1 s. To demonstrate the chemical imaging capabilities, we performed MIR imaging on an ~150-μm thick cellulose acetate sheet commonly used as a transparency for laser jet printing. Figure 5a depicts the FTIR spectrum of cellulose acetate in the 2500–3500 cm⁻¹ range, showing a clear spectral feature due to C–H stretch vibrational modes. In Fig. 5b , a strip of the cellulose acetate sheet is imaged at 3078 cm⁻¹, off-resonant with the C–H stretching vibration. Transmission through the sheet is high because of the lack of absorption. To highlight the contrast, the letters “C–H” have been printed with black ink directly onto the material, providing a mask with limited transmission throughout the 2500–3500 cm⁻¹ range. When tuning into the CH-mode resonance (Fig. 5c , 3001 cm⁻¹), the transmission is seen to decrease, resulting in lower contrast between the ink and the film. When the MIR is tuned to the maximum of the absorption line (Fig. 5d , 2949 cm⁻¹), the limited transmission through the film completely eliminates the ink/film contrast. The relative magnitude of MIR absorption, extracted from the images, maps directly onto the absorption spectrum in Fig. 5a , demonstrating quantitative imaging capabilities.
The observed contrast is based on a rather modest absorption difference of only 7 × 10⁻² OD. More examples of MIR imaging of printed cellulose acetate samples can be found in the Supplementary Information (Fig. S4 ). Fig. 5: Spectral imaging of a 150-μm thick cellulose acetate film. The printed letters serve as a mask that blocks broadband radiation. a FTIR transmission spectrum. MIR image taken at b an off-resonance energy, c the high energy side of the absorption maximum and d the absorption maximum. The exposure time for all images is 1 s. With the wide-field MIR imaging capabilities thus established, we highlight several examples of chemical imaging of various materials. To suppress contrast due to refractive index differences, we suspended the materials in (vibrationally non-resonant) D₂O to reveal the true absorption contrast. Figure 6a depicts the interface between D₂O and an ~20-μm thick polydimethylsiloxane film, a silicon-based organic polymer commonly used as vacuum grease. The difference between the images taken on and off resonance with the methyl stretching mode reveals clear chemical contrast. Note that the boundary between the polydimethylsiloxane film and D₂O is evident due to light scattering at the interface. Similarly, in Fig. 6b , chemical contrast is evident when tuning on and off the C–H stretching resonance of a 30 μm membrane of poly(2,6-dimethylphenylene oxide-co-N-(2,6-dimethylphenylene oxide)aminopyrene), a material of considerable interest as an ion-exchange membrane. Last, we demonstrate MIR imaging of a bee’s wing in Fig. 6c , a rather complex natural structure that is rich in chitin. The chitin MIR spectrum in the 2500–3500 cm⁻¹ range contains overlapping contributions from OH-, NH- and CH-groups, resulting in broad spectral features. The absorption difference between the 3239 and 3081 cm⁻¹ vibrational energies is ΔOD = 0.04, yet the contrast difference is still evident from the NTA MIR image. Fig. 6: MIR images of various materials accompanied by corresponding FTIR spectra. The left column shows off-resonance MIR images, whereas the middle column shows MIR images taken at an energy that corresponds with a designated absorption line. The right column displays the FTIR absorption spectra of the samples with on (orange) and off (grey) resonance frequencies indicated. a Interface between D₂O and silicone lubricant. b APPPO polymer film. c Wing of a common bee. The exposure time for all images is 1 s. MIR videography of sample dynamics The signal strength is sufficient for MIR imaging at even faster acquisition rates. In the Supplementary Information, we provide MIR imaging through NTA with a 100 ms exposure time, along with an analysis of the pixel noise (Figs. S5 – S7 ). Given that the current camera requires an additional 100 ms of readout time per frame, the effective imaging acquisition rate was pushed to 5 fps. Under these conditions, we recorded videos of several mechanical and physical processes as well as live microorganisms. First, in Video 1 (please see Supplementary Information), the real-time movement of a printed target on cellulose acetate film is demonstrated, both under vibrationally off-resonance (V1a) and resonance (V1b) conditions. Video V2 shows a live recording of the dynamics of an immersion oil droplet placed atop the CaF₂ window under vibrationally resonant conditions. The flowing droplet can be seen with clear chemical contrast in real time.
Moreover, one can observe the formation of intensity fringes near the edge of the droplet due to Fresnel diffraction and interference within the oil film, i.e., the Newton's rings effect. In Video V3 , we show NTA-based MIR imaging of live nematodes suspended in a D₂O buffer, recorded at 3381 nm (2958 cm⁻¹). The image contrast is due to absorption by the methyl stretching vibrations of protein, in addition to refractive effects. The video demonstrates that active, live nematodes can be captured in real time under the MIR illumination conditions used in NTA detection. Discussion In this work, we have shown that the principle of NTA can be extended to MIR imaging by direct on-chip two-photon absorption in a CCD camera. This principle enabled us to acquire images at 100 ms exposure times with femtojoule-level picosecond pulse energies per pixel, experimental conditions that allow wide-field MIR imaging of live, freely suspended organisms at reasonably high frame rates. The use of a CCD camera serves as an attractive alternative to standard MIR cameras, such as cryogenically and electronically cooled MCT detectors. NTA enables good quality MIR images without cryogenic cooling, significantly reducing the complexity and cost of the detector. In addition, NTA-enabled imaging benefits from the mature technology of Si-based cameras, offering robust and affordable detection solutions. These advantages are not at the expense of sensitivity, as previous work based on MIR femtosecond pulses has shown that NTA offers sensitivity comparable to (single pixel) MCT detectors 31 . The NTA process can be used to detect MIR radiation over a very broad range. Other than the steepness of the semiconductor’s band edge, there are no fundamental limitations on the detection spectral range 31 , 36 , 43 , 46 . In fact, higher efficiencies of the NTA process have been demonstrated when tuning towards longer MIR wavelengths near 10 μm, without the necessity of re-alignment, a sensor change or additional data processing. Unlike other recent methods for improving MIR detection with visible/NIR detectors, our method takes MIR light as its direct input. Photothermal imaging, for instance, relies on MIR-induced optical changes in the sample (expansion, refractive changes), which are subsequently probed with a visible/NIR beam. NTA-based imaging does not rely on such secondary effects in the sample due to MIR illumination, as it registers intensity changes in the MIR directly. Similarly, our new NTA imaging approach differs fundamentally from SFG up-conversion techniques. While the latter also use a visible/NIR camera to generate MIR-based images, the SFG up-conversion mechanism is based on a separate step that converts the MIR light, with the help of a pump beam, into visible radiation in an external nonlinear optical crystal 29 . To capture a full image, rapid sampling of crystal orientations must be applied to fulfil phase matching, followed by an image reconstruction step. The NTA method achieves comparable imaging performance while using MIR and pump intensities that are an order of magnitude lower. NTA avoids the external light conversion step and thus significantly simplifies the overall imaging system. Because NTA does not rely on phase matching, it can generate images in a single shot and forgoes the need for post-acquisition image reconstruction. We note that although NTA is related to degenerate two-photon absorption (DTA), none of the results reported here could be achieved through DTA.
DTA is a special case of NTA in which both incident photons have the same energy. DTA has been used for characterizing the temporal widths of NIR pulses through autocorrelation measurements in semiconductor detectors 48 , 49 ; only NTA, however, makes it possible to turn wide bandgap semiconductors into MIR detectors. The principle of MIR detection with a Si camera through NTA addresses a pertinent issue in MIR imaging, in particular as applied to microscopy. Fast MCT cameras used for this purpose feature a small number of pixels, with 128 × 128 pixel arrays being typical, which limits the mapping of large areas and imaging at high definition. Modern Si cameras benefit from much higher pixel numbers while also gaining from optimized readout electronics. Through NTA, these advantages can be ported into the field of MIR imaging, making fast imaging of larger areas possible. Although the current work shows the feasibility of NTA-based MIR imaging, the approach can be further improved to achieve even better performance. For instance, the 1.5-mm thick silica window in front of our CCD sensor, which shows significant MIR attenuation, can easily be replaced with a window of higher transmittance in this range. Moreover, modern back-illuminated scientific CMOS cameras offer higher quantum efficiencies, higher pixel numbers and faster frame rates, underlining that NTA can be conducted at much higher efficiencies than presented here. In addition, higher NTA efficiencies can be obtained with shorter pulses. The use of high-repetition-rate femtosecond pulses would allow imaging at much lower average power while maintaining high efficiency. Detector arrays based on materials other than Si are also interesting for NTA applications. GaAs, for instance, exhibits significantly higher two-photon absorption efficiencies and a steeper band edge absorption than Si, both of which are favourable for MIR detection through NTA. Finally, the practical implementation of the NTA imaging technique requires the availability of a pump beam in addition to an MIR source. Although OPO systems constitute a natural choice because of their broadly tunable, synchronized pump/idler pulse pairs, recent developments in MIR light source technology promise alternative solutions that are more compact and affordable, including efficient frequency conversion with long wavelength fibre lasers. Such developments will likely improve the practical implementation of NTA-based imaging for a wide range of applications. Materials and methods FTIR experiments Conventional infrared absorption spectra were measured using a Jasco 4700 FTIR spectrometer in both transmission and attenuated total reflection (ATR) geometries. For the ATR experiments, the Jasco ATR-Pro One accessory equipped with a diamond crystal was used. The spectra were averaged over 20 scans and were acquired with a 2 cm⁻¹ resolution, close to the resolution of the corresponding picosecond NTA experiments. Sample handling Most of the prepared samples were suspended in D₂O to suppress refractive effects and thus reveal pure absorption contrast. DMSO and silicone lubricant (Dow Corning) were obtained from Sigma-Aldrich and were used without further purification. The sample materials, including the APPPO polymer film and clipped bee wings, were immersed in D₂O and confined between hermetically sealed 1-mm thick CaF₂ windows (diameter = 1″). Experiments on cellulose acetate films were performed in air without the use of CaF₂ windows.
C. elegans were obtained from Carolina Biological. Nematodes were picked from agar plates with filter paper, immersed in a phosphate-buffered saline D₂O solution and placed between two CaF₂ windows separated by a 50-μm Teflon spacer. Non-degenerate two-photon absorption detection with a Si photodiode We used a conventional Si photodiode (FDS100, Thorlabs) with the parameters described in the Supplementary Information. The transparent window in front of the Si material was removed to improve MIR transmission. The experiments were performed in a pump-probe geometry with the setup depicted in Fig. 3 , without utilizing a separate imaging lens in the MIR arm. Both the MIR and NIR beams were focused onto the Si photodiode by an f = 100 mm broadband achromat (Trestle Optics) 50 . The NIR intensity was varied by a combination of a half-wave plate and a Glan-Thompson polarizer. The MIR intensity was controlled by another half-wave plate and a wire-grid polarizer. The polarizations of the NIR and MIR optical pulses were linear and parallel, and were kept constant throughout the experiments. The MIR beam was modulated at 160 Hz by a mechanical chopper, and the NTA signal contribution was isolated by a lock-in amplifier (SR510, Stanford Instruments). Imaging using a CCD camera A Si-based CCD camera (DR-328G-CO2-SIL Clara, Andor) was used, with the setup explained in Fig. 3 of the main text. We used a 1:1 imaging system with an f = 100 mm CaF₂ lens to project the image onto the CCD camera. For live nematode imaging, the imaging system was changed to ×2 magnification using a 4f imaging system composed of an f = 50 mm CaF₂ lens and an f = 100 mm broadband achromat (Trestle Optics) 50 .
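The once-per-pump-setting background subtraction and the OD-based contrast quantification described in the Results can be sketched in a few lines of image arithmetic. This is a minimal illustration, not the authors' code; the frame size matches the Clara CCD format, and the 7 × 10⁻² OD value matches the cellulose acetate example.

```python
import numpy as np

def nta_signal(frame: np.ndarray, degenerate_bg: np.ndarray) -> np.ndarray:
    """Remove the pump-only (degenerate) background, measured once per
    pump intensity, to isolate the MIR-driven NTA contribution."""
    return np.clip(frame - degenerate_bg, 1e-6, None)  # avoid log of <= 0

def delta_od(on_res: np.ndarray, off_res: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Pixelwise absorbance difference between two MIR tunings."""
    return -np.log10(nta_signal(on_res, bg) / nta_signal(off_res, bg))

# Synthetic example at the Clara CCD format (1392 x 1040 pixels):
rng = np.random.default_rng(0)
bg = rng.normal(100.0, 1.0, (1040, 1392))   # pump-only frame
off = bg + 50.0                              # off-resonance MIR frame
on = bg + 50.0 * 10 ** (-0.07)               # a 7e-2 OD absorber, on resonance
print(delta_od(on, off, bg).mean())          # ~0.07, as in the film example
```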
The MIR range of the electromagnetic spectrum, which roughly covers light in the wavelength regime between 3 and 10 micrometers, coincides with the energies of fundamental molecular vibrations. Utilizing this light for imaging can produce stills with chemical specificity, i.e. images with contrast derived from the chemical composition of the sample. Unfortunately, detecting MIR light is not as simple as detecting light in the visible regime. Current MIR cameras exhibit excellent sensitivity but are highly susceptible to thermal noise. In addition, the fastest MIR cameras suitable for chemical mapping have sensors with low pixel numbers, thus limiting imaging at high definition. To overcome this problem, several strategies have been developed for shifting the information carried by MIR light into the visible range, followed by efficient detection with a modern Si-based camera. Unlike MIR cameras, Si-based cameras exhibit low noise characteristics and have high pixel densities, making them more attractive candidates for high performance imaging applications. The required MIR-to-visible conversion scheme, however, can be rather complicated. Presently, the most direct way of achieving the desired color conversion is through the use of a nonlinear optical crystal. When the MIR light and an additional near-infrared (NIR) light beam are coincident in the crystal, a visible light beam is generated through the process of sum-frequency generation, or SFG for short. Although the SFG up-conversion trick works well, it is sensitive to alignment and requires numerous orientations of the crystal to produce a single MIR-derived image on the Si camera. In a new paper published in Light: Science & Applications, a team of scientists from the University of California, Irvine, describes a simple method for detecting MIR images with a Si camera. Instead of using the optical nonlinearity of a crystal, they used the nonlinear optical properties of the Si chip itself to enable a MIR-specific response in the camera. In particular, they used the process of non-degenerate two-photon absorption (NTA), which, with the help of an additional NIR 'pump' beam, triggers the generation of photo-induced charge carriers in Si when the MIR light illuminates the sensor. Compared to SFG up-conversion, the NTA method avoids the use of nonlinear up-conversion crystals altogether and is virtually free of alignment artifacts, making MIR imaging with Si-based cameras significantly simpler. The team, led by Dr. Dmitry Fishman and Dr. Eric Potma, first established that Si is a material suitable for MIR detection through NTA. Using MIR light with pulse energies in the femtojoule (fJ, 10⁻¹⁵ J) range, they found that NTA in silicon is sufficiently efficient for detecting MIR. This principle enabled them to perform vibrational spectroscopy measurements of organic liquids employing just a simple Si photodiode as the detector. The team then replaced the photodiode with a charge-coupled device (CCD) camera, which also uses silicon as the photosensitive material. Through NTA, they were able to capture MIR-derived images on a 1392 × 1040 pixel sensor at 100 ms exposure times, yielding chemically selective images of several polymer and biological materials as well as living nematodes. Despite using technology not specifically optimized for NTA, the team was able to detect small (10⁻²) changes in optical density (OD) in the image.
"We are excited to offer this new detection strategy to those who use MIR light for imaging," says David Knez, one of the team members. "We have high hopes that the simplicity and versatility of this approach allows for the broad adoption and development of the technology." Adding that NTA may speed up analysis in a wide variety of fields, such as pharmaceutical quality assurance, geologic mineral sampling, or microscopic inspection of biological samples.
10.1038/s41377-020-00369-6
Medicine
Study reveals 'extensive network' of industry ties with healthcare
Mapping conflict of interests: scoping review, BMJ (2021). DOI: 10.1136/bmj-2021-066576 Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj-2021-066576
https://medicalxpress.com/news/2021-11-reveals-extensive-network-industry-ties.html
Abstract Objective To identify all known ties between the medical product industry and the healthcare ecosystem. Design Scoping review. Methods From initial literature searches and expert input, a map was created to show the network of medical product industry ties across parties and activities in the healthcare ecosystem. Through a scoping review, the ties were then verified, cataloged, and characterized, with data abstracted on types of industry ties (financial, non-financial), applicable policies for conflict of interests, and publicly available data sources. Main outcome measures Presence and types of medical product industry ties to activities and parties, presence of policies for conflict of interests, and publicly available data. Results A map derived through synthesis of 538 articles from 37 countries shows an extensive network of medical product industry ties to activities and parties in the healthcare ecosystem. Key activities include research, healthcare education, guideline development, formulary selection, and clinical care. Parties include non-profit entities, the healthcare profession, the market supply chain, and government. The medical product industry has direct ties to all parties and some activities through multiple pathways; direct ties extend through interrelationships among parties and activities. The most frequently identified parties were within the healthcare profession, with individual professionals described in 422 (78%) of the included studies. More than half (303, 56%) of the publications documented medical product industry ties to research, with clinical care (156, 29%), health professional education (145, 27%), guideline development (33, 6%), and formulary selection (8, 1%) appearing less often. Policies for conflict of interests exist for some financial and a few non-financial ties; publicly available data sources seldom describe or quantify these ties. Conclusions An extensive network of medical product industry ties to activities and parties exists in the healthcare ecosystem. Policies for conflict of interests and publicly available data are lacking, suggesting that enhanced oversight and transparency are needed to protect patient care from commercial influence and to ensure public trust. Introduction In an influential 2009 report, the Institute of Medicine described a multifaceted healthcare ecosystem rife with industry influence. 1 Central to the ecosystem are healthcare providers, researchers, clinical care facilities, journals, professional societies, and other healthcare institutions and supporting organizations engaged in medicine’s core professional activities: providing beneficial care to patients, conducting valid research, and providing evidence based clinical education and guidance. In so doing, these individuals and institutions frequently collaborate with pharmaceutical, medical device, and biotechnology product manufacturers. 1 2 3 4 5 Although these for profit entities play a crucial role in the ecosystem, particularly in developing new tests and treatments, their primary objective is to ensure financial returns to shareholders. Thus, industry collaborations inevitably introduce potential commercial bias into the healthcare ecosystem. 
Absent rigorous conflict of interest oversight across the entire system, the Institute of Medicine warned that medicine’s extensive ties to the medical product industry “threaten the integrity of scientific investigations, the objectivity of medical education, the quality of patient care, and the public’s trust in medicine.” 1 Controlling conflict of interest across the ecosystem, however, requires a system level understanding of how influence can enter and circulate through it. Research on the influence of the medical product industry is voluminous and expanding, aided in no small part by new data sources such as Open Payments, 6 7 8 9 a US federal database that makes public nearly all payments to physicians and teaching hospitals by pharmaceutical, medical device, and other healthcare product manufacturers. Other lines of inquiry have begun to explore the ties between medical product companies and regulators, patient advocacy groups, and other influential parties in the healthcare ecosystem. 3 4 10 11 12 And the analytic lens has broadened to include not only financial ties but also non-financial ones, such as medical product manufacturers offering healthcare professionals research data, authorship, and other opportunities for professional advancement. 13 14 15 Yet, with few exceptions, 1 2 16 17 most analyses focus on one or two narrow types of ties between the medical product industry and a single party, such as healthcare professionals, hospitals, or journals, or a single activity, such as research, education, or clinical care. The reality is that companies take a multipronged approach to developing and marketing products, enlisting the assistance of multiple influential parties throughout the healthcare ecosystem. The US opioid epidemic, for example, provides numerous instances of pharmaceutical manufacturers strategically developing financial ties with multiple entities in the healthcare ecosystem and leveraging those ties to create secondary influences, resulting in profound patient harm. 16 17 The complex interactions evident in the case of opioids, however, are seldom documented or explored in the literature on conflict of interests. We are unaware of any study that has endeavored to identify and characterize the full extent of medical product industry ties across the healthcare ecosystem, which involves individuals and organizations that deliver healthcare, as well as politicians, regulators, supply chain entities, and others who shape the practice of medicine indirectly. The entire spectrum of direct ties, and the subsequent indirect pathways for potential influence, could have cumulative effects on patient care and public trust, and is thus important to systematically document and assess. We therefore developed an evidence based map to encompass the complex network of ties between the pharmaceutical, medical device, and biotechnology industries and the healthcare ecosystem. To identify all known pathways that could enable companies to ultimately influence patient care, we systematically explored the full range of direct industry ties, both financial and non-financial, across the ecosystem, as well as indirect ties to and from other parties and activities within the healthcare ecosystem. We also cataloged the presence of conflict of interests oversight along these routes, as well as the extent to which industry ties are transparent to regulators, the public, and other key audiences.
Our results provide a system level view of the medical product industry’s potential for influence across the healthcare ecosystem, ultimately culminating in patient care. Methods Our approach was twofold. First, we used targeted literature searches and expert input to draft a map depicting the ties between the pharmaceutical, medical device, and biotechnology industries and the key healthcare related activities and parties that shape utilization (ie, prescribing and use of pharmaceuticals, medical devices, and biotechnology products). Then we conducted a systematic scoping review to verify and refine the map and to catalog and characterize all documented industry ties across the healthcare ecosystem. Mapping We began by reviewing publications known to the research team (in particular, the Institute of Medicine’s extensive 2009 report 1 ) and cataloged all identified parties (individuals and organizations involved in healthcare, such as hospitals, prescribers, public health agencies), activities (domains of clinical inquiry, judgment, and decision making, such as research, clinical care, guideline development), and linkages among them. Using terms such as “pharmaceutical industry”, “device industry”, and “conflict of interest”, and the “similar articles” function, we then conducted a targeted search of the medical and scientific literatures (through PubMed) to document industry ties to these parties and activities, as well as any additional parties, activities, and linkages in the healthcare ecosystem. To further explore additional, poorly documented ties, we used Google to search the gray literature, business publications, and lay literature, such as newspaper and magazine articles. All investigators independently performed searches in Google until saturation was achieved, that is, until no new activities, parties, or linkages were identified. We used these findings to draft a preliminary map of the healthcare ecosystem, showing the network of ties between industry and each party and activity, as well as ties among parties and activities. Next we obtained input from an international panel of experts with broad expertise in industry ties and deep knowledge of specific parties and activities (supplementary appendix A). We selected prominent experts on industry ties to healthcare parties or domains, or both. Experts were also selected who could reflect on these problems internationally, not just in the US context. Additionally, we included experts with deep knowledge of pricing and distribution systems, as these topics seldom appear in the literature on conflict of interest. Through WebEx we conducted semistructured interviews (supplementary appendix B) with experts individually to review the map, using their feedback on an ongoing basis to further search the literature, evaluate depicted ties, identify missing ties, and refine the map accordingly. We worked iteratively, making alterations to the general approach, categorizations, and visual presentation until reaching agreement within the research team. Experts were recruited until saturation was achieved (with no further changes suggested), which occurred after eight experts had been interviewed. We also solicited additional comments and final approval from the experts by email. Finally, we worked with design experts to optimize the visual clarity of the map.
Scoping review We conducted a systematic scoping review of medical product industry ties to verify and refine the map, cataloging and characterizing documented financial and non-financial ties across the broad healthcare ecosystem. Our methods are reported according to the preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR) (supplementary appendix C). 18 Protocol Scoping reviews are not eligible for registration in PROSPERO; nonetheless, we used PROSPERO’s systematic review protocol for planning and reporting purposes (supplementary appendix D). Eligibility criteria The scoping review included qualitative and quantitative experimental and observational studies with English language full text available. Included studies documented ties between industry and a party or activity in the healthcare ecosystem. The scoping review excluded commentaries, conference presentations, abstracts, literature reviews, letters, and editorials. Information sources and search strategy Using search terms derived from our initial literature review, including “drug”, “device”, “industry influence”, “commercial support”, and “conflict of interest”, we conducted a scoping review (through PubMed, Scopus, and Embase) of the medical, scientific, and gray literatures to 31 December 2019, to systematically identify all published investigations documenting these and other emergent activities, relevant parties in the healthcare ecosystem, and linkages. We also included records from our previous manual searches that focused on poorly documented potential ties. Relevant policies for conflict of interests and publicly available data sources that appeared in our searches were collated for separate analysis. To supplement our search for the policies and transparency sources, we also gathered information from MediSpend Legislative Watch, an industry facing website that compiles summaries and links for policies for conflict of interests and transparency requirements in countries across North America, Europe, the Middle East, and Asia. 19 Study selection process All search results were imported into Covidence, an online systematic review software program. The eligibility criteria were imported as a set of codes that were used for screening. To ensure reliability, multiple reviewers (SC, MM, SZ, DK) screened all items, with differences resolved through discussion. Data items and data collection process For all included articles, we abstracted data on activities, parties, types of ties (financial, non-financial), year and location of study, type of publication (gray, peer reviewed), and funding source. The data abstraction form (supplementary appendix E) was piloted on random samples of 10 included studies and modified as needed using feedback from the team. Full data abstraction began after four rounds of pilot testing, once sufficient intercoder agreement had been obtained (93.48-100%); calculable κ statistics ranged from 0.63 to 0.95, indicating substantial to near perfect agreement. Subsequently, one of two team members (MM, SZ) abstracted each included study, with additional feedback from others (SC, DK) as needed. Methodological quality appraisal We did not appraise methodological quality or risk of bias of the included articles, which is consistent with guidance on the conduct of scoping reviews. Synthesis Microsoft Excel was used to create descriptive statistics to characterize the publications identified in our scoping review along the extracted domains.
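As context for the intercoder agreement figures reported above, the κ statistic compares observed agreement between two screeners with the agreement expected by chance. A minimal sketch in Python (the include/exclude labels below are invented for illustration and are not the study's screening data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions for ten screened records
a = ["inc", "exc", "exc", "inc", "inc", "exc", "exc", "exc", "inc", "exc"]
b = ["inc", "exc", "exc", "inc", "exc", "exc", "exc", "exc", "inc", "exc"]
print(round(cohen_kappa(a, b), 2))  # 0.78, "substantial" agreement
```

On this scale, the reported range of 0.63 to 0.95 spans the conventional "substantial" to "almost perfect" bands.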
Data from our scoping results were used to refine and verify our map, identifying, characterizing, and organizing all known pathways by which companies might influence patient care. We created additional maps to separately show industry’s financial and non-financial entry points to the system. Patient and public involvement We did not include patients or members of the public in the research, as this was beyond the study’s scope. A patient representative reviewed the manuscript after submission. Results Mapping: Industry and the healthcare ecosystem Figure 1 depicts the healthcare ecosystem, mapping the complex network of ties associated with the pharmaceutical, medical device, and biotechnology industries across the key activities and parties in the healthcare ecosystem. Beyond its direct ties to all parties and some activities, the medical product industry has numerous indirect ties across the healthcare ecosystem. Similarly, non-financial ties might reinforce or extend companies’ financial ones (fig 2 and fig 3). Fig 1 Ties between the medical product industry and healthcare ecosystem Fig 2 Pathways of financial ties between the medical product industry and healthcare ecosystem Fig 3 Pathways of non-financial ties between the medical product industry and healthcare ecosystem Relevant parties in the system operate in diverse sectors of public and private life and include non-profit entities (eg, foundations, advocacy groups), the healthcare profession (eg, journals, medical schools, individual professionals), the market supply chain (eg, payers, purchasing and distribution agents), and government (eg, public officials, regulators). The medical product industry also has direct ties to patients and prescribers (box 1 and supplementary appendix F). Notably, the prescriber category is distinct from the individual professionals category, although some clinicians might belong to both: prescriber denotes clinicians in a patient care role (eg, physicians, nurses, physician assistants, advanced nursing professionals) who directly determine utilization of pharmaceuticals, medical devices, and biotechnology products, whereas individual professionals more broadly includes clinicians, researchers, and other healthcare professionals and experts engaged in research, guideline development, formulary selection, health professional education, and other extraclinical activities.
Box 1 Definitions of terms used in the healthcare ecosystem Medical product industry Pharmaceutical, medical device, and biotechnology companies that develop and manufacture medical products used in patient care 1 Parties Public officials —elected or appointed individuals in government positions 20 Regulators —government bodies that regulate healthcare delivery or payments 21 Public health agencies —government agencies that are involved in healthcare but do not directly deliver or regulate healthcare 22 Payers —private and public health insurers 23 Purchasing and distribution agents —organizations that mediate pharmaceutical pricing, payment, and distribution (eg, pharmacy benefit managers, group purchasing organizations, wholesalers) 23 Care delivery organizations —facilities in which clinical care occurs, including hospitals, medical centers, clinics, and private practices Medical education companies —independent privately held businesses, usually for profit entities, that provide education to healthcare professionals 24 Medical schools —institutions that award degrees for doctor of medicine or doctor of osteopathic medicine and support academic activities 25 Professional societies —membership organizations consisting of, and serving the interests of, healthcare professionals of the same type (eg, nurse practitioners) or from the same specialty (eg, family practitioners); activities might include education, development of guidelines and ethical codes, lobbying and advocacy, and publishing 26 Journals —publications that report clinical and scientific information to physicians and other healthcare professionals 27 Individual professionals —clinicians, researchers, journal editors, healthcare executives, and other experts engaged in research, guideline development, formulary selection, clinical education, or other professional activities outside of clinical care Foundations —entities that support charitable activities by making grants to unrelated organizations or institutions or to individuals for scientific, educational, cultural, religious, or other charitable purposes 28 Advocacy organizations —entities that provide patient focused or caregiver focused support, advocacy, and education, often focused on a disease or set of diseases 11 Prescribers —clinicians engaged in patient care activities, including prescribing and use of medical devices and biotechnology products (eg, physicians, nurse practitioners, physician assistants) Activities Research —rigorous investigation into biology, human disease, or healthcare delivery, the results of which guide best healthcare practices 29 Health professional education —knowledge or skill acquisition related to healthcare that occurs away from patients as a free standing activity, with undergraduate, graduate, and continuing (postgraduate) components Guideline development —systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances 30 Formulary selection —the development of ranked or tiered lists of prescription drugs that are covered by a health plan or stocked by a healthcare facility; tiers typically carry different levels of cost sharing (eg, copayments or coinsurance levels) 23 31 Clinical care —clinical interaction between patient and healthcare professional Ties Financial —economic assets or monetary payments, including but not limited to consulting fees, research funds, salary, stocks, patents, licenses, gifts, meals, travel funds, educational funds, and materials and
equipment for research, education, and clinical care Non-financial —other assets, including but not limited to information (eg, advertising, literature, reprints, textbooks, and educational and training sessions), authorship, and data Many medical product industry ties to these parties are financial, involving money or items of financial value, as when companies negotiate prices with supply chain agents; purchase reprints from journals; make contributions to public officials for campaigns; provide consultancy, speaking, or key opinion leader payments to healthcare professionals; or financially support government agencies, healthcare organizations, and non-profit entities through donations, grants, or fees. Other ties are non-financial, as in companies’ direct-to-consumer advertising to patients, advertising and detailing of prescribers, unpaid professional consultancy work, or the offer of data, authorship, and other professional opportunities to clinicians and researchers. All party types have financial ties to medical product companies. Only payers and distribution agents lack additional, non-financial ties (table 1, supplementary appendix E). Table 1 Examples of parties’ and activities’ direct financial and non-financial ties to the medical product industry The healthcare ecosystem also includes five activities of clinical inquiry, judgment, and decision making at risk of commercial bias (box 1 and table 1). The medical product industry directly participates in two such activities—research and guideline development. Again, ties might be financial or non-financial, or both. For example, companies might directly fund research or guideline development and might directly provide data, content, or other non-financial assets in support of these activities. The medical product industry also has numerous indirect connections to three additional activities—formulary selection, medical education, and clinical care. We found no documentation that companies directly participate in these activities. However, they maintain extensive ties with the parties who do participate in them. For example, individuals and organizations with medical product industry ties often participate in formulary decision making, educational activities, or patient care. Similarly, linkages among activities offer companies indirect ties across the healthcare ecosystem—for example, research informs guideline development, formulary selection, and health professional education. 1 2 The clinical care activity is unique; representing the intersection between patients and prescribers, it is shaped by all other activities and is the ultimate target of industry interest. 1 58 59 Scoping review The literature search resulted in 2457 citations after elimination of duplicates (fig 4). After screening titles and abstracts and assessing full texts for eligibility, we included 538 articles for data abstraction and synthesis (table 2). The articles were published between 1980 and 2019, with half appearing after 2012. The publications spanned 37 countries, with 348 (65%) based in the United States and 190 (35%) in other geographical regions. Most of the articles (451, 84%) were peer reviewed research studies. Overall, 498 (93%) examined pharmaceutical companies, 162 (30%) studied the medical device and biotechnology industries, and 22 (4%) included all three.
Notably, nearly all articles in our analysis documented financial transactions (501 (93%)), with non-financial ties appearing less often (158 (29%)). Fig 4 Flow of articles through scoping review Table 2 Characteristics of 538 studies included in scoping review The most frequently identified parties were within the healthcare profession. Individual professionals were described in 422 (78%) of the studies, prescribers in 65 (12%), medical schools in 34 (6%), and professional societies in 31 (6%). All other parties appeared in fewer than 5% of the included studies (table 2). The medical product industry’s party ties first appeared in the literature in 1980; however, our scoping review found recent citations (2018 or later) for ties to all parties except payers and medical education companies (the most recent documentation dated from 2010 and 2008, respectively). In total, 303 (56%) of the publications documented medical product industry ties to research, with clinical care and health professional education appearing somewhat less often: 156 (29%) and 145 (27%), respectively. Ties to guideline development and formulary selection appeared in 33 (6%) and eight (1%) publications, respectively (table 2). Although these activity ties appeared in the literature as early as 1980, our scoping review also identified citations dating from 2018 or later for all activities. Oversight and transparency For the medical product industry’s direct ties to parties and activities, table 3 documents the presence or absence of conflict of interests oversight and public data, as uncovered by our scoping review or documented on MediSpend Legislative Watch. 19 Policies for medical product industry ties are widespread among healthcare professionals and organizations, with numerous national and international bodies promulgating standards for managing such exchanges. Few, however, substantively deal with non-financial ties. Government parties are subject to varying federal, state, and local policies. Medical product industry communications to patients are federally regulated in the US and New Zealand and more tightly restricted elsewhere, but no established guidelines or policies seem to deal with companies’ financial incentives to patients (eg, copay coupons). Similarly, we found no conflict of interests oversight for medical product industry exchanges with non-profit organizations or supply chain agents. Table 3 Medical product industry interactions: sources of transparency and conflict of interests oversight, by activity and party Public databases are far from universal, and the ones we identified deal exclusively with financial transactions (table 3). For example, in the US, Open Payments makes transparent most but not all payments from the medical product industry to teaching hospitals and some prescribers (physicians and dentists, with data collection expanding in 2021 to include physician assistants, nurse practitioners, clinical nurse specialists, certified registered nurse anesthetists, and anesthesiological assistants), and Open Secrets and FollowTheMoney track companies’ contributions to state and federal candidates and officials for political campaigns.
In Europe, the United Kingdom, Australia, and elsewhere, public reporting systems provide varying degrees of transparency into medical product industry payments to diverse prescribers and individual professionals (in some cases, including physician assistants, nurse practitioners, and others), care delivery organizations, health professional schools, professional societies, foundations, and advocacy organizations. We found no transparency sources for medical product industry ties to regulators, public health agencies, payers, purchasing and distribution agents, or journals. Discussion We conducted a systematic scoping review and interviewed experts to document the extensive network of medical product industry ties to activities and parties in the healthcare ecosystem. The pharmaceutical, medical device, and biotechnology industries have established numerous ties with non-profit entities, the healthcare profession, the market supply chain, and government. Beyond clinical care, critical activities in the network include research, health professional education, guideline development, and formulary selection. We found that conflict of interests oversight exists for some financial and a few non-financial ties between the medical product industry and other parties in the healthcare ecosystem, potentially leaving many interactions unregulated. Moreover, public data sources seldom describe or quantify these ties. This observed lack of conflict of interests oversight and transparency offers ample opportunities for medical product industry ties to influence diverse clinical activities, and ultimately patient care, without the public’s knowledge. Efforts by all parties are urgently needed to close these gaps to protect patient care from commercial bias and to preserve public trust. Implications Our mapping illustrates the ways in which medical product industry influence could flow down through a complex network to reach clinical care and impact patients. Companies maintain direct ties to all parties and some activities; these direct ties then potentially extend through interrelationships among parties and activities. Industry influence might accumulate or amplify as it travels through multiple pathways to reach clinical care in ways that could be completely opaque to both clinicians and patients, yet indirect ties and the cumulative effects of those ties are seldom, if ever, examined in the literature. The medical product industry’s direct financial ties to clinicians are known to influence prescribing and other activities in which the industry participates. 1 2 7 9 89 90 Our map shows how evaluating individual industry ties in isolation might underestimate the routes and magnitude of potential influence. The findings from our scoping review illustrate the breadth of medical product industry ties to the healthcare ecosystem, with studies from 37 countries spanning six continents—documenting the great scope and diversity of industry targets across the globe. At the same time, our findings highlight the outsized focus in the literature on the healthcare profession, especially on individual professionals and prescribers. This emphasis could result from the relative availability of these data through Open Payments and other public sources. By compiling and mapping the full network of the medical product industry’s reach across the healthcare ecosystem, we depict the ways in which potential influence moves well beyond the spheres of individual professionals and prescribers.
Recent examples illustrate the power and implications of the complex ties we expose. Appendix G details how opioid manufacturers provided funding and other assets to prescribers, patients, public officials, advocacy organizations, and other healthcare parties, who, in turn, pressured regulators and public health agencies to quash or undermine opioid related guidelines and regulations. 47 49 Moreover, we found no evidence that the medical product industry’s activities around opioids differed from routine company practices. Analyses of past cases of consumer harm related to medical product industry promotion, as with the drug Vioxx (rofecoxib; Merck) 91 92 and the weight loss drug fenfluramine-phentermine (American Home Products), 93 have shown a similar, multipronged strategy of outreach to numerous parties, culminating in severe patient harm. Many additional harms from industry promoted products have likely gone unrecognized or unattributed to medical product companies’ activities. These remain unexplored, but many might have entailed physical as well as social, psychological, and other negative effects on patients. 94 Furthermore, medical product industry influence could undermine healthcare equity and sustainability by driving up costs for individual patients and the healthcare system overall. 95 96 97 98 99 In the context of eroding patient trust in the healthcare system, elucidating mechanisms of undue influence is critical. 100 Our analysis and resulting map will facilitate better understanding of these pathways of potential influence and might enable regulators and the healthcare community to better protect patients and ensure public trust. Limitations of this study This study has several limitations. First, our findings are limited to medical product industry ties known by our experts or documented in the academic, gray, and lay literatures. Additional ties might yet exist, although our strategy of systematic, duplicative searching and feedback from an international panel of experts is unlikely to have missed common or important ties. Second, we cannot quantify the magnitude of medical product industry influence along pathways in our map. Third, our study documents not actual bias but the pathways for potential influence across the system. Fourth, our scoping review included older evidence that might not reflect current practices; however, half of the included papers were published after 2012, and medical product industry ties to nearly all parties and activities have even more recent documentation, so the body of evidence likely remains relevant. Finally, our review might have missed some policies for conflict of interests and publicly available data sources because we focused on ties, although we did supplement our search by consulting MediSpend Legislative Watch. 19 More research is needed to explore these issues. Conclusions The medical product industry maintains an extensive network of financial and non-financial ties with all major healthcare parties and activities. This network seems to be mostly unregulated and opaque. Although the medical product industry is a critical partner in advancing healthcare, companies must also work to maximize profits as part of their fiduciary responsibility to shareholders or owners and thus use all available means to promote products. Absent effective conflict of interests oversight, such promotion might ultimately threaten the integrity, equity, and sustainability of healthcare systems and harm individual patients.
It is therefore up to other key parties, including the healthcare profession and policy makers, to effectively manage commercial influence, protect patient safety, and ensure public trust. What is already known on this topic Most studies of conflict of interests related to pharmaceutical, medical device, and biotechnology companies have focused on one party (eg, prescribers, organizations) or activity (eg, research, education, clinical care) in the healthcare ecosystem The range and interrelationships of company ties across the healthcare ecosystem are incompletely described What this study adds The medical product industry maintains numerous ties with all major healthcare parties and activities This extensive network of ties is often unregulated and non-transparent Enhanced oversight and transparency are needed to shield patient care from commercial influence and to preserve public trust in healthcare Ethics statements Ethical approval The institutional review board at Memorial Sloan Kettering Cancer Center determined the study to be exempt (category 4 research). Data availability statement Data are available upon reasonable request to the corresponding author. Acknowledgments We thank Andrew Briggs, Anna Kaltenboeck, Aaron Kesselheim, Joel Lexchin, Barbara Mintzes, Aaron Mitchell, Daisy Smith, and Nancy Yu for serving as experts in the study; Beth Shirrell and Renee Walker of Gold Collective for design assistance; and Jennifer Hsu and Joseph Kanik for research support. Footnotes Contributors: DK, SC, BB, and PB interviewed experts. DK, SC, MM, and SZ did the scoping review. MM, SZ, SC, and DK analyzed the data. BB, SZ, SC, and DK produced the figures. SC and DK drafted the manuscript. SC, DK, MM, SZ, and PB revised the manuscript. SC and DK are the guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding: SC, PBB, and DK were supported in part by Arnold Ventures and by a grant to Memorial Sloan Kettering Cancer Center from the National Cancer Institute (P30 CA008748) for this work. The funders had no role in the design, conduct, reporting, or dissemination plans for this study. Competing interests: All authors have completed the ICMJE uniform disclosure form and declare: support from Arnold Ventures and the Memorial Sloan Kettering Cancer Center. DK’s spouse has equity interest in and serves on the scientific advisory boards of Vedanta Biosciences and Opentrons and provides consulting for Fimbrion. PBB has received personal fees from Mercer, Foundation Medicine, Grail, Morgan Stanley, NYS Rheumatology Society, Cello Health, Anthem, Magellan Health, EQRx, Meyer Cancer Center of Weill Cornell Medicine, and National Pharmaceutical Council; personal fees and non-financial support from United Rheumatology, Oppenheimer, Oncology Analytics, Kaiser Permanente Institute for Health Policy, Congressional Budget Office, America’s Health Insurance Plans, and Geisinger; and grants from Kaiser Permanente and Arnold Ventures outside the submitted work. We, the authors, would like to disclose a potential intellectual bias, in that our concern about undue medical product industry influence motivated us to conduct this study. In view of this, to ensure accurate, reproducible findings, we methodically solicited input on relevant parties, domains, and connections from an international panel of experts on conflict of interests.
We then scoured the published literature for all available documentation of these parties, domains, and connections, following rigorous standards for conducting scoping reviews. The lead authors (SC and DK) affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as originally planned have been explained. Dissemination to participants and related patient and public communities: We plan to disseminate our findings widely and have developed an interactive tool on our research group’s website to enable patients, the public, journalists, policy makers, researchers, and others to explore the flow of medical product industry assets through the healthcare ecosystem. Provenance and peer review: Not commissioned; externally peer reviewed. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial.
The medical product industry maintains an extensive network of financial and non-financial ties with all major healthcare parties and activities, reveals a study published by The BMJ today. This network seems to be mostly unregulated and opaque, and the researchers call for enhanced oversight and transparency "to shield patient care from commercial influence and to preserve public trust in healthcare." Although the medical product industry is a critical partner in advancing healthcare, particularly in developing new tests and treatments, its main objective is to ensure financial returns to shareholders. In an influential 2009 report, the Institute of Medicine described a multifaceted healthcare ecosystem rife with industry influence. Yet most studies of conflict of interests related to pharmaceutical, medical device, and biotechnology companies have focused on a single party (eg, healthcare professionals, hospitals, or journals) or a single activity (eg, research, education, or clinical care). The full extent of industry ties across the healthcare ecosystem is therefore still uncertain. To address this gap, a team of US researchers set out to identify all known ties between the medical product industry and the healthcare ecosystem. They searched the medical literature for evidence of ties between pharmaceutical, medical device, and biotechnology companies and parties (including hospitals, prescribers, and professional societies) and activities (including research, health professional education, and guideline development) in the healthcare ecosystem. Data in 538 articles from 37 countries, along with expert input, were used to create a map depicting these ties. These ties were then verified, cataloged, and characterized to ascertain the types of industry ties (financial, non-financial), applicable policies on conflict of interests, and publicly available data sources. The results show an extensive network of medical product industry ties—often unregulated and non-transparent—to all major activities and parties in the healthcare ecosystem. Key activities include research, healthcare education, guideline development, formulary selection (the ranking of prescription drugs that are covered by a health plan or stocked by a healthcare facility), and clinical care. Parties include non-profit entities (eg, foundations and advocacy groups), the healthcare profession, the market supply chain (eg, payers, purchasing and distribution agents), and government. For example, the researchers describe how opioid manufacturers provided funding and other assets to prescribers, patients, public officials, advocacy organizations, and other healthcare parties, who, in turn, pressured regulators and public health agencies to quash or undermine opioid related guidelines and regulations. And they warn that many other examples of harm from industry promoted products remain unexplored. The results show that all party types have financial ties to medical product companies, with only payers and distribution agents lacking additional, non-financial ties. They also show that policies for conflict of interests exist for some financial and a few non-financial ties, but publicly available data sources seldom describe or quantify these ties. The researchers acknowledge that their findings are limited to known or documented industry ties, and that some data might have been missed.
However, they say their strategy of systematic, duplicative searching and feedback from an international panel of experts is unlikely to have missed common or important ties. As such, they conclude: "An extensive network of medical product industry ties to activities and parties exists in the healthcare ecosystem. Policies for conflict of interests and publicly available data are lacking, suggesting that enhanced oversight and transparency are needed to protect patients from commercial influence and to ensure public trust."
10.1136/bmj-2021-066576
Earth
How the Arctic Ocean became saline
Michael Stärz et al, Threshold in North Atlantic-Arctic Ocean circulation controlled by the subsidence of the Greenland-Scotland Ridge, Nature Communications (2017). DOI: 10.1038/ncomms15681 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms15681
https://phys.org/news/2017-06-arctic-ocean-saline.html
Abstract High latitude ocean gateway changes are thought to play a key role in Cenozoic climate evolution. However, the underlying ocean dynamics are poorly understood. Here we use a fully coupled atmosphere-ocean model to investigate the effect of ocean gateway formation associated with the subsidence of the Greenland–Scotland Ridge. We find a threshold in sill depth (∼50 m) that is linked to the influence of wind mixing. Sill depth changes within the wind mixed layer establish lagoonal and estuarine conditions with limited exchange across the sill, resulting in brackish or even fresher Arctic conditions. Close to the threshold the ocean regime is highly sensitive to changes in atmospheric CO2 and the associated modulation of the hydrological cycle. For larger sill depths a bi-directional flow regime across the ridge develops, providing a baseline for the final step towards the establishment of a modern prototype North Atlantic-Arctic water exchange. Introduction The tectonic evolution of ocean gateways and CO2 changes are key controls of Cenozoic (from 65 Myr ago until present) climate change and ocean circulation. The last 65 Myr of Earth's history are characterized by a gradual long-term cooling trend, superposed by relatively abrupt climate changes that occurred on much faster timescales 1, 2, 3. However, it remains a major challenge to identify to what extent tectonic events and CO2 changes controlled the different trends and climate variations. Especially during the late Eocene to early Miocene interval (∼35–16 Ma), the climate of the North Atlantic-Arctic sector is prone to instabilities 4, 5, 6, 7, 8, 9. Therein, the subsidence of the Greenland–Scotland Ridge (GSR) from subaerial conditions towards a submarine rise constitutes an active ocean gateway control of North Atlantic-Arctic water exchange 9, 10, 11, 12, 13. The long-term subsidence history of the GSR is, however, overprinted by recurrent Icelandic mantle plume activity, which causes topographic uplift in response to thermal variations in the mantle 13, 14. Periodic uplift of the seafloor through the Neogene, driven by variations in mantle plume intensity, has been shown to correlate with excursions in Atlantic deep sea δ13C records, indicating a moderated southward sill overflow of Northern Component Water—a predecessor of modern North Atlantic Deep Water 13, 15. Although such long-term gateway developments are generally accepted to induce basin-scale reorganizations of adjacent ocean water masses 6, 9, 12, 13, 16, 17, the climatic impacts, as well as the associated mechanisms of climate change, remain largely elusive. Using a fully coupled Earth System Model (ESM) 18, we investigate the effect of the GSR subsidence during an interval between ∼35 and 16 Ma. In our simulations we use Miocene background climate conditions (∼20–15 Ma) as a basis and apply different GSR depths and CO2 concentrations as surrogates for different conditions during the subsidence interval. We find a non-linear impact of ocean gateway depth on water mass exchange and Arctic Ocean circulation that is mainly controlled by the effect of sill depth on mixed layer characteristics.
For gateway depths close to the depth of the mixed layer, additional simulations with different atmospheric CO2 concentrations show a modulation of the atmospheric hydrological cycle, controlling the overall Arctic salinity and the ocean gyre circulation in the sub-polar Arctic (Greenland and Norwegian Seas). The critical threshold in gateway depth is constrained by the characteristic depth of wind driven mixing, unravelling the underlying processes and allowing a theoretical assessment of the circulation system of semi-enclosed ocean basins throughout Earth's history. Results Experimental approach In this study we apply an ESM (see model description in Methods) to simulate the subsidence of the GSR by incremental changes of the mean ridge height, starting from a quasi-enclosed towards a deep Miocene topographic configuration of the Greenland–Scotland ocean gateway (Fig. 1). The ocean component of the ESM is characterized by a curvilinear grid that provides a maximum horizontal resolution of ∼30 km near the grid pole at Greenland 18. This grid spacing is too coarse to resolve mesoscale flow patterns at scales below the internal Rossby radius of deformation. However, in our model the GSR gateway is wide enough to simulate a rotationally controlled flow regime across the gateway. Given the considerable uncertainties in the GSR subsidence history, the model is set up with alternative boundary conditions (early Miocene, ∼20–15 Ma) 19 rather than present-day ones, which serve as a template for our ocean gateway studies (for details on the model scenarios and boundary conditions, see Methods). This setting includes a closed Bering Strait and Canadian Archipelago configuration, providing a single ocean gateway control through the GSR. The final model setup is further advanced by embedding a high resolution bathymetry reconstruction of the northern North Atlantic-Arctic Ocean 20 into the global topographic dataset (Fig. 1). Figure 1: Geographical settings of Miocene topography circa 20 to 15 Ma. (a) Global compilation of Miocene geography 19 (elevation and depth in metres); (b) a high resolution (0.5°; depth in metres) bathymetric dataset is embedded in the global setup comprising the northern North Atlantic, the subpolar Arctic (Greenland and Norwegian Seas) and the Eurasian Basin in the Arctic Ocean 20. The Arctic Ocean is defined by the central Arctic and the Greenland and Norwegian Seas, as shown by the stippled lines. The schematic circulation shows pathways of the North Atlantic Current (NAC), the Norwegian Current (NC) and the East Greenland Current (EGC). The yellow star marks the location of site 336 of the Deep Sea Drilling Project 21 at the northern flank of the GSR. Within a set of model scenarios we consider a gradual deepening of the GSR by stepwise changes (between 22 and 200 m below sea level, mbsl; see Methods and Supplementary Table 1) to study the effect of sill depth changes 21, 22 on climate and ocean circulation (Figs 2 and 3). In parallel with the GSR deepening, the corresponding salt water import across the seaway largely controls the overall salinity, baroclinity and gyre strength in the Arctic Ocean (Figs 4, 5 and 6; Supplementary Fig. 5). To analyse the impact of GSR sill depth changes, we primarily focus on the evolution of the ocean gateway circulation, the establishment of salinity (density) gradients and the gyre circulation in the Greenland and Norwegian Seas. Figure 2: Modelled climatologies of surface air temperatures.
Surface air temperatures (SAT in °C) for (a) the preindustrial, (b) the time interval spanning 35–16 Ma and (c) the SAT response to the opening of the Greenland–Scotland Ridge gateway. Figure 3: Seaway opening evolution for the last 42 Ma in the context of the Greenland–Scotland Ridge subsidence history. (a) Salinity (‰) and ocean velocity (cm s−1; velocities <0.5 cm s−1 are not shown) maps at water depths of 50 m for the model scenarios (EO) at different sill depths of the Greenland–Scotland Ridge (GSR). (b) Modelled salinity and velocity profiles across the GSR section (Fig. 1) for the preindustrial (PI) and Miocene GSR sill depths, displayed in the context of the subsidence evolution derived from DSDP site 336 (refs 21, 22). Figure 4: Non-linear response of Greenland-Scotland Ridge sill depth changes on ocean characteristics. The Greenland-Scotland Ridge (GSR) deepening (mbsl, metres below sea-level) scales with a non-linear increase in mean salinity (‰, black bars) and water mass (Sv, red bars) import into the Arctic, whereas a local maximum in gyre strength (grey dots) is obtained by the circulation in the Greenland and Norwegian Seas, as indicated by the horizontal barotropic streamfunction (Sv). Figure 5: Evolution of the subpolar Arctic gyre from restricted freshwater conditions towards a brackish and modern prototype salinity distribution. A zonal section at ∼70°N across the subpolar Arctic (Greenland and Norwegian Seas) gyre displays the salinity (colour coded; ‰) and pressure (contour lines; Pa) evolution from (a) a restricted (sill depth at 22 m) to (b) a semi-enclosed estuarine (GSR sill depth at 50 m) and (c) a bi-directional (sill depth at 200 m) to (d) a modern prototype circulation regime (sill depth at ca. 960 m) across the Greenland-Scotland Ridge gateway. Figure 6: Impact of atmospheric CO2 changes and gateway changes on vertical salinity characteristics in the subpolar Arctic. Mean salinity profiles (‰) and haloclines (δS/δz; ‰/m) of the subpolar Arctic (Greenland and Norwegian Seas) for different atmospheric CO2 levels (a,d), Greenland-Scotland Ridge (GSR) gateway sill depths (b,e) and different atmospheric CO2 levels at a limited GSR sill depth of 50 metres below sea level (c,f). Lagoonal circulation The restricted ocean gateway geometry (GSR sill depth at 22 mbsl and GSR width of ∼370 km, as compared with our standard gateway width of ∼1,300 km) results in a quasi-enclosed Arctic Ocean with minor communication to the world oceans via lagoonal circulation. This circulation is characterized by a hydraulically controlled, intense uni-directional flow regime (Fig. 3b) that is sustained by a positive net balance of Arctic freshwater input (net precipitation and river runoff: +0.7 Sv). Thereby, the absence of northward ocean heat and salt transports governs near freezing-point temperatures, near basin-wide seasonal sea ice cover and the presence of ephemeral perennial sea-ice (Supplementary Figs 2, 4 and 5). The Arctic freshwater excess and the reduction of northward ocean heat and salt transport result in an Arctic ‘freshwater lake’ stage, accompanied by a regional surface air temperature drop of ∼5–9 °C and decreased precipitation in the Norwegian and Greenland Seas, as compared with the standard model climatology (Fig. 2, Supplementary Fig. 5).
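To put the modelled +0.7 Sv freshwater input in perspective, a back-of-envelope sketch of the basin replacement timescale; the Arctic Ocean volume used here is an assumed, roughly modern value, not a number from the paper:

```python
# Rough replacement timescale of a quasi-enclosed Arctic by freshwater input.
SV = 1.0e6                   # 1 sverdrup = 1e6 m^3/s
freshwater_in = 0.7 * SV     # modelled net precipitation + runoff (m^3/s)
arctic_volume = 1.9e16       # assumed, roughly modern Arctic Ocean volume (m^3)

seconds_per_year = 3.156e7
tau = arctic_volume / freshwater_in / seconds_per_year
print(f"replacement timescale ~ {tau:,.0f} years")  # ~860 years
```

On geological timescales this is effectively instantaneous, which illustrates why, without a salt import across the sill, the restricted basin relaxes towards the ‘freshwater lake’ stage described above.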
In this setting the GSR operates as an oceanographic barrier that steers major parts of the North Atlantic Current (NAC) along the isobaths towards the Irminger and Labrador Seas (Fig. 3a). The absence of southern-sourced salty waters, which would normally be transported by the analogue of the modern Norwegian Current (NC), inhibits the development of pronounced vertical and horizontal salinity gradients in the Arctic Ocean. Without such salinity gradients, as is the case in the restricted Arctic freshwater environment, the prevailing barotropic mode precludes a dynamic ocean regime, because salinity-driven density and pressure gradient forces are minor (Fig. 3). To highlight the relevance of vertical and horizontal salinity gradients in driving Arctic Ocean dynamics, we ran an additional model sensitivity study assuming Arctic water masses of constant salinity (28‰; here the salinity-driven part of the density calculation is kept constant, but pressure- and temperature-related density changes are taken into account). As shown by this sensitivity study, the absence of salinity contrasts minimizes pressure gradients, which then fail to balance the wind-driven Ekman transport; the resulting baroclinic-geostrophic imbalance leads to a collapse of the Arctic Ocean circulation (Supplementary Fig. 6). A more quantitative measure of the dynamics in the Norwegian and Greenland Seas is the gyre strength, expressed in terms of the horizontal barotropic streamfunction (vertically integrated water mass transport), which reveals a relatively weak gyre of −13 Sv within the Arctic freshwater environment. Pressure gradient forces, which are typically associated with ocean baroclinity, are largely impeded, and the gyre strength is therefore strongly suppressed. The remaining gyre dynamics within the quasi-enclosed Arctic Ocean basin are mainly forced by wind stress. Semi-enclosed estuarine circulation In general, progressive GSR gateway deepening from 22 m to ∼80 m enables the northward penetration of dense North Atlantic waters via near-bottom flow across a shallow GSR sill, establishing a semi-enclosed estuarine circulation. For characteristic gateway depths within the typical depth range of the Ekman layer, the ingress of North Atlantic water to the Greenland and Norwegian Seas is constrained by the opposed Arctic outflow in the surface mixed layer. Thereby, frictional processes at the bottom of the gateway and internal friction between the two water masses limit the inflow of North Atlantic water to the Arctic Ocean. The inflow of Atlantic water induces a salinisation process towards a brackish Arctic Ocean regime (Fig. 3a,b, Supplementary Fig. 5). As a result of ocean salt exchange across the GSR gateway (Fig. 4) and net Arctic freshwater input via the atmospheric hydrological cycle (net precipitation and river runoff), a vertical Arctic salinity gradient and halocline are established for the first time (Fig. 6b,e). The location of the strongest vertical salinity (density) gradient (Fig. 6) defines the depth of the halocline (a similar depth of the pycnocline is shown in Supplementary Fig. 14). The formation of horizontal and vertical salinity gradients invigorates the Arctic gyre circulation, which follows isolines of salinity. The highest surface densities are generated by Ekman mixing and outcropping isopycnals in the centre of the gyre (Fig. 5).
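The gyre strength quoted in this section (−13 Sv here, −28 Sv below) is expressed as the horizontal barotropic streamfunction. A minimal sketch of the diagnostic on a regular Cartesian grid with synthetic velocities (the model's own computation on its curvilinear grid is more involved, and sign conventions vary):

```python
import numpy as np

def barotropic_streamfunction(u, dz, dy):
    """Streamfunction from the vertically integrated zonal velocity.

    u  : zonal velocity, shape (nz, ny, nx), in m/s
    dz : layer thicknesses, shape (nz,), in m
    dy : meridional grid spacing, in m
    Returns psi in sverdrups, integrated northwards from the southern boundary.
    """
    U = np.tensordot(dz, u, axes=(0, 0))     # depth integral, (ny, nx), m^2/s
    psi = -np.cumsum(U, axis=0) * dy         # south -> north integration
    return psi / 1e6                         # m^3/s -> Sv

# Synthetic single-gyre flow on a 40 x 40 column grid with ten 50 m layers
nz, ny, nx = 10, 40, 40
dz = np.full(nz, 50.0)
y = np.linspace(0.0, np.pi, ny)
u = 0.1 * np.cos(y)[None, :, None] * np.ones((nz, ny, nx))
psi = barotropic_streamfunction(u, dz, dy=25e3)
print(f"gyre strength ~ {psi.min():.1f} Sv")  # most negative value marks the gyre core
```

A negative extremum of psi, as in the −13 Sv and −28 Sv values above, marks a cyclonic gyre.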
To maintain this gyre circulation, a corresponding horizontal density gradient must be established to compensate the wind-driven Ekman transport (baroclinic geostrophic balance) 23. Wind stress and the associated Ekman mixing (generally operating in the upper ca. 40–200 m of the ocean) 23 induce wind stirring and relative buoyancy of dense subsurface waters, which in turn establish the corresponding horizontal pressure gradient across the gyre. Lateral entrainment of subsurface waters from the GSR gateway conserves the mass transport for this upwelling. For a detailed perspective on the dynamics of a semi-enclosed estuarine circulation, we further subdivide the circulation regime into two flow cases, defined by the depth of the ocean gateway relative to the mixed layer depth. In the first case (GSR sill depths between 22 and 50 mbsl), the limited inflow of Atlantic water to the Arctic occurs above the characteristic depth of the mixed layer (∼50 mbsl); in the second case (gateway depths between 50 and 80 mbsl), the bulk inflow of Atlantic water takes place beneath the surface mixed layer. In the first case, the lateral inflow of North Atlantic waters across the gateway perturbs the Arctic stratification, as represented by excursions of the characteristic halocline (Fig. 6e). Notably, at a GSR sill depth of 50 mbsl, the gyre strength approaches its maximum of −28 Sv, matching the modelled halocline (Fig. 6d,e) and the depth of the surface mixed layer (∼50 mbsl; see the depth of the wind-driven upper ocean layer provided in Methods). At gateway sill depths that intersect the approximate depth of the mixed layer, the injection of salty Atlantic water just below the halocline establishes pronounced vertical salinity contrasts. This corresponds to effective Ekman pumping and steep baroclinity, yielding an intensified Arctic gyre (Fig. 4; −28 Sv). In the second case, with GSR sill depths between 50 and 80 mbsl, the vertical separation of the bulk Atlantic water inflow from the mixed layer above tends to weaken the amplitude of the halocline and therefore the baroclinic-geostrophic balance of the gyre circulation. Bi-directional to modern prototype circulation By deepening the GSR sill from 80 to 100 mbsl we identify the transition from an estuarine towards a bi-directional seaway circulation and a ventilation of the Arctic Ocean (Fig. 3). The transition regime is controlled by the depth of the halocline, which defines the interface between the surface mixed layer and the ocean layer below. The subsidence of the GSR sill from the surface mixed layer into the ocean layer underneath, as defined by the characteristic depth of the halocline (∼50 mbsl; see also the pycnocline in Supplementary Fig. 14), yields a vertical differentiation between the surface-subsurface outflow and the underlying inflow to the Arctic Ocean. At gateway depths placed well beneath the depth of the Arctic halocline, unrestricted Atlantic water inflow to the sub-polar Arctic is indicated by a pronounced, unperturbed Arctic halocline.
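Because the halocline depth, defined above as the depth of the strongest vertical salinity gradient, controls the circulation regime, a minimal sketch of how it can be located in a single salinity profile (the profile values are invented for illustration):

```python
import numpy as np

def halocline_depth(z, s):
    """Depth of the strongest vertical salinity gradient dS/dz.

    z : depths in metres (positive downward, monotonically increasing)
    s : salinity in permil at those depths
    """
    ds_dz = np.gradient(s, z)                # finite-difference dS/dz
    return z[np.argmax(np.abs(ds_dz))]

# Invented brackish profile: a fresh surface layer over saltier Atlantic water
z = np.arange(0.0, 300.0, 10.0)
s = 20.0 + 10.0 / (1.0 + np.exp(-(z - 50.0) / 15.0))  # smooth step near 50 m
print(f"halocline at ~{halocline_depth(z, s):.0f} m")  # ~50 m
```

In the model, the same diagnostic applied to basin-mean profiles yields the ∼50 mbsl interface that separates the estuarine and bi-directional regimes.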
In parallel with the establishment of an Arctic halocline (Fig. 6e) and a bi-directional circulation regime, the through-flow into the Arctic Ocean drives the reorganization from a brackish towards a ventilated Arctic salinity regime. A reduction in the strength of the Arctic gyre is compensated by the evolution of a north-south directed current system instead (Fig. 4). Progressive deepening of the GSR sill from 100 to 200 mbsl towards a deep gateway configuration of ∼960 m depth additionally strengthens the entrainment of Atlantic waters. In return, a considerably weakened Arctic gyre circulation evolves in response to a more effective cross-sectional gateway transport (Fig. 4). In contrast to a bi-directional circulation across the gateway, the final establishment of a modern prototype current system is characterized by the zonal differentiation between the northward-directed North Atlantic Current in the east and the East Greenland Current in the west (NAC versus EGC) of the Greenland and Norwegian Seas (Fig. 3). Although a modern-like deep GSR gateway configuration provides unrestricted ocean water interchange and therefore weakens the Arctic halocline, our gateway studies still obtain stronger-than-preindustrial vertical salinity contrasts (Fig. 6a). This is mainly due to relatively fresh Arctic surface waters—fed by Arctic rivers and net precipitation—balanced against relatively salty southern-sourced Atlantic water (for information on the modelled freshwater balance and salinity trends in the Arctic Ocean, see Methods). Atmospheric CO2 controls on the Arctic Ocean regime To further test the sensitivity of stratification and gyre strength in the brackish salinity regime (GSR sill depth at 50 m), we focus on the effect of different atmospheric CO2 concentrations, capturing a wide range of Eocene to Miocene greenhouse gas variations (∼278–1,200 p.p.m. in the atmosphere, refs 24, 25, 26; Fig. 7). We therefore additionally investigate the model scenario at the critical GSR depth (∼50 mbsl) for a variety of atmospheric CO2 concentrations (278, 450, 600 and 840 p.p.m.). The 278 p.p.m. case reflects the climate sensitivity at preindustrial CO2 levels, whereas the standard level of 450 p.p.m. represents the modelled climatology (Fig. 2b) that was also used for the gateway studies presented above. Figure 7: North Atlantic-Arctic circulation regime modulated by atmospheric CO2 levels at critical Greenland-Scotland Ridge sill depths for circa 36 to 31.5 Ma. (a) Salinity (‰) and ocean velocity (cm s−1; velocities <0.5 cm s−1 are not shown) maps at water depths of 50 metres below sea-level (mbsl) for model scenarios (sill depth at ∼50 mbsl) at 278, 450, 600 and 840 p.p.m. CO2 in the atmosphere, respectively. (b) Evolution of Greenland-Scotland Ridge gateway circulation and different atmospheric CO2 levels set into the context of the reconstructed CO2 proxy history (error bars show documented uncertainties) 25, 26. At fixed gateway depths (GSR sill depth at 50 mbsl), providing a semi-enclosed estuarine circulation, we find that CO2, acting via the atmospheric hydrological cycle, modulates the strength of the Arctic gyre circulation (Supplementary Fig. 7). Elevated atmospheric CO2 levels increase the Arctic freshwater budget (Supplementary Table 2), and a more accentuated halocline is established in the Arctic Ocean (Fig. 6).
Increasing freshwater excess in combination with reduced Atlantic water inflow progressively shifts Arctic salinities towards fresher conditions. Interestingly, at a fixed gateway depth of ∼50 mbsl the standard CO2 case (450 p.p.m.) reveals the maximum gyre strength. In this CO2 scenario in particular, a pronounced Arctic halocline (pycnocline) provides pronounced baroclinic forcing to balance the wind stress (for further information on the effect of atmospheric CO2 changes on the brackish Arctic Ocean, see Methods). Discussion The modelled climate shows distinct global warming (Fig. 2) that matches the global mean temperature reconstruction suggesting a +6 °C increase of surface temperatures 27 with respect to preindustrial conditions. Apart from topographic height reduction associated with the lapse rate, the major warming anomalies compared with preindustrial (Supplementary Fig. 1) are related to changes in atmospheric CO2 and changes in land surface characteristics (land albedo, potential evaporation over land) that affect the global energy balance 28, 29 (planetary albedo, effective longwave emissivity). Our modelled Paleogene climate is characterized by warm background conditions and exhibits a sensitive surface air temperature response to CO2 perturbations, governed by climate feedbacks such as water vapour 30 (+58% increase compared with preindustrial; Supplementary Table 2) and changes from single-year to multi-year sea-ice 28, 29, 30 (Supplementary Fig. 2). Previous investigations show that the model generally reveals a strong climate response owing to the high sensitivity of climate feedbacks, especially in the lower range of CO2 changes (between 278 and 450 p.p.m. CO2) 28, 31. Compared with preindustrial, the computed Paleogene climate shows a reduction in sea-ice volume and increases in water vapour, precipitation and river runoff (Supplementary Fig. 3 and Supplementary Table 2), consistent with precipitation records 32, suggesting a stronger Arctic Ocean freshwater balance (further information is given in the section on the modelled freshwater balance and salinity trends in the Arctic Ocean in Methods). Geological constraints on the subsidence history of the GSR from a subaerial gateway towards modern sill depths remain largely elusive. Early Deep Sea Drilling Project (DSDP) reconstructions of paleo-water depth 21, 22 at the GSR do not suggest significant tectonic activity until 36 Ma ago. Thereafter, accelerated gateway deepening across the Eocene-Oligocene transition to depth ranges of ∼200–300 mbsl is followed by a prolonged period of tectonic dormancy. The superposition of Icelandic mantle plume variability and associated seafloor uplift variations through time provides an in-detail unknown control on the GSR gateway opening. Seismic reflection profiles that transect the V-shaped Reykjanes Ridge south of Iceland offer insight into the temporal evolution of Icelandic mantle plume activity up to 55 Ma back in time 14: reconstructed mantle plume activity before initial GSR subsidence (>36 Ma), as derived from residual depth anomalies of seismic profiles, indicates a strong decline in Icelandic mantle plume activity between ∼55 and 35 Ma, but with a still subaerial ridge at the end of this period.
The subsequent significant deepening and subsidence of the GSR below sea level is overprinted by modest periodic (3–8 Ma period) mantle plume variations, which lie within the uncertainties of depth reconstructions 9, 10, 11, 12, responses in depth variations 14, 15 and sea level fluctuations 33. Further to the north of the GSR, tectonic widening of the Fram Strait constitutes an alternative candidate that complicates the overall interpretation of the gateway opening of the central Arctic Ocean 5, 20. Although the transition from anoxic towards fully oxygenated conditions found in sediment records in proximity to the North Pole suggests a ventilation control via the Fram Strait, age model interpretations remain ambiguous 5, 34, 35. As suggested by the geological subsidence model of DSDP site 336 (ref. 21), before the initial gateway deepening around 36 Ma, lagoonal circulation conveyed the Arctic freshwater excess towards the North Atlantic. For this period a relatively warm climate likely enhanced the Arctic freshwater excess, promoting a sluggish freshwater environment in the restricted Arctic Ocean. As a result of the applied boundary conditions, our model shows a barotropic Arctic Ocean that fails to establish pressure gradient forces sufficient to balance the wind-driven geostrophic circulation. Such a sluggish Arctic Ocean circulation does not support the development of contourite drift deposits 36 before the gateway opening, instead enabling a regime of ultra-slow sedimentation rates such as suggested for quasi-enclosed ocean basins 34, 35. Referring to the GSR depth record, accelerated subsidence of the GSR around 36–31 Ma initiates a semi-enclosed estuarine seaway exchange and a brackish salinity regime in the sub-polar Arctic. In response to the GSR deepening, a nonlinear salinisation process controls the stratification and the associated baroclinic geostrophic balance of the ocean circulation, which in turn coincides with initial contourite sediment drift formation in the Greenland and Norwegian Seas 12, 37. Within time periods of characteristic gateway depth levels around 50 mbsl, our model results indicate an accelerated circulation regime with the most pronounced baroclinic forces driving the gyre circulation. On the basis of the GSR depth record, our results suggest a change from a semi-enclosed estuarine towards a bidirectional circulation at ∼32 Ma, induced by subsidence of the GSR beneath the surface ocean mixed layer. Such a scenario matches the initial change in isotope records at the Walvis Ridge 9, 38—a proxy used for identifying the origin of water masses. This record indicates the contemporaneous onset of sub-polar Arctic deep water reaching the South Atlantic around 33 Ma (Supplementary Information and Supplementary Fig. 8). Accompanying the salinisation process, mixing of a δ18O-depleted ‘Arctic freshwater lake’ with the surrounding oceans implies changes in the global salinity distribution and global shifts in benthic δ18O (Supplementary Table 2), which lie within the variability of compiled isotope records 1, 2, 3. In combination, the GSR history and the model results suggest a period (∼36–32 Ma; Fig. 3b) of estuarine North Atlantic-Arctic circulation across the Eocene-Oligocene transition (∼33.8 Ma) that is characterized by remarkable CO2 variations 26, 39 and relative sea-level changes 33 (Fig. 6b and Supplementary Fig. 8).
Our results suggest that after a first-order tectonic pre-conditioning of the GSR gateway and the establishment of an estuarine circulation, atmospheric CO2 changes and glaciation-induced sea level variations may have modulated the overall salinity and gyre strength in a brackish Arctic Ocean at shorter, millennial timescales. Especially at the Eocene-Oligocene transition, the contemporaneous drop in atmospheric CO2 and relative sea level changes with respect to the GSR sill depth (for example, by Antarctic glaciation) may have partly counterbalanced the opposing effects of the Arctic freshwater balance and the Atlantic water inflow on the salinity and circulation of the Arctic Ocean. According to the GSR subsidence record 21, 22, at ∼32 Ma the GSR sill falls well below the surface–subsurface ocean interface (>∼50 mbsl) as defined by the halocline, which marks the establishment of a bidirectional seaway circulation and the development towards a present-day north-south directed EGC-NAC current system. In principle, our modelling study shows a pronounced near-surface stratification that defines the critical depth in ocean gateway circulation of a semi-enclosed Arctic Ocean basin. The dynamic exchange across the gateway is fundamentally limited to the characteristic depth of wind driven mixing. This depth is determined by the depth of frictional influence, which is controlled by the Coriolis force and local ocean conditions such as stratification and turbulent mixing. In light of our modelling results, the theoretical derivation of the depth of frictional influence restricts the critical threshold regime in ocean gateway circulation to the wind driven upper ocean layer. This theoretical and dynamical framework provides a baseline to derive critical gateway depths that are defined by the transition between a semi-enclosed estuarine and a fully ventilated ocean regime. In future studies, the presented framework provides support for interpreting high-resolution sediment records that target past climate variability in the North Atlantic-Arctic sector. Given the lack of calcareous carbonates in sediment records from the Greenland and Norwegian Seas and the Arctic Ocean, an analysis of near-bottom flow changes associated with the simulated ocean circulation regime shift, via, for example, sortable silt records or oceanic circulation reconstructions using high-resolution imaging of sedimentary structures, would be a sensible approach. Such methods could be complemented by biomarker-based reconstructions of temperature (for example, alkenone SST) and sea ice conditions (for example, IP25) to test the presented framework. Methods Model description The General Circulation Model COSMOS (community of earth system models) comprises the standardized IPCC4 model configuration, which incorporates the ocean model MPIOM 18, the ECHAM5 atmosphere model at T31 spherical resolution (∼3.75 × 3.75°) with 19 vertical levels 40 and the land surface model JSBACH including vegetation dynamics 41, 42. The ocean model is resolved at 40 unevenly spaced vertical layers and takes advantage of a curvilinear grid at an average horizontal resolution of 3 × 1.8°, which increases towards the grid poles at Greenland and Antarctica (∼30 km). High resolution in the realm of the grid poles improves the representation of detailed physical processes at locations of deep water formation, such as the Weddell, Labrador and Greenland and Norwegian Seas. The ocean model includes a Hibler-type dynamic-thermodynamic sea-ice model.
The interactive transfer of energy and fluxes between atmosphere and ocean runs without flux corrections and is handled via the coupler OASIS3 (ref. 18). Net precipitated water over land, which is not stored as snow, intercepted water or soil water, is interpreted as either surface runoff or groundwater and is redirected towards the ocean via a high-resolution river routing scheme 43. The model has been applied to scientific questions focusing on the Quaternary 29, 44, 45, 46, as well as the Neogene 28, 31, 47, 48, 49. Model boundary conditions The model setup uses state-of-the-art model boundary conditions encompassing a time period (∼23–15 Ma) within the early and middle Miocene, which we apply as a template for our North Atlantic/Arctic gateway studies. For this time period the continental ice on Antarctica as well as tectonic boundary conditions (continental distribution, land surface elevation, shelf seas, bathymetry and sediment loading) are derived from Herold et al. 19. In general the Miocene orography (Andes, Rocky Mountains, East Africa, Tibetan Plateau) and the height of the Antarctic ice sheet are reduced compared to present-day, whereas the Greenland ice sheet is absent in the Miocene setup. Ocean gateways such as the Bering Strait and the Canadian Archipelago evolved after the middle Miocene, but the Tethys through-flow and the Panama Seaway still connected the ocean basins. After the closure of the Turgay Strait during the middle/late Eocene 50, the general late Eocene to Miocene ocean gateway setting of the Arctic had been established. Into this global tectonic reconstruction we have nested a regional high-resolution bathymetric dataset comprising the middle Miocene (15 Ma) Greenland and Norwegian Seas and Eurasian Basin 20, which is adequately represented at our spatial model resolution owing to the proximity of the grid pole (Fig. 1). The Greenland-Scotland Ridge acts as an oceanographic barrier and represents the single gateway restricting the exchange of water masses/fluxes between the Arctic Ocean (including the Greenland and Norwegian Seas) and the northern North Atlantic. Because of the tectonic opening of the Atlantic basin through time, the depth (∼970 m) and width (∼1,300 km) of the GSR in the Miocene bathymetric model setup are smaller than in the preindustrial bathymetry. Further north, the Miocene bathymetric constraints also show a shallower (∼1,900 m depth) and narrower (∼500 km) Fram Strait with respect to the preindustrial bathymetry. The study of Jakobsson et al. 5 suggests that the Fram Strait progressively opened at ∼18.2–17.5 Ma, accompanied by a regional Arctic sea-level drop 51. In contrast, a more recent age model, established by Rhenium-Osmium isotope measurements, indicates that such an opening might have occurred much earlier, during the Late Eocene 35, 36. Besides conflicting age models 4, 34, 35, 52, depth reconstructions show a narrow and relatively deep (∼2,000 mbsl) Fram Strait that already existed during the Oligocene (∼30 Ma) 20. Further, sill depth variations of the Fram Strait during the Oligocene-early Miocene (∼30–20 Ma) possibly ranged between ∼2,000 and 1,500 mbsl, before the Fram Strait progressively broadened between 20 and 15 Ma (ref. 20).
Although the sill depth changes (between ∼2,000 and 1,500 mbsl) of the Fram Strait might be important for the exchange of deepwater masses 20, our study focuses on the effect of GSR sill depth changes in the range of the upper 200 mbsl. On the basis of a Late Miocene vegetation reconstruction 53, we derived physical soil characteristics, such as soil albedo and maximum water-holding field capacity, by adapting vegetation-related parameters from Stärz et al. 29. In general the global soil albedo in the Miocene setup decreases in the visible (−0.01) and near-infrared spectrum (−0.03) compared to preindustrial (PI; 0.13 and 0.21, respectively). Further, the total water-holding field capacity increases (+0.03 m) with respect to PI (0.63 m). The time interval between 35 and 16 Ma is characterized by changes in greenhouse gas concentrations. We performed several experiments with different CO2 forcing scenarios (278, 450, 600 and 840 p.p.m.) that are within the range of CO2 reconstructions 24, 25, 26, 39 (Supplementary Table 2). Model scenarios The model scenarios performed in this study are listed in Supplementary Table 1. The preindustrial control run is described in Wei and Lohmann 44. The final 100 years of each model simulation are used for analysis. We set up two model scenarios with Miocene boundary conditions as a paleo template for our experiments, starting from present-day ocean salinity and temperature fields with atmospheric CO2 levels at 278 (EO_278) and 450 (EO_450) p.p.m. Both scenarios (EO_278, EO_450) run for at least 4,000 yrs in order to minimize salinity and temperature trends in the deep ocean. The effect of tectonic gateway changes (subsidence of the Greenland-Scotland Ridge) on the ocean dynamics is investigated by means of various model scenarios. The ocean gateway sensitivity studies are performed at different height and width dimensions of the GSR. Initializing the ocean model from the final 100-yr mean of EO_450, we performed model scenarios changing the GSR sill depth to 200, 150, 100, 80, 60, 50, 40, 30 and 22 mbsl, respectively. In a further model scenario we decrease the width of the GSR ocean passage to ∼370 km at a sill depth of 22 mbsl (comprising the uppermost two vertical layers of the ocean model grid), in order to simulate the effect of an oceanographically quasi-enclosed Arctic Ocean. This strategy allows us to pursue the long-term climate response to the tectonic GSR subsidence history at timescales of millions of years and to identify potential nonlinear behaviour in this process. Further, we initialized all GSR model scenarios with an Arctic Ocean salinity of 1‰ and an ocean temperature of 0 °C in order to reach an equilibrated state after another 2,000 yrs of model integration. Apart from the gateway sensitivity studies, we performed additional experiments at different levels of atmospheric CO2 (Supplementary Table 1) reflecting a broad range of greenhouse gas concentrations representative of the Eocene to Miocene time period (Supplementary Fig. 8). Timing constraints The geological understanding of the subsidence of the Greenland-Scotland Ridge (GSR) during the Eocene towards an oceanographic rise at present water depths remains a major challenge 54. The general understanding is that the subsidence of the submarine ridge (Fig.
1) controlled the onset (∼35–33 Ma) 9, 10, 11 and long-term variability 13, 15 of the Atlantic circulation by southward overflow of Northern Component Water (NCW, the precursor of modern North Atlantic Deep Water). In general, we classify two main groups of marine reconstructions that constrain different timings of the opening of the Arctic Ocean: Early Oligocene opening Palaeontological depth estimates at DSDP site 336 (Figs 1 and 3) 21, 22 suggest that during the early Oligocene, at 32.4–27 Ma, rapid deepening of the GSR was triggered by an abrupt suppression of the Icelandic mantle plume 22, 55. This coincides with the onset of deepwater formation, as indicated by sediment drift body formation in the northern North Atlantic 10, 12, and parallels the onset (∼33 Ma) of decreasing radiogenic neodymium (Nd) isotope composites (143Nd/144Nd normalized, expressed as ɛNd in Supplementary Fig. 8) in the South Atlantic, which has been interpreted as a signal of NCW derived from the Greenland and Norwegian Seas 9. Late Oligocene/early Miocene opening Findings of a major unconformity along the western North Atlantic continental rise have been associated with tectonic controls on deep water outflow from the Greenland and Norwegian Seas during the late Oligocene to earliest Miocene 56. Moreover, the lack of cosmopolitan benthic foraminifer species 57, 58 and indications of poorly ventilated intermediate/deep water 11, as seen in Oligocene strata from the Norwegian-Greenland Seas, rather argue against an open circulation regime and point towards restricted oceanographic conditions of the Greenland and Norwegian Seas persisting late into the early Miocene 21, 22, 59. Modelled freshwater balance and salinity trends Although the model reproduces the reconstructed global Miocene temperature signal 27 (Fig. 2, Supplementary Table 2), some classic model discrepancies remain, such as relatively warm tropical and cool polar temperatures compared to the proxy records 27, 60 (Supplementary Fig. 9). As a consequence, the model scenarios might also underestimate the increase of atmospheric water transport towards the Arctic region. Although our modelled increase of freshwater transport towards the Arctic may still underestimate the import via the atmospheric hydrological cycle to a certain extent 32, the freshwater input that dominates the stratification at the ocean gateway is sufficiently well captured by our model scenarios (Supplementary Table 2). Fresh/brackish water conditions in the Arctic Ocean are caused by relatively shallow GSR sill depths (<80 mbsl) that limit the inflow of the North Atlantic Current (Supplementary Figs 4 and 5). Low salinities in this 'freshwater lake' environment of the Arctic Ocean are maintained by a net freshwater excess (0.67 Sv) that is strongly increased (268%) compared to the preindustrial Arctic freshwater balance (Supplementary Table 2). Despite the model-data uncertainties regarding the global latitudinal warming (Supplementary Fig. 9), the increase in the modelled atmospheric hydrological cycle is likely to be a robust estimate for the Cenozoic 61. The model scenarios show an increased precipitation pattern in the high northern latitudes (Supplementary Fig. 3), which is in general agreement with elevated Neogene precipitation records of the northern hemisphere 32.
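For orientation, the freshwater fluxes quoted above can be cross-checked by unit conversion alone. The sketch below converts the modelled 0.67 Sv excess to an annual volume and, for comparison, converts the modern Arctic river input of roughly 3,300 km³ per year back to Sverdrups; both numbers come from the text, and the code performs only the conversion.

```python
# Unit-check sketch for the freshwater numbers quoted above (1 Sv = 1e6 m^3 s^-1).
SECONDS_PER_YEAR = 3.1536e7
M3_PER_KM3 = 1.0e9

def sv_to_km3_per_yr(q_sv: float) -> float:
    """Convert a volume flux in Sverdrups to km^3 per year."""
    return q_sv * 1.0e6 * SECONDS_PER_YEAR / M3_PER_KM3

# Modelled Paleogene net Arctic freshwater excess:
print(f"0.67 Sv ~ {sv_to_km3_per_yr(0.67):,.0f} km^3 per year")
# Modern Arctic river input of ~3,300 km^3 per year, for comparison:
print(f"3,300 km^3/yr ~ {3300 * M3_PER_KM3 / SECONDS_PER_YEAR / 1.0e6:.2f} Sv")
```

The modelled excess of 0.67 Sv thus corresponds to roughly 21,000 km³ per year, about six times the modern riverine input of ~0.1 Sv.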
In our model scenarios the opening of a quasi-enclosed Arctic Ocean by the GSR subsidence initiates a salinisation process in the Arctic Ocean. Progressive deepening of the GSR sill (ca. 22–100 mbsl) causes a gradual, nonlinear warming and a salinity increase from fresh/brackish to modern conditions in the Arctic Ocean (Supplementary Fig. 5). A similar scenario would be the temporary presence of fresh Arctic surface waters, known as the 'Azolla event', which were controlled by limited oceanic heat/salt exchange with adjacent oceans during the Eocene 62. Wind stress Although we have pointed out the main driving mechanism, other forces related to CO2 changes, such as wind stress and large-scale ocean circulation changes (Atlantic Meridional Overturning Circulation), may also affect Arctic circulation (Supplementary Fig. 11, Supplementary Table 2). In general, the associated wind stress is essential to induce Ekman transport for the gyre circulation in the Greenland and Norwegian Seas. Compared to preindustrial, EO_450 shows strong reductions in the Greenland and Norwegian Seas wind system (Supplementary Fig. 11); however, the modern-like meridional NAC versus EGC current system is still maintained. Referring to the gateway studies, the wind field anomalies with respect to EO_450 are mainly related to increased sea ice cover, as a consequence of limited northward heat transport. However, as seen in Supplementary Fig. 11, the wind stress anomalies do not show remarkable changes among model studies using different gateway depths (model studies with GSR sill depth <100 mbsl). Therefore, based on our model studies, we conclude that both the wind stress and density-driven pressure gradients are a prerequisite for establishing a gyre circulation; variations in the gyre strength, however, are rather controlled by baroclinic effects arising from gateway depth changes. Salinity driven pressure gradient forces on gyre strength In another model sensitivity study (based on EO_GSR100) we test the impact of prescribing uniform salinities of 28‰ in an Arctic Ocean environment at a GSR sill depth of 100 mbsl. To this end, a modified version of the ocean density calculation depends on ocean temperature and pressure only, with salinity applied as a constant (28‰) 63. Without salinity gradients in the Arctic Ocean, the temperature-related part of the density calculation is not sufficient to establish the horizontal density and pressure gradients needed to drive the Arctic Ocean circulation (Supplementary Fig. 6). As a consequence the geostrophic-baroclinic balance is not maintained and the Arctic Ocean circulation system collapses. In contrast to this sensitivity study (applying uniform Arctic Ocean salinities of 28‰), the model scenario EO_GSR100 shows ocean currents that closely follow the pycnoclines, suggesting a close relationship between ocean currents and pressure gradient forces (Supplementary Fig. 6). Effect of atmospheric CO2 changes on the Arctic Ocean Based on the model scenario EO_GSR50 with the GSR sill depth at 50 m, we focus on a range of atmospheric CO2 changes (278, 450, 600, 840 p.p.m.), ranging from preindustrial CO2 (278 p.p.m.) to 840 p.p.m. in the atmosphere (refs 24, 25, 26, 39; Supplementary Fig. 7).
Compared to preindustrial, the 278 p.p.m. CO2 case (EO_GSR50_278) reveals +3.1 °C of global warming at the surface as a consequence of changes in the boundary conditions 19, 29 and an increased Arctic freshwater balance (Supplementary Table 2). Raising the atmospheric CO2 levels of the EO model scenarios results in additional warming in parallel with increased freshening in the Arctic Ocean (Supplementary Table 2). This is largely related to the effect of an enhanced Arctic freshwater balance in combination with limited gateway water mass exchange (Fig. 7 and Supplementary Fig. 5). The CO2-induced climate warming (+4.94 °C more than in the 278 p.p.m. case) drives a reinforced Arctic freshwater balance (+0.16 Sv) via the atmospheric hydrological cycle, which shifts the salinity regime further towards fresh conditions in the Arctic. The additional Arctic freshwater release that is transferred into the North Atlantic, combined with increased high-latitude warming, dampens deepwater formation and results in a consequent slowdown of the Atlantic Meridional Overturning Circulation (ca. 5 Sv compared to PI). At the GSR gateway, the additional Arctic freshwater export in combination with attenuated salt import of southern-sourced North Atlantic waters reduces the overall salinity and baroclinicity in the subpolar Arctic. As a result, at high CO2 levels (840 p.p.m.) the Arctic gyre strength is strongly reduced (ca. 7 Sv) compared to the standard CO2 (450 p.p.m.) case. Parameter change effects on the model scenarios In order to decipher the effect of CO2 changes (ΔCO2) and GSR sill depth changes (ΔGSR) with respect to the synergy term, we applied a factor separation analysis 64. The identifiers (Exp. Id) used in the following formulas refer to the list of model scenarios in Supplementary Table 1. Writing f0 for the reference experiment, fCO2 and fGSR for the experiments in which only one factor is changed, and fGSR+CO2 for the experiment in which both factors are changed, the single-factor contributions and the synergy term are
$$\Delta \mathrm{CO}_{2} = f_{\mathrm{CO}_{2}} - f_{0},\qquad \Delta \mathrm{GSR} = f_{\mathrm{GSR}} - f_{0},$$
$$\Delta \mathrm{SYN} = \Delta(\mathrm{GSR{+}CO}_{2}) - \Delta \mathrm{GSR} - \Delta \mathrm{CO}_{2} = f_{\mathrm{GSR{+}CO}_{2}} - f_{\mathrm{GSR}} - f_{\mathrm{CO}_{2}} + f_{0}.$$
An isolated decline in atmospheric CO2 (ΔCO2) results in global atmospheric cooling and a sea surface temperature drop that is most pronounced in regions associated with an increase in sea-ice cover (Supplementary Fig. 12). On the other hand, deepening of the GSR sill (ΔGSR) effectively promotes northward-directed oceanic heat transport, a change from perennial to seasonal sea ice cover and warming surface air temperatures (SATs) in the high northern latitudes. The sea surface temperature changes are dominated by atmospheric cooling rather than by the changes in heat transport associated with oceanic readjustments. Deepening the GSR sill (ΔGSR) from 50 m to the Miocene depth reconstruction of ∼960 m enables the establishment of a north/south-directed North Atlantic/East Greenland Current system. These changes cause a strong surface air temperature warming in the high northern latitudes that is most pronounced in the Greenland and Norwegian Seas (Supplementary Fig. 12). The comparison of these mono-causal impacts of GSR changes and CO2 changes (Δ(GSR+CO2)) with their combined effects via the synergy term (ΔSYN) reveals that especially the SAT anomalies in the Norwegian and Greenland Seas are dominated by the associated readjustments of the ocean circulation regime. Interestingly, in the Southern Ocean both the ΔCO2 drop and the ΔGSR deepening induce a regional SAT warming. However, the combined effect (Δ(GSR+CO2)) reveals a general cooling in the Southern Ocean, indicating strong feedbacks related to the synergy term (ΔSYN).
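The factor separation of ref. 64 decomposes the combined response into the two single-factor responses plus a synergy term. A minimal sketch with placeholder fields (not model output) is given below; the decomposition is exact by construction.

```python
import numpy as np

# Two-factor separation sketch for CO2 and GSR sill-depth changes. The 2-D
# arrays stand in for a climate field (e.g., surface air temperature) from
# four experiments; all values are placeholders, not COSMOS output.
rng = np.random.default_rng(0)
shape = (48, 96)                   # illustrative lat x lon grid
f_ref  = rng.normal(size=shape)    # reference experiment (neither factor changed)
f_co2  = f_ref + 1.0               # only CO2 changed
f_gsr  = f_ref + 0.5               # only GSR sill depth changed
f_both = f_ref + 2.0               # both factors changed together

d_co2 = f_co2 - f_ref              # isolated CO2 effect
d_gsr = f_gsr - f_ref              # isolated GSR effect
d_syn = f_both - f_co2 - f_gsr + f_ref  # synergy term

# The total response is recovered exactly from the three contributions:
assert np.allclose(f_both - f_ref, d_co2 + d_gsr + d_syn)
print(f"mean synergy: {d_syn.mean():+.2f}")
```

In this toy setup the combined response (+2.0) exceeds the sum of the single-factor responses (+1.5), so the synergy term is +0.5 everywhere, which is the kind of nonlinearity the Southern Ocean SAT comparison above diagnoses.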
Depth of the wind driven upper ocean layer The thickness of the wind driven layer can be determined by the depth of frictional influence, or approximated by the Ekman depth:
$$D_{E} = \pi\sqrt{\frac{2A_{z}}{|f|}}.$$
Here, f constitutes the Coriolis parameter, and the eddy viscosity, a parameter that describes the effect of stratification and turbulent mixing, is given by A_z (1.0 × 10⁻² m² s⁻¹). D_E is especially dependent on the choice of the eddy viscosity coefficient A_z. In the numerical ocean model within COSMOS, this parameter is computed at every time step, taking into account the speed of ocean currents, a background viscosity representing the mixing by internal wave breaking, wind-induced stirring dependent on the local static stability (stratification), and the local Richardson number 65, 66. Future studies focusing on a more realistic representation of turbulent mixing, as given in high-resolution meso-scale eddy-resolving ocean models, will provide a more detailed assessment of the upper mixed layer of the ocean in order to constrain the regime shift of the GSR gateway studies. As shown in the manuscript, the depth of frictional influence determines the halocline (pycnocline) and therefore the regime shift in gateway circulation of semi-enclosed stratified ocean basins. Global calculations of the vertical extent of the wind driven layer (except for the low latitudes close to the equator, where f becomes increasingly small) provide depth values as a function of latitude, between less than 40 m near the poles and more than 200 m at low latitudes. The GSR is located at ∼58–69°N, yielding calculated Ekman layer depths of ∼46–54 m (Supplementary Fig. 10), in agreement with mixed layer depths in the central Arctic Ocean that range between 25 and 50 m (ref. 67); a short numerical sketch of this calculation, together with the freezing-point formula below, is given at the end of the Methods. This result matches the modelled halocline depth (similar results are obtained for the thermocline and pycnocline; Supplementary Figs 13 and 14), as well as the associated baroclinic geostrophic response to GSR sill depth changes, as demonstrated by the maximum Arctic gyre strength. Extensions of the model code During sea-ice formation at the ocean surface, small amounts of salt water are incorporated into the sea ice. To account for salt water inclusions in the sea-ice matrix, the standard salinity of sea ice is prescribed at 5‰ in the COSMOS model. For computed sea surface salinities below 5‰, we set the sea-ice salinity to 1‰ in order to account for the formation of sea ice in a freshwater environment. Further, we included the dynamic computation of the freezing point temperature 68, which is otherwise fixed at −1.9 °C. This formulation integrates the effect of sea surface salinity S (evaluated at surface pressure, 1,013 hPa, where the pressure contribution is negligible) in the calculation of the freezing point temperature t_f:
$$t_{f}(S,p) = -0.0575\,S + 1.710523\times10^{-3}\,S^{3/2} - 2.154996\times10^{-4}\,S^{2} - 7.53\times10^{-4}\,p,$$
with p the pressure in dbar. Following the oceanographic opening of the Arctic Ocean gateway and the salinity evolution in the Arctic, the sea surface temperature decreases as a consequence of the salinisation process, a behaviour dominated by the calculation of the freezing point temperature (Supplementary Fig. 4). Code availability The standard model code of the 'Community Earth System Models' (COSMOS) version COSMOS-landveg r2413 (2009) is available upon request from the Max Planck Institute for Meteorology in Hamburg. Data availability The modelling data that support the findings of this study are available in Pangaea (ref. 69).
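The two quantities introduced above, the depth of frictional influence and the salinity-dependent freezing point, can be evaluated in a few lines. The sketch below assumes the classical constant-viscosity form of the Ekman depth and the standard UNESCO freezing-point polynomial (which we take to correspond to ref. 68); because the model's eddy viscosity is flow-dependent rather than constant, the resulting depths differ somewhat from the ∼46–54 m quoted in the text.

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's rotation rate (rad s^-1)

def ekman_depth(lat_deg, A_z=1.0e-2):
    """Classical depth of frictional influence, D_E = pi * sqrt(2 A_z / |f|), in m.
    A_z is a constant eddy viscosity; in COSMOS this coefficient is flow-dependent."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
    return np.pi * np.sqrt(2.0 * A_z / abs(f))

def freezing_point(S, p=0.0):
    """Seawater freezing point (deg C) from the standard UNESCO polynomial.
    S is practical salinity, p is pressure in dbar (0 at the sea surface)."""
    return -0.0575 * S + 1.710523e-3 * S**1.5 - 2.154996e-4 * S**2 - 7.53e-4 * p

for lat in (58.0, 69.0, 85.0):
    print(f"{lat:4.1f} deg N: D_E = {ekman_depth(lat):4.1f} m")
# The freezing point drops towards -1.9 deg C as the Arctic salinifies:
for S in (1.0, 5.0, 35.0):
    print(f"S = {S:4.1f} per mil: t_f = {freezing_point(S):+.2f} deg C")
```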
Additional information How to cite this article: Stärz, M. et al. Threshold in North Atlantic-Arctic Ocean circulation controlled by the subsidence of the Greenland-Scotland Ridge. Nat. Commun. 8, 15681 doi: 10.1038/ncomms15681 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The Arctic Ocean was once a gigantic freshwater lake. Only after the land bridge between Greenland and Scotland had submerged far enough did vast quantities of salt water pour in from the Atlantic. With the help of a climate model, researchers from the Alfred Wegener Institute have demonstrated how this process took place, allowing us for the first time to understand more accurately how Atlantic circulation as we know it today came about. The results of the study have now been published in the journal Nature Communications. Every year, ca. 3,300 cubic kilometres of fresh water flows into the Arctic Ocean. This is equivalent to ten percent of the total volume of water that all the world's rivers transport to the oceans per year. In the warm and humid climate of the Eocene (ca. 56 to 34 million years ago), the inflow of freshwater was probably even greater. However, in contrast to today, during that geological period there was no exchange of water with other oceans. The influx of saline Atlantic and Pacific water, which today finds its way into the Arctic Ocean from the Pacific via the Bering Strait and from the North Atlantic via the Greenland-Scotland Ridge, wasn't possible: the region that is today completely submerged was above the sea at that time. Only once the land bridge between Greenland and Scotland disappeared did the first ocean passages emerge, connecting the Arctic with the North Atlantic and making water exchange possible. Using a climate model, researchers from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) have now successfully simulated the effect of this geological transformation on the climate. In their simulations, they gradually submerged the land bridge to a depth of 200 metres. "In reality, this tectonic submersion process lasted several million years," says climate scientist Michael Stärz, first author of the study. "Interestingly, the greatest changes in the circulation patterns and characteristics of the Arctic Ocean only occurred when the land bridge had reached a depth of over 50 metres below the surface." This threshold depth corresponds to the depth of the surface mixed layer, and determines where the relatively light Arctic surface water ends and the underlying layer of inflowing North Atlantic water begins. "Only when the oceanic ridge lies below the surface mixed layer can the heavier saline water of the North Atlantic flow into the Arctic with relatively little hindrance," explains Stärz. "Once the ocean passage between Greenland and Scotland had reached this critical depth, the saline Arctic Ocean as we know it today was created." The formation of ocean passages plays a vital role in global climate history, as it leads to changes in heat transport in the ocean between the middle and polar latitudes. The theory that the Arctic Basin was once isolated is supported by the discovery of freshwater algae fossils in Eocene deep-sea sediments that were obtained during international drilling near the North Pole in 2004. What was once a land bridge now lies ca. 500 metres under the ocean and consists almost entirely of volcanic basalt. Iceland is the only section remaining above the surface.
10.1038/ncomms15681
Earth
North Atlantic currents may not be linked to Meridional Overturning Circulation
F. Li et al, Subpolar North Atlantic western boundary density anomalies and the Meridional Overturning Circulation, Nature Communications (2021). DOI: 10.1038/s41467-021-23350-2 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-23350-2
https://phys.org/news/2021-06-north-atlantic-currents-linked-meridional.html
Abstract Changes in the Atlantic Meridional Overturning Circulation, which have the potential to drive societally important climate impacts, have traditionally been linked to the strength of deep water formation in the subpolar North Atlantic. Yet there is neither clear observational evidence nor agreement among models about how changes in deep water formation influence overturning. Here, we use data from a trans-basin mooring array (OSNAP, Overturning in the Subpolar North Atlantic Program) to show that winter convection during 2014–2018 in the interior basin had minimal impact on density changes in the deep western boundary currents in the subpolar basins. Contrary to previous modeling studies, we find no discernible relationship between western boundary changes and subpolar overturning variability over the observational time scales. Our results require a reconsideration of the notion of deep western boundary changes representing overturning characteristics, with implications for constraining the source of overturning variability within and downstream of the subpolar region. Introduction The high-latitude North Atlantic is a key region in the global ocean circulation system. Strong buoyancy loss creates North Atlantic Deep Water (NADW) that subsequently spreads to other ocean basins via the Deep Western Boundary Current (DWBC) 1, 2 and interior (as opposed to DWBC) pathways 3. The formation and spreading of NADW are essential elements of the Atlantic Meridional Overturning Circulation (MOC) 4, 5. Paleoclimate studies have suggested a strong association between abrupt climate changes during the last glacial cycle and changes in MOC strength, the latter attributed to the strength and location of deep water formation in the subpolar region 6. On modern time scales, the cessation of deep water formation by winter convection in the Labrador Sea has been proposed as a potential tipping point for future climate change 7. Moreover, a recent study has suggested an emerging impact of increased freshwater export on weakening deep water formation in the subpolar region 8. Thus, deciphering relationships between the formation and export of deep water and the MOC's structure and variability is of central importance to understanding and predicting the effects of a warming climate. Recent results from a new trans-basin ocean observing system in the subpolar North Atlantic (OSNAP; Fig. 1) 9, 10 showed that water mass transformation in the eastern subpolar gyre (east of Greenland) dominated subpolar overturning over the period from 2014 to 2016. Surprisingly, winter convection in the Labrador Sea contributed minimally to the mean and variability of the subpolar MOC, even though unusually strong convection occurred in that basin during winter 2014/2015 11, 12. These results contradict the view of convection in the Labrador Sea as the major contributor to MOC variability throughout the North Atlantic 13, 14 via the propagation of density anomalies created by the varying strength of deep convection in this basin 15, 16, 17, 18, 19. Though models disagree as to the strength of the linkage between Labrador Sea convection and the MOC, this linkage is a consistent model feature 20. These new observations raise questions about the source of Labrador Sea density anomalies and their impact on MOC variability in the subpolar basin. Fig. 1: OSNAP array. a Locations of OSNAP moorings (yellow dots) and glider survey domain (yellow line) on bathymetry (1000 m intervals).
Arrows indicate the major currents intercepted by the OSNAP array from west to east: LC Labrador Current, WGC West Greenland Current, EGC East Greenland Current, IC Irminger Current, ERRC East Reykjanes Ridge Current, NAC North Atlantic Current. b 2014–2018 mean velocity perpendicular to the OSNAP section (units: m s⁻¹; positive poleward), overlaid by isopycnals (contoured). The isopycnal of 27.65 kg m⁻³ delimits the upper and lower limbs of the subpolar Meridional Overturning Circulation (MOC); the delimiting isopycnal differs slightly for the subsections (27.70 and 27.55 kg m⁻³ for OSNAP West and East, respectively). The OSNAP moorings are marked by vertical black lines. Three moorings from the French Reykjanes Ridge Experiment (RREX) program are marked by vertical purple lines. Hatching in the eastern Iceland basin indicates the glider survey domain. Because there is the possibility of a delayed impact of strong convection on the overturning, owing to a residence time of ~2–3 years for newly-formed upper NADW (UNADW) in the Labrador Sea interior 21, 22, there is a valid argument that the initial OSNAP record, 21 months in duration, was insufficient to capture this impact. Thus, in this study we use the extended OSNAP data between August 2014 and May 2018 (46 months) now available to further examine the linkage between wintertime convection and MOC variability. Namely, we first assess the impact of subpolar convection on western boundary density anomalies and then, in turn, quantify the impact of those western boundary anomalies on the subpolar MOC variability. We analyze OSNAP array observations to assess this variability on monthly to interannual time scales and define the MOC as the maximum of the overturning stream function in density (σθ) space (Supplementary Fig. 1). Details of the OSNAP array design and calculations can be found in Methods. Results Subpolar overturning circulation Over our study period, the MOC across the OSNAP array has a time-mean of 16.6 ± 0.7 Sv (1 Sv = 10⁶ m³ s⁻¹) and exhibits strong monthly variability overlaid by year-to-year differences (Fig. 2). The quoted uncertainty is the standard error in the mean, estimated via Monte Carlo simulations based on the mean and uncertainty in individual months 10. Although the 2014–2018 mean MOC is larger than the 2014–2016 mean 10, the difference is not statistically significant at the 95% level (Methods). Composite monthly means, constructed by averaging the values of the same month for all years, appear to show seasonal MOC cycles (Supplementary Fig. 2), but the seasonal changes are not statistically significant (Methods). Fig. 2: Subpolar Meridional Overturning Circulation (MOC) time series. 30-day MOC estimates across the full array, OSNAP West and East, respectively. Shading indicates uncertainty in the 30-day estimate, obtained from a Monte Carlo method 10. Horizontal dashed lines indicate the 12-month averages (10-month for 2017–2018). The total Ekman transport (not shown) is −1.5 ± 0.02 Sv during the whole time period. On monthly to interannual time scales, MOC variability is dominated by overturning in the eastern subpolar gyre (OSNAP East) rather than in the Labrador Sea (OSNAP West). OSNAP East explains 82% of the total MOC variance and its mean (16.8 ± 0.6 Sv) is approximately seven times larger than the mean for OSNAP West (2.6 ± 0.3 Sv).
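The record-mean uncertainties quoted here are standard errors obtained by Monte Carlo propagation of the 30-day means and their uncertainties. A minimal sketch of one plausible reading of that procedure, with placeholder numbers rather than OSNAP data, is:

```python
import numpy as np

# Monte Carlo sketch of the standard error of a record-mean MOC, given 30-day
# means and their uncertainties. All numbers are placeholders, and the actual
# procedure of ref. 10 may differ in detail.
rng = np.random.default_rng(42)
n_months = 46
moc_monthly = rng.normal(16.6, 3.0, n_months)  # hypothetical 30-day MOC means (Sv)
moc_sigma = np.full(n_months, 1.0)             # hypothetical 30-day uncertainties (Sv)

n_draws = 10_000
# Perturb every month within its own uncertainty, then average over the record:
record_means = rng.normal(moc_monthly, moc_sigma, size=(n_draws, n_months)).mean(axis=1)
print(f"record mean = {record_means.mean():.1f} Sv, "
      f"standard error = {record_means.std(ddof=1):.2f} Sv")
```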
Finally, we note that for this 4-year period, there is no evidence of a delayed MOC response to the intense Labrador Sea convection in winter 2014/2015. Therefore, the results confirm the dominance of overturning in the eastern subpolar gyre over that in the Labrador Sea, and confirm the weak response of overturning to strong Labrador Sea convection, both reported in the earlier analysis 10. Linkage between deep convection and western boundary changes We further investigate a linkage, or lack thereof, between winter convection and the subpolar MOC by first focusing on the impact of deep convection in the interior on UNADW thickness anomalies along the western boundaries of the Labrador Sea and Irminger Sea. Here we use UNADW layer thickness, defined by the vertical distance between two density surfaces, as a proxy for density anomalies in the boundary and basin interior 23. The thickness anomalies are derived relative to their 46-month mean. The upper isopycnal of UNADW is the density surface associated with the MOC, which is 27.70 and 27.55 kg m⁻³ for OSNAP West and East, respectively (Fig. 1b, Supplementary Fig. 1). The lower isopycnal for UNADW at both sections is 27.80 kg m⁻³, chosen to exclude the lower overflow component of NADW (LNADW). We refer to UNADW and LNADW as the water masses contained in the lower limb of the MOC (e.g., 3, 24). For the thickness calculation in the basin interior, we add a planetary potential vorticity constraint (<4 × 10⁻¹² m⁻¹ s⁻¹) to identify newly-formed deep waters 25. The expected signature of water mass transformation in the Labrador and Irminger Seas during winter convection is an increase in UNADW layer thickness, as lighter surface water cools, slightly freshens and loses buoyancy 11, 26. UNADW layer thickness exhibits clear seasonality in the Labrador Sea interior, increasing by at least 500 m from January to April of each year (Fig. 3a). These changes are consistent with the characteristics of UNADW formation and ventilation on seasonal to interannual time scales 11, 27. UNADW layer thickness in the Labrador Sea boundary currents (Labrador Current, LC and West Greenland Current, WGC, Fig. 1) shows similar variability, but is less than half the magnitude observed in the interior. This boundary-interior difference in thickness change is related to a number of factors: the comparatively weak water mass transformation in the boundary current 28 (Supplementary Fig. 3); the compensating exchange of temperature and salinity anomalies between the interior and the boundary current 29; and the fact that the interior-boundary exchange occurs over time scales from days to years 30, 31. Fig. 3: Upper North Atlantic Deep Water (UNADW) layer thickness anomalies. a Labrador Sea: UNADW layer (σθ = 27.70–27.80 kg m⁻³) thickness anomalies across the full Labrador Current (LC; dark blue) and West Greenland Current (WGC; light blue) arrays (see Fig. 1b for location), respectively, with shading representing uncertainty; layer thickness anomalies in the Labrador Sea interior (red, shading represents ±1 standard deviation) computed from Argo data north of the OSNAP line where the seafloor is >3000 m deep (Methods). b Irminger Sea: UNADW (σθ = 27.55–27.80 kg m⁻³) layer thickness anomalies across the East Greenland Current (EGC) array of tall moorings within the boundary current (blue; see Fig.
1b for location), with shading representing uncertainty; layer thickness anomalies in the Irminger Sea interior (red, shading represents ±1 standard deviation) computed from Argo data north of the OSNAP line where the seafloor is >2000 m deep (Methods). c, d Irminger Sea: as in (b), but for the lightest (σθ = 27.55–27.73 kg m⁻³) and densest UNADW layers (σθ = 27.73–27.80 kg m⁻³), respectively. The import of upstream thickness anomalies may also obscure the relationship between thickness anomalies in the interior and the boundary currents. For example, the increase in WGC thickness in late 2015 and early 2016 precedes thickness increases in the LC and Labrador Sea interior, suggesting a possible upstream source. To support this supposition, we point to the similarity of the LC, WGC, and East Greenland Current (EGC) anomaly time series (blue lines in Fig. 3a, b). Using daily values at these three boundary arrays, lagged correlations show that the maximum correlation (significant at the 95% level) occurs when the EGC thickness leads the WGC thickness by 22 days and the WGC thickness leads the LC thickness by 310 days (Methods). These lags are consistent with a signal propagating via the boundary current system at an advective speed of ~10 cm s⁻¹ around Greenland 32 and the Labrador basin 30, 31. However, we also note the shared seasonality in these time series, with significant positive correlations at near-zero lag and significant negative correlations at ~±180 days. Thus, in addition to the advective mechanism, it is also plausible that the observed ~1-year lag between the EGC and LC is a signature of a common seasonal cycle. A longer time series will be needed to carefully differentiate these possibilities. In the Irminger Sea a different picture emerges. While the UNADW thickness in the Irminger Sea interior exhibits strong monthly and interannual variability, including a sustained thickening during 2015–2016, a seasonal signal is not evident (Fig. 3b). In contrast, UNADW in the EGC exhibits clear seasonality, but weak interannual variability. Pulses of thick UNADW appear every January in this boundary current, the strongest of which occur early in the record. To further examine this dissimilarity between the thickness variability in the interior and boundary current, we partition the UNADW into its light and dense components. With this partitioning, thickness changes in the interior and in the EGC are now comparable for the lightest UNADW (27.55–27.73 kg m⁻³; Fig. 3c), with strong seasonality in both time series. The boundary current thickness has an earlier peak in most winters, with the wintertime thickening in both time series diminishing over the observational record (from ~200 m in 2014/2015 to ~100 m in 2017/2018). It appears that this light UNADW component can be formed within the EGC itself, and also rapidly exported into the EGC from the interior, as previously reported from an analysis of OSNAP data 33. Changes for the dense UNADW layer (27.73–27.80 kg m⁻³) in the interior mirror those for the full UNADW layer (Fig. 3d). The gradual thickening during 2015–2016 may be associated with deep convection in the southwest Irminger basin 12, 34, 35 or the arrival of newly-formed UNADW from the Labrador basin, estimated to reach the Irminger basin via an interior pathway in ~1/2–2 years 22, 36. Thickness anomalies in the EGC for the dense UNADW are evident for the winters of 2014/2015 and 2015/2016 (Fig. 3d).
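The lead-lag estimates quoted above come from lagged correlations of daily boundary-array time series. A self-contained sketch of that calculation on synthetic series, with a built-in 22-day lead standing in for the EGC-to-WGC propagation, might look like:

```python
import numpy as np

# Lagged-correlation sketch: find the lag at which one daily thickness series
# best matches another. The series below are synthetic, not OSNAP data.
rng = np.random.default_rng(1)
n = 1400                                   # ~46 months of daily values
true_lag = 22
signal = np.convolve(rng.normal(size=n + true_lag), np.ones(30) / 30, mode="same")
egc = signal[true_lag:] + 0.3 * rng.normal(size=n)   # upstream (leading) series
wgc = signal[:n] + 0.3 * rng.normal(size=n)          # downstream (lagging) series

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means x leads y."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = np.arange(-60, 61)
corrs = [lagged_corr(egc, wgc, k) for k in lags]
best = lags[int(np.argmax(corrs))]
print(f"EGC leads WGC by {best} days (r = {max(corrs):.2f})")
```

A significance test (the 95% level quoted in the text) would additionally account for serial correlation in the daily series, which this sketch omits.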
Our analysis has shown that only for the light UNADW in the Irminger Sea is there a simple relationship between thickness changes in the basin interior and the boundary currents. Thus, as with the Labrador Sea, we expect that thickness changes in the boundary current can be impacted by convection within the boundary current, by along-stream advection of thickness anomalies, as well as by convection in the interior. Similarly, as suggested above, it is likely that some thickness anomalies in the interior are also imported. Collectively, these impacts create records of interior and boundary variability that preclude clear attribution and linkage, at least on the time scales studied here. Linkage between western boundary changes and the MOC We next evaluate the extent to which UNADW thickness anomalies in the OSNAP boundary arrays impact subpolar MOC strength. As has been shown previously, thickness variability in these boundary currents determines volume transport variability over seasonal and longer time scales 32, 33, 37. In the Labrador Sea, thickness in the LC and WGC largely co-varies in time, with small differences between the two arrays (~300 m). The thickness difference is related to the strength of the overturning because it reflects the change in density gradients across the basin. This change impacts the vertical velocity shear and thus the geostrophic flow carrying UNADW out of the Labrador Sea. In other words, the relatively low amplitude of the OSNAP West MOC variability arises from the cancellation of thickness changes in the boundary currents on either side of the Labrador Sea. Assuming a linear relationship between the layer thickness and the MOC, the layer thickness difference between the two boundaries would need to increase more than threefold (by ~900 m) for the OSNAP West MOC magnitude to reach that of OSNAP East (16.8 Sv). Such a change in the layer thickness far surpasses that observed in the basin during the past couple of decades 26. When considering the nonlinearities of the system (e.g., the impact of layer thickness change on the velocity through the modification of the baroclinic shear in the boundary current), an even larger difference in the layer thickness would be needed to produce the same MOC increase. A calculation of the OSNAP West MOC based only on LC and WGC velocity and density fields captures ~70% of the MOC variance calculated using data across the full OSNAP West section (r² ≈ 0.7; Methods). This is consistent with the importance of the boundary region for the Labrador Sea overturning 21, 38. To further separate the contributions from the LC and WGC boundaries, we compute the MOC using time-varying fields at one boundary array and time-mean fields at the other, and then vice versa. Although the variance in the OSNAP West MOC produced by changes in the LC and WGC individually (~6 Sv²) exceeds the actual OSNAP West MOC variance (~2 Sv²), changes occurring on either side can explain no more than ~10% of the actual OSNAP West MOC variance (Fig. 4a). Furthermore, neither the anomalies in the individual boundary currents nor their combined effects have a statistically significant impact on the strength of the full subpolar MOC, as expected since the OSNAP West MOC contributes weakly to the full MOC. Fig. 4: Components of overturning variability. a OSNAP West Meridional Overturning Circulation (MOC) anomalies: overturning derived from the Labrador Sea array (blue; shading indicates uncertainty as shown in Fig.
2); MOC variability arising from time-varying density and velocity anomalies in the Labrador Current (LC; light gray) computed with time-mean velocities/densities at the West Greenland Current (WGC) boundary, and MOC variability arising from density and velocity anomalies in the WGC (black) computed with time-mean velocities/densities at the LC boundary. b OSNAP East MOC anomalies: overturning derived from the OSNAP East array (red; shading indicates uncertainty as shown in Fig. 2); MOC variability arising from time-varying density and velocity in the region between Greenland and the mid-Iceland basin (black) computed with time-mean velocities/densities at the eastern boundary, and MOC variability arising from density and velocity anomalies in the East Greenland Current (EGC; light gray) computed with time-mean velocities/densities everywhere else. For the reconstruction based on the time-varying data at the western boundary (light gray line), the MOC is defined as the minimum of the stream function integrated from the bottom to the sea surface in density space (the sign has been changed for comparison). Turning to the eastern subpolar gyre, we first note that the geometry of the overturning here is remarkably different from that in the Labrador Sea. The MOC upper limb is mainly constrained to the eastern part of the OSNAP East array, where the North Atlantic Current flows broadly northward, and the lower limb is largely constrained to the central and western portions of the array (Fig. 1b). However, density and velocity changes at the western (EGC) and eastern (Rockall Trough) boundaries together explain only ~50% of the total MOC variance derived from the full OSNAP East array. Changes in the EGC alone capture an even smaller fraction (10%) of the variability in the OSNAP East MOC (Fig. 4b). Thus, variability in the western boundary current of the Irminger basin is not an indicator of the overturning circulation in the eastern subpolar North Atlantic on the time scale of these observations. However, when we consider changes in a wider region between Greenland and the central Iceland basin (out to the location along the OSNAP line, near the 2,600 m isobath, that approximately separates the upper and lower limbs of the MOC; see Fig. 1b), we are able to reconstruct ~75% of the total MOC variance across OSNAP East (Fig. 4b). Across OSNAP East, the MOC lower limb is captured only by including currents in a broad region that extends well beyond the western boundary current. The variability contained within the lower limb at OSNAP East is, of course, matched by variability in the upper limb; changes in the upper limb for the region between the central Iceland basin and Scotland explain ~65% of the full subpolar MOC (Supplementary Fig. 4). Discussion The extended OSNAP time series has supported the weak linkage between Labrador Sea convection and the subpolar MOC, even during a period with pronounced changes in deep convection. Furthermore, our results demonstrate that density anomalies along the western boundaries of the Labrador and Irminger Seas are not exclusively determined by changes in their basins' interior. We suggest that convection in the boundary current itself, along-stream advection of upstream anomalies, and the limited exchange between the interior and the boundary are collectively responsible for the differences in the records of variability. The light component of UNADW in the Irminger Sea is an exception to this description.
We note also that the interiors of the basins are themselves subject to the import of anomalies created elsewhere in previous winters, as noted earlier for the Irminger Sea. These OSNAP observations reveal that changes in the western boundary current in the subpolar North Atlantic are not, by themselves, indicators of subpolar MOC variability on monthly to interannual time scales. The MOC lower limb in the subpolar North Atlantic has a complex circulation 3 , 39 and changes in the full set of pathways combine to describe the MOC and its variability. Thus, a partial measurement (or proxy) of the DWBC transport or density variations in a limited geographical area (e.g., in the central Labrador Sea or within the Labrador Sea western boundary) is insufficient to reconstruct MOC changes in the subpolar region on these time scales. A recent, related study finds that the water mass transformation induced by air-sea heat and freshwater fluxes in the broad region between the Greenland-Scotland Ridge and OSNAP East can explain the overturning difference between those two sections (~7 Sv) 40 . Thus, the rapid increase in the MOC in the spring of 2015 can be attributed to strong buoyancy forcing throughout the eastern subpolar basin during the 2014/2015 winter. This distributed transformation is consistent with our analysis above in that it suggests the full OSNAP East array is needed to capture the full measure of the overturning. In summary, our results cast further doubt on the supposition that convection variability in the central Labrador Sea drives MOC variability via the export and propagation of density anomalies along the western boundary. In support of our work here, a recent modeling study 41 finds that density anomalies advected from the eastern subpolar North Atlantic dominate the density variability in the western boundary of the Labrador Sea. Thus, density anomalies found in the Labrador Sea likely carry upstream signals. Furthermore, other recent studies have suggested that the linkage between Labrador Sea convection and downstream MOC variability can be explained by shared variability in response to the North Atlantic Oscillation and other atmospheric forcing 42 , 43 , rather than by the equatorward propagation of MOC anomalies from the Labrador Sea. The 4-year OSNAP record provides new insights into the characteristics of the subpolar overturning variability, in particular, its relationship to deep convection and variability at the western boundary. Because our analysis is limited to the time scales resolved by the available record, we acknowledge that the observed relationships may not hold over longer time scales. However, since recent modeling studies do not yet agree on the role of the Labrador Sea in driving overturning variability on decadal and longer time scales (e.g., 41 , 44 ), longer direct measurements are clearly needed. OSNAP is an ongoing program and aims to provide at least 10 years of continuous observations in the region. Methods MOC calculations Here we provide a brief summary of the MOC definition at OSNAP. For more details on calculations of the property and velocity fields as well as the MOC and its uncertainty, the reader is referred to Li et al. 45 and Lozier et al. 10 . 
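Schematically, the calculation formalized in Eq. (1) below zonally integrates the transport in each density class, accumulates it from the lightest class downward, and takes the maximum of the resulting stream function. A minimal sketch on a hypothetical gridded section (random placeholder transports, not OSNAP fields) is:

```python
import numpy as np

# Sketch of the density-space overturning maximum (cf. Eq. (1) below) on a
# hypothetical section gridded in density classes (rows) and longitude (columns).
# 'transport' holds v * dx * dsigma per cell, in Sv; values are placeholders.
rng = np.random.default_rng(7)
n_sigma, n_x = 60, 200
transport = rng.normal(0.0, 0.02, size=(n_sigma, n_x))
transport[:25, 120:] += 0.008   # light, northward upper limb in the east
transport[35:, :80] -= 0.008    # dense, southward lower limb in the west

psi = np.cumsum(transport.sum(axis=1))  # stream function Psi(sigma), in Sv
moc = psi.max()                         # MOC = maximum of Psi over density
sigma_moc = int(np.argmax(psi))         # index of the separating density class
print(f"MOC = {moc:.1f} Sv at density-class index {sigma_moc}")
```

Defining the overturning in density rather than depth space, as done here, keeps water masses together even where isopycnals slope strongly across the section.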
MOC is defined as the maximum of the overturning stream function in σθ space, Ψ, as:
$$\mathrm{MOC}(t) = \max_{\sigma}\left[\Psi(\sigma,t)\right] = \max_{\sigma}\left[\int_{\sigma_{\min}}^{\sigma}\int_{x_{w}}^{x_{e}} v(x^{\prime},\sigma^{\prime},t)\,\mathrm{d}x^{\prime}\,\mathrm{d}\sigma^{\prime}\right]\ (\mathrm{Sv}),$$ (1)
where v is the volume transport per unit length per unit density and is perpendicular to the OSNAP section (positive poleward). The double integral is taken from west (x_w) to east (x_e) and from the top (σ_min) across all density surfaces. The MOC upper (lower) limb is defined as the transport between the surface (bottom) and the density at which the overturning function reaches a maximum (namely σ_MOC). The 2014–2018 mean σ_MOC is 27.65 kg m⁻³ across OSNAP (Fig. 1b). Updates to the MOC calculations There have been a couple of updates to the calculations since Lozier et al. 10, related to changes in the array configuration (e.g., addition or loss of instruments) or in the auxiliary data products. Those changes affect the calculation of the cross-sectional velocity field and, to a lesser extent, the MOC estimate when integrating over the whole section. Here we describe the individual changes, generally from west to east, and assess their impact on the MOC by comparing the estimates with and without one specific change implemented. The time period over which each assessment is performed depends on the data availability, which ranges from 8 to 46 months (the maximum length of the observational record). All test runs have been conducted using the 30-day averaged data. Updated velocity climatology for the Labrador shelf current (LSC) The LSC is an unmeasured component at OSNAP West that flows southward at water depths shallower than ~300 m (see Fig. 1 for location). It carries the freshest and coldest waters in the subpolar region (θ ≈ 0.8 °C and S ≈ 33.6), constituting a major component of the freshwater flux across the section 10. In Lozier et al. 10, monthly climatological velocities from a high-resolution (1/12°) regional ocean circulation model (FLAME) were used to represent the LSC at OSNAP. In the current calculations, we have used an ensemble-mean velocity climatology for the whole length of the record (46 months), derived from three ocean or ocean–sea-ice models and one ocean reanalysis (Supplementary Table 1). This is to reduce a potential transport bias from any specific model. In addition, the World Ocean Atlas 2018 (WOA18) temperature 46 and salinity 47 climatologies were used to replace WOA13. The LSC transport shows very similar magnitudes among the four products, with a shared seasonality that is strong in winter and weak in summer (not shown). The ensemble-averaged annual-mean LSC transport (−2.6 Sv) is slightly stronger than the transport in FLAME (−2.4 Sv). The LSC contains the lightest waters across the section, and a stronger LSC transport (southward) causes a weaker transport in the upper MOC limb (northward). The comparison of the MOC estimates during 2014–2018 shows that using the ensemble-mean velocity climatology yields a reduction in the mean MOC by 0.2 Sv compared to that using FLAME only. Inclusion of the data from three RREX moorings Three tall moorings that were part of the RREX array 48 were deployed in the summer of 2015 right on the OSNAP line in the eastern Irminger basin (Fig. 1).
They continuously measured temperature, salinity and velocity throughout the water column for 2 years (from 29-Jun-2015 to 26-Jul-2017). The RREX moorings are situated between four existing OSNAP moorings, designed to capture the property and velocity structure of the northward-flowing Irminger Current (IC) above the western flank of the Reykjanes Ridge. The RREX mooring data were retrieved in the summer of 2017 and thus none of them were included in our first 21-month OSNAP time series. In the current calculations, we have incorporated the RREX mooring data in the property and velocity interpolations in the region whenever they are available. To assess the impact on the flux estimate, we compared the transports calculated with and without the RREX mooring data during the 2-year RREX period. The comparisons show that using the RREX data yields a small increase in the mean IC transport (0.8 Sv, or ~10% of the mean transport). Because the IC comprises waters in both the upper and lower MOC limbs, the corresponding impact on the MOC estimate is smaller. Over the 2015–2017 time period, the inclusion of the RREX mooring data results in an increase in the time-mean MOC of 0.2 Sv. Modified reconstruction of the deep current in the Iceland basin The deep current in the western Iceland basin contains the densest overflow waters in the subpolar North Atlantic flowing southward across the OSNAP East section. The deep current has a bottom velocity core above the eastern flank of the Reykjanes Ridge, with a clear intrusion farther into the basin interior (Fig. 1b). Our modification of the calculation method concerns the reconstruction of the interior transport, i.e., water deeper than ~2200 m and between 28.0 and 24.4°W (between the M2 and IB3 moorings; Supplementary Fig. 5). In Lozier et al. 10, the bottom velocity field was derived from the measurements at M2 and IB3 along with D4 in the following way: velocities from M2 and D4 were interpolated to fill the area between them, while geostrophic velocities were calculated between D4 and IB3 (Supplementary Fig. 5). In the current calculations, we have modified the method by disregarding the velocity interpolations and instead calculating the geostrophic flow between M2 and IB3 for the whole 2014–2018 time period. Because the M2 mooring is at a water depth (~2400 m) shallower than IB3 (~2800 m), we added the temperature and salinity measurements from the deepest instrument on D4 (at ~2600 m) to M2. This effectively minimizes the bottom triangle below the maximum common depth between the moorings M2 and IB3. To assess the impact of this change, we compared the transport of bottom waters (σθ > 27.80 kg m⁻³) between M2 and IB3 for the whole 46-month time period. The new method yields a small reduction in the time-mean transport in the bottom layer (0.5 Sv), which leads to a reduction in the time-mean MOC by 0.2 Sv. Updated objective analysis (OA) product Away from the boundary arrays, temperature and salinity fields are created primarily from the Argo and mooring data in the vicinity, using an OA technique. In Lozier et al. 10, OA was performed on depth surfaces (OA_depth), which inevitably tends to mix waters with different densities. In the current calculations, we have used OA gridded products where OA is performed on density surfaces (OA_dens) to best preserve water properties. This updated product was used throughout the 2014–2018 time period.
Input for the OA_dens comprises the Argo data, the OSNAP mooring data and the WOA18 climatology. Note that OA_dens is limited to the upper water column owing to the maximum sampling depth of Argo floats (2000 m). For the deeper layers, we use hydrographic data from the 2014 and 2016 summer OSNAP cruises. Density fields are calculated from the gridded temperature and salinity fields using the Gibbs Seawater Oceanographic Toolbox (version 3.06.11) of the International Thermodynamic Equation of Seawater-2010 (TEOS-10). The impact of OA_dens on the MOC estimate arises mainly from the use of density profiles for calculating the geostrophic flow in the glider survey domain above the Hatton–Rockall Plateau (Fig. 1), which is associated with the northward North Atlantic Current (NAC) transport in the upper MOC limb. To assess the impact, we calculated the transports with OA_depth and OA_dens, respectively, for the whole 46-month observational period. The comparison between the two estimates reveals an increase of ~1 Sv in the time-mean NAC transport in the region when using OA_dens, with an increase in the time-mean MOC of 1.2 Sv. Time-mean transport in the eastern Rockall Trough wedge In the easternmost part of the OSNAP section, there is a strong northward-flowing shelf-break current (~1 Sv) in the eastern wedge of the Rockall Trough (Fig. 1). An upward-looking ADCP (acoustic Doppler current profiler) was mounted at ~500 m water depth to capture this wedge transport (Supplementary Fig. 6). However, owing to multiple instrument losses, only ~8 months of good data (30-Oct-2014 to 19-June-2015) were returned during the whole 4-year deployment. To accommodate this data loss, in Lozier et al. 10 velocities from the current meter mooring nearest to the ADCP (RTEB1) were used to fill the wedge when ADCP data were not available. To assess the impact of this substitution (i.e., the extent to which it captures the shelf-break current), the transports were calculated using the RTEB1 data following Lozier et al. 10 for the 8-month period when ADCP data are available. Comparison with the actual ADCP-derived transport shows that using the RTEB1 data does not resolve the 1 Sv mean wedge transport. Because this transport lies within the upper MOC limb, the shortfall produces an underestimation of the mean MOC of 0.8 Sv over the same 8-month period. Thus, using the time-mean ADCP velocity for all times when no ADCP data were returned leads to an increase in the time-mean MOC of 0.8 Sv. Updated time-mean altimetry reference velocity As part of the velocity calculations, the time-mean surface velocity over the observational record from satellite altimetry provides the surface reference for calculating the absolute geostrophic velocities in the basin interior 10. With the extension of our records, we now use the 4-year mean altimetry velocity (23-Aug-2014 to 26-May-2018) in the current calculations. To assess the impact of this update, we compare the MOC estimates for the 2014–2018 period using the time-mean surface velocity averaged over the first 21 months and over the whole 46 months, respectively. The comparison shows a reduction in the time-mean MOC of 0.4 Sv with the 46-month mean altimetry reference.
Note that, in contrast to the other updates, the impact of the change in the reference velocity will diminish as the record lengthens (i.e., when the change in the length of the record becomes small compared to the length of the record itself). Impact on MOC estimates The change in the mean MOC estimate is not a simple sum of the changes caused by the individual updates listed above, because each update affects the velocity and MOC only during certain parts of the observational period. Overall, the updates have changed the time-mean MOC during the first 21-month period from 14.9 ± 1.0 Sv 10 to 16.5 ± 1.0 Sv, with negligible impact on the variability (r = 0.97). However, the difference between the mean MOC estimates is not statistically significant according to their standard errors. Water mass transformation rate The water mass transformation rate shown in Supplementary Fig. 3 is deduced from air–sea buoyancy fluxes by integrating the surface density flux over the region where an isopycnal σ outcrops (e.g., 49). For each month, the transformation rate (F) at a given isopycnal can be derived as $$F(\sigma^{*})=\frac{1}{\Delta\sigma}\iint\left[-\frac{\alpha}{C_{P}}Q+\beta\frac{S}{1-S}(E-P)\right]\Pi(\sigma)\,dx\,dy\ (\mathrm{Sv}),$$ (2) where $$\Pi(\sigma)=\begin{cases}1 & \mathrm{for}\ |\sigma-\sigma^{*}|\le\frac{\Delta\sigma}{2}\\ 0 & \mathrm{elsewhere.}\end{cases}$$ In Eq. (2), Q is the net surface heat flux, E the evaporation rate, P the precipitation rate, C_P the specific heat capacity of seawater, S the surface salinity, and α and β the thermal expansion and haline contraction coefficients, respectively. In the calculation we use Δσ = 0.2 kg m−3. Q, E and P are obtained by averaging the monthly NCEP/NCAR 50 and ERA5 surface flux products onto the 1/4° ERA5 grid. The outcropping area of each σ for each month is determined from the EN4 gridded subsurface salinity 51 and the ERA5 sea surface temperature. The uncertainty for the monthly F is obtained as the ensemble standard error over estimates computed with a variety of Δσ and with surface buoyancy fluxes from either NCEP/NCAR 50 or ERA5. The uncertainty in the time-mean F is obtained by combining the uncertainties of the individual months following standard error propagation theory. Argo data Temperature and salinity sampled by profiling floats as part of the Argo program are used to produce density profiles and to calculate the UNADW layer thickness in the Labrador and Irminger basin interiors. Argo data between 2014 and 2018 were downloaded from the U.S. Global Ocean Data Assimilation Experiment (USGODAE) Argo Data Assembly Center. We used delayed-mode data with a quality flag of 1 (good) or 2 (probably good) and rejected all problematic Argo profiles according to the Argo float gray list. Typically, there were 40 and 35 profiles per month in the Labrador and Irminger basin interiors, respectively, during 2014–2018. Here the Labrador basin interior is defined as the region north of the OSNAP West line and deeper than 3000 m; the Irminger basin interior is the region north of the OSNAP East line and deeper than 2000 m. All profiles were linearly interpolated onto a uniform vertical grid with 20 m intervals and then used to calculate layer thickness. The interior thickness is computed as the average of the 20% largest values of all available profiles.
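The interior-thickness recipe just described (linear interpolation onto a 20 m grid, then the average of the 20% largest values over all available profiles) can be sketched as follows; the density bounds standing in for the UNADW layer definition and the synthetic profiles are hypothetical, not the actual Argo data.

```python
import numpy as np

def layer_thickness(z, sigma, sig_lo, sig_hi, dz=20.0):
    """Interpolate one profile onto a uniform dz grid and return the vertical
    extent (m) occupied by densities in [sig_lo, sig_hi)."""
    zi = np.arange(z.min(), z.max() + dz, dz)
    si = np.interp(zi, z, sigma)               # z and zi both increase downward
    return dz * np.count_nonzero((si >= sig_lo) & (si < sig_hi))

def interior_thickness(profiles, sig_lo, sig_hi):
    """Average of the 20% largest thickness values of all available profiles."""
    th = np.sort([layer_thickness(z, s, sig_lo, sig_hi) for z, s in profiles])
    n_top = max(1, int(np.ceil(0.2 * len(th))))
    return th[-n_top:].mean()

# Hypothetical month of ~40 basin-interior profiles, 0-2000 m, density
# increasing with depth plus a little noise.
rng = np.random.default_rng(1)
profiles = [(np.linspace(0.0, 2000.0, 101),
             27.2 + 0.6 * np.linspace(0.0, 1.0, 101) + rng.normal(0, 0.01, 101))
            for _ in range(40)]
print(f"{interior_thickness(profiles, 27.70, 27.78):.0f} m")  # illustrative bounds
```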
Statistical analysis Lagged cross-correlations between two daily time series A and B are performed on data with the daily climatological means and the linear trend removed; the daily climatological means are constructed by averaging the values of the same day of the year over all years. The uncertainty in the correlations is examined using the 95% significance level, obtained as 1.96/√N_eff, where N_eff is the effective number of degrees of freedom, which takes autocorrelation into account (e.g., 52): $$N_{\mathrm{eff}}=N\,\frac{1-r_{A}r_{B}}{1+r_{A}r_{B}},$$ (3) where N is the number of observations and r_A and r_B are the lag-1 autocorrelation coefficients of the two variables, respectively. As an alternative approach, we follow McCarthy et al. 53 and evaluate the uncertainty in the correlations by calculating p values from the t-statistic, $$T=r\sqrt{\frac{N_{\mathrm{eff}}-2}{1-r^{2}}},$$ (4) where r is the obtained cross-correlation coefficient. The statistical significance of the difference between two independent estimates is obtained following common practice, i.e., by comparing the standard errors of the two sets (e.g., 54): if the gap between the two standard error bars equals their average, this indicates p ≈ 0.05. The significance of the seasonal variations in the MOC is tested using one-way ANOVA (analysis of variance), by calculating p values from the F-statistic, $$F=\frac{\mathrm{SSB}/(k-1)}{\mathrm{SSE}/(N-k)}.$$ (5) In Eq. (5), SSB is the variance between months, $$\mathrm{SSB}=\sum_{j=1}^{k}n_{j}\,(\bar{x}_{j}-\bar{x})^{2},$$ (6) where k is the number of months in the climatology, x̄_j is the mean for each month, n_j is the number of values used to calculate each mean, and x̄ is the overall mean. SSE is the variance within all months, $$\mathrm{SSE}=\sum_{i=1}^{N}(x_{i}-\bar{x}_{j})^{2},$$ (7) where the x_i are the individual monthly values and N is the total number of months. The integral time scale of the MOC at OSNAP is calculated from the autocorrelation function of the daily time series 55; it is 6 days for OSNAP West and 13 days for OSNAP East, so there is roughly one independent observation every month. Using Eq. (5), we obtained F = 1.31, p = 0.52 for OSNAP West, and F = 1.73, p = 0.24 for OSNAP East. We use the square of the correlation coefficient (r²) to evaluate how much of the total MOC variance can be explained by a reconstructed MOC; that is, the r² value means that a linear regression of the reconstructed MOC on the actual MOC explains (r² · 100%) of the total variance in the latter (e.g., 55).
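As a worked illustration of Eqs. (3) and (4), the short sketch below estimates N_eff and the significance of a zero-lag correlation; the AR(1) test series are synthetic stand-ins for the detrended daily records, and SciPy's t distribution supplies the p value.

```python
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a 1-D series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def corr_significance(a, b):
    """r, N_eff (Eq. 3), t-statistic (Eq. 4), two-tailed p value and the
    95% significance level for r, for two detrended, deseasoned series."""
    r = float(np.corrcoef(a, b)[0, 1])
    ra, rb = lag1_autocorr(a), lag1_autocorr(b)
    n_eff = len(a) * (1 - ra * rb) / (1 + ra * rb)
    t = r * np.sqrt((n_eff - 2) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, t, p, 1.96 / np.sqrt(n_eff)

# Synthetic AR(1) test series standing in for the daily records.
rng = np.random.default_rng(2)
n = 1500
a, b = np.zeros(n), np.zeros(n)
for i in range(1, n):
    a[i] = 0.9 * a[i - 1] + rng.standard_normal()
    b[i] = 0.9 * b[i - 1] + 0.3 * a[i] + rng.standard_normal()
r, n_eff, t, p, r95 = corr_significance(a, b)
print(f"r = {r:.2f}, N_eff = {n_eff:.0f}, p = {p:.3f}, 95% level = {r95:.2f}")
```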
Data availability The 2014–2018 OSNAP MOC time series is available in the SMARTech Repository 56. Calibrated and quality-controlled data from moored instruments and gliders were generated by each participating group and are available in designated repositories. Code availability The code used to compute the OSNAP time series and the water mass layer thickness anomaly can be accessed upon request to F.L. Change history 02 February 2022: A Correction to this paper has been published.
A new international study has cast doubt on the view that variations in the density of some of the deepest currents of the subpolar North Atlantic Ocean are caused by winter surface conditions and represent changes in the strength of the Meridional Overturning Circulation (MOC). The study included the efforts of 15 research institutes and was led by Dr. Feili Li and Professor Susan Lozier from Georgia Institute of Technology, in partnership with Professor Penny Holliday from the National Oceanography Center (NOC). The research, published on 24 May 2021 in Nature Communications, shows that observations made over four years from 2014 in the subpolar North Atlantic reveal no imprint of strong winter cooling at the ocean surface on the density of the deepest boundary currents found in the western regions of ocean basins. Surprisingly, the authors also found no visible relationship between changes in those deep western boundary currents and variations in the strength of the MOC. Knowledge of the physical processes that govern changes in the MOC is essential for accurate climate projections. The MOC brings vast amounts of heat and salt into the northern Atlantic via the Gulf Stream and North Atlantic Current. Changes in the strength of the MOC directly affect sea level, climate and weather for Europe, North America and parts of the African continent. Climate projections all predict a slowing of the MOC as a result of greenhouse gas emissions, with potentially damaging impacts on coastal communities and land. Scientists recovering instruments that had been collecting data in the deep ocean for 2 years on the OSNAP array. Credit: NOC and GEOMAR Previous analysis of models has led scientists to think that changes in the strength of the MOC are associated with changes in the density of the deep western boundary currents, which make up the majority of the southward return flow of the MOC loop. In models, density can be strongly affected by a winter process called deep convection, or deep water formation, in which cold winds cool the surface water, causing it to become very dense and sink to great depths (more than 2 km). The relationship in models between convection, changes in deep western boundary currents and the strength of the MOC also underpins evidence from palaeoclimate proxies for periods of reduced MOC and low European temperatures. Retrieving data from the Labrador Sea. The OSNAP array consists of over 50 moorings between Canada, Greenland and Scotland, in ocean basins up to 3 km deep. Each mooring takes several hours to bring back on board a research vessel. Credit: NOC and GEOMAR In 2014, scientific equipment was placed in the subpolar North Atlantic as part of the OSNAP program to observe these processes in real life. The surprising new results will stimulate a reconsideration of the view that deep western boundary changes represent overturning characteristics, with implications for future climate projections as well as for the interpretation of past climate change. Prof. Susan Lozier, overall lead of the international OSNAP program, said: "It is gratifying to see what an international community of oceanographers can achieve with a concentrated effort of collaboration and determination. Programs such as OSNAP and RAPID are blueprints for how oceanographers across the globe can collectively study the ocean's role in climate change in the years and decades ahead."
Prof. Penny Holliday, Associate Head of Marine Physics and Ocean Climate at the National Oceanography Center, commented: "It is incredibly exciting to see how new observations from the OSNAP array are accelerating our knowledge of how these major ocean currents work, so that we can be more confident in our understanding of past climate change and in future climate projections."
10.1038/s41467-021-23350-2
Medicine
Ultraviolet light-induced mutation drives many skin cancers, study finds
Recurrent point mutations in the kinetochore gene KNSTRN in cutaneous squamous cell carcinoma, Nature Genetics, DOI: 10.1038/ng.3091 Journal information: Nature Genetics
http://dx.doi.org/10.1038/ng.3091
https://medicalxpress.com/news/2014-09-ultraviolet-light-induced-mutation-skin-cancers.html
Abstract Here we report the discovery of recurrent mutations concentrated at an ultraviolet signature hotspot in KNSTRN , which encodes a kinetochore protein, in 19% of cutaneous squamous cell carcinomas (SCCs). Cancer-associated KNSTRN mutations, most notably those encoding p.Ser24Phe, disrupt chromatid cohesion in normal cells, occur in SCC precursors, correlate with increased aneuploidy in primary tumors and enhance tumorigenesis in vivo . These findings suggest a role for KNSTRN mutagenesis in SCC development. Main Cutaneous SCC is the second most common cancer, with an annual global incidence exceeding 1 million 1 . To identify recurrent genomic aberrations that underlie the development of this malignancy, we used single-nucleotide variant (SNV) determinations from the whole-exome sequencing of 12 SCC-normal pairs ( Supplementary Tables 1 and 2 ; ref. 2 ) to distill a list of 336 candidate genes that were then resequenced in 100 matched SCC-normal pairs as well as in 5 SCC cell lines with an average depth exceeding 1,200× ( Supplementary Tables 1 , 2 , 3 , 4 ). Analysis of mutation type showed that the majority of tumors had a mutational profile characteristic of exposure to ultraviolet (UV) light ( Fig. 1a ), consistent with the known association of this cancer with sunlight. The mutation frequencies in TP53 , CDKN2A and HRAS —the three genes most studied in this cancer thus far—were consistent with those previously reported, confirming that the sequenced samples were genetically representative of this malignancy ( Fig. 1b and Supplementary Fig. 1 ). The number of mutations found in archived formalin-fixed, paraffin-embedded samples was not statistically different from that detected in fresh samples ( P = 0.55) ( Fig. 1b ). Previously reported SCC-associated inactivating mutations in TP53 and CDKN2A were identified as well as activating HRAS mutations and frequent disruption of the NOTCH1 and NOTCH2 genes ( Fig. 1b and Supplementary Table 2 ). Figure 1: Recurrent mutations in KNSTRN encoding p.Ser24Phe in cutaneous SCC. ( a ) The percentages of somatic point mutations in 100 primary cutaneous SCCs that are transitions compared to transversions. ( b ) Characterization of SCC–matched normal pairs. Mutation frequency is shown in parentheses next to each gene name. FFPE, formalin fixed and paraffin embedded. ( c ) Distribution of the SCC-associated alterations identified in this study across the kinastrin coding sequence. SXLP, SXLP motif; CC, coiled-coil region. ( d ) Detection of KNSTRN mutation encoding p.Ser24Phe in SCC precursor actinic keratoses (AKs) as well as primary SCCs by allelic discrimination quantitative PCR. NL, freshly excised normal skin. Data shown represent technical triplicates. WT, wild type. Full size image Among the recurrently mutated genes in SCC, KNSTRN ranked third behind CDKN2A and TP53 after normalizing for ORF length ( Fig. 1b ). KNSTRN encodes a kinetochore-associated protein that modulates anaphase onset and chromosome segregation during mitosis 3 . It is expressed in a broad range of human tissues, including in skin ( Supplementary Fig. 2 ). Somatic mutations in KNSTRN were present in 2 of 12 (17%) and 19 of 100 (19%) SCCs analyzed by whole-exome and targeted sequencing, respectively. Over half of these mutations mapped to a 17-amino-acid N-terminal region, with a 'hotspot' serine-to-phenylalanine substitution present at codon 24 (p.Ser24Phe) ( Fig. 1c and Supplementary Fig. 
3 ) that was also detected in the cutaneous SCC cell line SCC-12B.2 ( Supplementary Table 4 ). This pattern of clustered somatic missense mutations is characteristic of dominant mutations in oncogenes 4 , although KNSTRN has thus far not been implicated in any published study of human cancer. Notably, the KNSTRN mutation encoding p.Ser24Phe involves a C>T transition that is characteristic of UV-induced mutagenesis. To determine whether KNSTRN mutagenesis might be an early event in SCC development, we screened 38 additional primary SCCs as well as 27 actinic keratoses, representing the earliest SCC precursor, for the presence of KNSTRN mutation encoding p.Ser24Phe. The mutation was detected in 5 of 27 (19%) and 5 of 38 (13%) actinic keratoses and SCCs, respectively, but was never identified in normal skin (0 of 122), indicating that it arises early in tumorigenesis ( Fig. 1d ). We next parsed all publicly available data sets from The Cancer Genome Atlas (TCGA). We identified KNSTRN mutations in 23 of 490 (4.7%) melanomas, another major sunlight-associated cancer, with 15 (65%) mapping to the 17-amino-acid N-terminal region and 10 (44%) specifically inducing the p.Ser24Phe substitution ( Supplementary Fig. 4 and Supplementary Table 5 ). KNSTRN mutations were rare events in the other surveyed cancers, with none displaying mutations resulting in the p.Ser24Phe substitution ( Supplementary Fig. 4a ). Thus, recurrent mutation of KNSTRN and, in particular, mutation encoding p.Ser24Phe appear selective for UV-associated malignancies. Aberrant KNSTRN expression has previously been shown to result in loss of chromatid cohesion in HeLa cells 3 , 5 ; however, the effects of mutant kinastrin protein in normal primary cells have not been described. To evaluate whether Ser24Phe kinastrin is functionally relevant in this context, we expressed wild-type or Ser24Phe kinastrin in primary human keratinocytes ( Fig. 2a and Supplementary Fig. 5 ) and assessed chromosome segregation during mitosis. Expression of mutant kinastrin disrupted sister chromatid cohesion, as demonstrated by a subset of cells containing unpaired chromatids in normal cells as well as SCC-13 cells ( P = 0.0002) ( Fig. 2b and Supplementary Fig. 6 ). Kinastrin proteins corresponding to four additional cancer-associated KNSTRN mutations (encoding p.Arg11Lys, p.Pro26Ser, p.Pro28Ser and p.Ala40Glu substitutions), including those present in melanoma, similarly disrupted chromosome segregation ( Supplementary Figs. 4b and 7 ). These functional data support a role for cancer-associated KNSTRN mutations in controlling chromosomal stability in normal as well as cancer cells. Figure 2: Ser24Phe kinastrin promotes aneuploidy and enhances tumorigenesis in vivo . ( a ) Disrupted sister chromatid cohesion in early-passage, primary human keratinocytes pooled from multiple donors and transduced to express wild-type or Ser24Phe kinastrin. Arrowheads mark unpaired chromatids. Scale bars, 5 μm. ( b ) Quantification of unpaired chromatids ( n = 2 biological replicates). *P = 0.0002. ( c ) Percentage of the genome affected by chromosomal gains and losses in SCC tissues with either wild-type or Ser24Phe mutant kinastrin. Data shown represent means ± s.d.; n = 5 independent primary tumors per group. * P = 0.007. CNVs, copy number variants. ( d ) Weight of mouse xenograft tumors 23 d after injection. 
Primary human keratinocytes were transduced to express human Cdk4 and oncogenic Ras as well as LacZ (CTR), wild-type kinastrin or Ser24Phe kinastrin and were injected subcutaneously into NOD SCID mice. Data shown represent means ± s.d.; n = 4 tumors per group. * P = 0.03, ** P = 0.04; NS, not significant. ( e ) Ki-67 immunohistochemistry of the tumors from d . Representative fields are shown. Scale bars, 30 μm. ( f ) Quantification of the mitotic index for the tumors from d . Two high-power fields each containing an average of 408 cells were quantified per tumor ( n = 4 tumors per group). Error bars, s.d. * P = 0.019, ** P = 0.002; NS, not significant. Full size image We next performed whole-genome copy number analysis on five primary SCCs with the KNSTRN mutation encoding p.Ser24Phe and on five histologically matched SCCs with wild-type KNSTRN to determine whether the observed perturbations in sister chromatid cohesion correlated with tumor aneuploidy. Affymetrix OncoScan arrays were used to interrogate genomic DNA and identify chromosomal gains and losses in each sample ( Supplementary Table 6 ). Whereas both groups of tumors were aneuploid, SCCs with the KNSTRN mutation encoding p.Ser24Phe were significantly more so ( P = 0.007), with a greater percentage of their genomes affected by copy number aberrations ( Fig. 2c ). We further observed that Ser24Phe kinastrin was capable of enhancing aneuploidy in TP53 -depleted primary human keratinocytes in the presence of the aneugen paclitaxel ( Supplementary Fig. 8 ). Neither wild-type nor mutant kinastrin significantly altered cell growth or cell cycle kinetics in two-dimensional culture ( Supplementary Fig. 9 ). However, mutant kinastrin selectively enhanced tumorigenesis in a mouse model of human Ras-driven SCC, resulting in a threefold increase in tumor weight in comparison to control tumors driven by Ras and Cdk4, along with an associated 53.7% increase in the mitotic index ( Fig. 2d–f ). Ser24Phe kinastrin thus disrupts sister chromatid cohesion, is associated with increased tumor aneuploidy and augments oncogene-driven tumor growth in vivo . Here we identify what are, to our knowledge, the first cancer-associated missense mutations in KNSTRN with dominant, protumorigenic consequences. The clustered distribution of KNSTRN mutations in skin cancer, their annotation as heterozygous events by allele frequency and the ability of mutant KNSTRN to enhance tumorigenesis all suggest that KNSTRN might be a previously unrecognized oncogene in human cancer. This possibility is consistent with results from a recently developed modeling algorithm predicting that KNSTRN is 14.3 times more likely to be an oncogene than a tumor suppressor 6 . Although our whole-genome copy number analysis did not detect deletions in KNSTRN , we do note that the gene appears to be lost in a subset of human cancers. Interrogation of GISTIC analyses performed on the TCGA Pan-Cancer data set 7 showed that KNSTRN was not located within a focal peak region of deletion in the aggregated collection of 4,934 primary tumors representing 11 cancer types. Moreover, closer examination of regions annotated as having KNSTRN loss in the Catalogue of Somatic Mutations in Cancer (COSMIC) and TCGA databases showed that nearly all (>99%) were part of a larger locus that included genes with established tumor-suppressive roles, making it difficult to specifically attribute tumorigenic activity to KNSTRN loss in these contexts. 
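For readers who want to reproduce a comparison like that in Fig. 2c, the sketch below computes the percentage of the genome affected by copy number aberrations from a segment table and compares two groups with a two-tailed Welch's t-test, the test named in the Methods; the genome length constant, segment records and per-tumour values are invented for illustration and are not the study's data.

```python
from scipy import stats

GENOME_BP = 3_095_677_412  # approximate hg19 genome length, used as denominator

def percent_genome_altered(segments):
    """segments: iterable of (chrom, start, end, call), call in {'gain','loss','neutral'}."""
    altered = sum(end - start for _, start, end, call in segments if call != "neutral")
    return 100.0 * altered / GENOME_BP

demo = [("chr3", 0, 90_000_000, "gain"), ("chr9", 20_000_000, 41_000_000, "loss"),
        ("chr1", 0, 249_000_000, "neutral")]
print(f"{percent_genome_altered(demo):.1f}% of genome altered")

# Hypothetical per-tumour percentages (n = 5 per group), compared as in Fig. 2c.
mut = [38.2, 41.5, 35.9, 44.0, 39.7]  # KNSTRN p.Ser24Phe tumours (invented)
wt = [21.4, 25.0, 19.8, 27.6, 23.1]   # KNSTRN wild-type tumours (invented)
t, p = stats.ttest_ind(mut, wt, equal_var=False)  # Welch's correction
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")
```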
We further note that, although several SCCs were found to contain more than one KNSTRN mutation, this observation might relate to intratumoral heterogeneity. Consistent with this idea, we also observed multiple independent PIK3CA and BRAF mutations within the same tumor in a subset of the SCCs we analyzed. Our data support a model wherein mutant KNSTRN disrupts the chromatid cohesion required for faithful cellular replication, driving cells toward aneuploidy and culminating in tumor development. A systematic search of all sequenced genes for mutations that might co-occur or be mutually exclusive with KNSTRN mutations did not identify any statistically significant pairings to provide initial mechanistic insight, perhaps owing to the limited scope of this gene set ( Supplementary Table 7 ). Detection of KNSTRN mutation encoding p.Ser24Phe in actinic keratoses, which are known to be aneuploid 8 and frequently exhibit TP53 mutations 9 that may permit escape from cell cycle arrest and/or apoptosis, is consistent with this model of tumorigenesis and demonstrates that the UV signature–associated hotspot mutation encoding p.Ser24Phe occurs early during the progression of skin cancer precursors to frank carcinoma. The presence of KNSTRN mutation encoding p.Ser24Phe at comparable frequencies in actinic keratoses and SCCs is notably reminiscent of other early events in SCC tumorigenesis, such as mutational inactivation of TP53 (ref. 10 ). Clinically, our findings imply that tumors with KNSTRN mutation encoding p.Ser24Phe or similar dominant mutations might be more prone to aneuploidy, and the presence of this mutation might therefore predict aggressive tumor behavior with potential implications for disease-specific survival. Further exploration of how cancer-specific mutations in KNSTRN contribute to tumor development seems to be warranted. Methods Tumor tissues. Cutaneous SCCs and case-matched normal adjacent skin samples as well as actinic keratoses were collected under a protocol approved by the Institutional Review Board at Stanford University Medical Center. Individuals donating fresh surgical tissue provided informed consent. The archived specimens used fall under exemption 4. All diagnoses were verified by histological review. Samples with heavy neutrophilic infiltrate or widespread necrosis were excluded. Each SCC used for allelic discrimination contained ≥80% tumor tissue (US Biomax). Genomic DNA was isolated from all specimens using the DNeasy Blood and Tissue kit (Qiagen). A sample size of 100 was selected to approximate the number of cases needed to adequately represent this malignancy in the absence of prespecified mutation frequencies. Cells and cell lines. Primary human keratinocytes were isolated from fresh surgical specimens and grown in a 1:1 mixture of KSF-M (Gibco) and Medium 154 for keratinocytes (Gibco), supplemented with epidermal growth factor (EGF) and bovine pituitary extract (BPE). Keratinocyte differentiation was induced in vitro by introducing 1.2 mM calcium to the medium and then growing the cells at full confluence for up to 5 d. The human SCC cell lines SCC-12B.2 and SCC-13 (a generous gift from J.G. Rheinwald, Dana-Farber/Harvard Cancer Center) were cultured in DMEM (Gibco) supplemented with 20% bovine calf serum and 0.4 μg/ml hydrocortisone and KSF-M supplemented with 25 μg/ml BPE, 0.2 ng/ml EGF and 0.3 mM CaCl 2 , respectively. 
The human SCC cell lines A431, CAL-27 and SCC-25 were obtained from the American Type Culture Collection and grown in DMEM supplemented with 10% FBS. All cells were grown at 37 °C in a humidified chamber with 5% CO 2 . All cell lines were negative for mycoplasma with MycoAlert (Lonza) immediately before use. Library preparation and sequencing. Sequencing libraries were prepared with the Ovation Ultralow kit (NuGEN) using 50–100 ng of genomic DNA as input. Libraries were barcoded and pooled, and exon enrichment was then performed using a custom-designed capture measuring 1.4 Mb (SeqCap EZ Choice, NimbleGen). Enriched libraries were sequenced with the Illumina HiSeq platform with 101-bp paired-end reads. Selection of targeted sequencing genes. To prioritize candidates for targeted exome sequencing, genes containing somatic mutations by whole-exome sequencing were first filtered for expression in primary human keratinocytes 11 . Genes with somatic mutations distinct from established SNP positions (dbSNP Build 137), occurring in COSMIC cancer census genes and for which variants were predicted to be damaging 12 were assigned a higher priority, as were genes observed to be mutated recurrently. Genes not known to be causally implicated in cancer that were mutated in ≥2 of the 12 non-SCC exomes sequenced in parallel were removed from further analysis to minimize the likelihood of studying a non-pathogenic SNP or a sequencing artifact. SNV analysis. Paired-end alignment was performed with the Burrows-Wheeler Aligner (BWA) 13 to the hg19 reference using default parameters. SNV calling was performed with the Genome Analysis Toolkit (GATK) 14 , VarScan 15 and SeqGene 16 . GATK was run following Best Practices v3 for exomes, using Indel Realignment, the Unified Genotyper, the Variant Quality Score Recalibrator and Variant Filtration as recommended 17 . Quality scores of 50 were required for a call, whereas a quality of 10 was accepted for emitting. Recalibration was performed to the 1000 Genomes Project and HapMap 3.3 SNPs provided in the resource bundle. Resequencing analysis was recalibrated to the Mills and 1000 Genomes Project Gold Standard package with a maximum Gaussians parameter of 4. Variants were further filtered for clusters of greater than three SNVs in a 10-bp window. VarScan was run with default parameters. SeqGene was run with a threshold of 0.1 for SNV calling, and all other parameters were default. No minimum threshold was set for the resequencing analysis. Exome sequencing downstream analysis of GATK-called SNVs was performed on calls in a tranche of 99.0 or better. VarScan and SeqGene results were further filtered for false positives by removing any calls not supported by at least one read in each direction in the exome sequencing analysis. Unequal forward and reverse read distributions (with more than an 80%/20% split) were also removed from analysis. Low-coverage calls (<6 reads) were not held to this standard, but variant calls instead had to comprise at least 20% of the reads at that position. Resequencing downstream analysis was performed on all acquired mutation calls on the basis of the genotype designated by the SNV caller. It was further required that the SCC samples contain ≥2-fold enrichment of reads supporting the mutation in comparison to the control samples. For cell lines, a minimum variant allele frequency of 0.1 was required. Annotations to all mutation calls were performed with SeattleSeq 18 . Allelic discrimination. 
Probe sets and primers for wild-type KNSTRN and KNSTRN encoding p.Ser24Phe were custom designed (Custom TaqMan SNP Genotyping Assay Mix). Reactions were performed in triplicate with TaqMan Universal PCR Master Mix (Applied Biosystems). Immunofluorescence and immunohistochemistry. Site-directed mutagenesis was performed on isoform 3 ( NM_001142762 ) of KNSTRN , which was then cloned into the LZRS retroviral backbone for transduction into primary keratinocytes. For transduction into SCC-13 cells, KNSTRN was cloned into the pLEX lentiviral backbone with a sequence encoding a Flag-HA-poly(His) tag at the N terminus. Protein blotting was performed to confirm overexpression of kinastrin (Abcam, ab122769; 1:1,000 dilution), Cdk4 (clone h-303, Santa Cruz Biotechnology, sc-749; 1:1,000 dilution) and Ras (clone c-20, Santa Cruz Biotechnology, sc-520; 1:1,000 dilution), and equivalent loading was verified with antibody to β-actin (Sigma). Kinastrin staining (1:50 dilution; Abcam, ab122769) was performed on a skin cancer and normal tissue microarray (Biomax). Ki-67 staining (1:200 dilution; Dako, M7240) was performed on mouse xenograft tumors. Chromosome spreads. Primary human keratinocytes transduced to express LacZ, wild-type kinastrin or Ser24Phe kinastrin were grown in medium containing 100 ng/ml nocodazole (Sigma) for 12 h. Mitotic cells were selected by brief trypsinization followed by shaking off. Cells were resuspended in 75 mM KCl and cytospun onto glass coverslips with a cytocentrifuge (Shandon Cytospin 4). The resulting spreads were washed with KCM buffer (120 mM KCl, 20 mM NaCl, 10 mM Tris-HCl (pH 8.0), 0.5 mM EDTA and 0.1% Triton X-100), permeabilized in KCM buffer with 0.5% Triton X-100, stained with antibodies to CenpA and CenpB (a gift from the Straight laboratory at Stanford University; 1:1,000 and 1:500 dilution, respectively), fixed in 3.7% formaldehyde in KCM and stained with Hoechst (10 μg/ml). The investigator was blinded to the identities of the experimental groups during centromere quantification. Imaging. Images were collected in a z stack using an Olympus IX70 microscope and Softworx software (Applied Precision). The final images shown are maximum-intensity projections of deconvolved z stacks. Copy number analysis. Genomic DNA purified from ten fresh primary SCCs was interrogated using Affymetrix OncoScan arrays (v3) according to the manufacturer's instructions. Chromosomal gains and losses were identified using Nexus Express software. Flow cytometry. Primary human keratinocytes transduced to express dominant-negative p53 (Arg248Trp) as well as LacZ, wild-type kinastrin or Ser24Phe kinastrin were grown in medium containing 5 nM Taxol (Sigma) for 68 h, fixed in ethanol for at least 2 h at room temperature and stained for 30 min at room temperature (1% Triton X-100, 20 μg/ml propidium iodide, 0.1% BSA and 10 mg/ml RNase A). At least 100,000 cells were collected per event using a Scanford or BD FACSCalibur flow cytometer (BD Biosciences), and data were analyzed with FlowJo software (Tree Star). Doublets and debris were removed by forward scatter (FSC) versus side scatter (SSC) gating followed by propidium iodide width gating. FlowJo software was used to generate cell cycle kinetics using the Watson Pragmatic algorithm and to place G0/G1, S and G2 gates. Polyploid events were gated on mean fluorescence intensities greater than those for the G2 gate. Cell growth assays. 
We plated 2 × 10 3 cells per well in 24-well plates and analyzed them at various time points after seeding using the CellTiter-Blue Cell Viability Assay (Promega). Reactions were performed in triplicate. Statistical analysis. Two-tailed, unpaired t tests with Welch's correction were performed to compare mean values between experimental groups using GraphPad Prism software. Fisher's exact test was used to evaluate the significance of mutational co-occurrence. The resulting P values were corrected for multiple-hypothesis testing using the Benjamini-Hochberg method, and a threshold for calling significantly co-occurring events was determined on the basis of an empirical q value (0.0142) derived from the false discovery rate inherent in multiple-testing correction. Mouse xenografts. All mouse husbandry and experimental procedures were performed in compliance with policies approved by the Stanford University Administrative Panel of Laboratory Animal Care. Transduced primary human keratinocytes (1 × 10 6 ) were resuspended in 100 μl of PBS and 100 μl of Matrigel (BD Biosciences) and injected with a 27-gauge needle into the subcutaneous space of 6-week-old female NOD SCID mice (Charles River). Mice were randomly assigned to experimental groups. NCBI guidelines were consulted when calculating sample size for continuous variables to establish significance for α = 0.05. As tumor weight is an objective measurement, the investigator was not blinded to experimental group during this assessment. Accession codes. Sequence and array data have been deposited in dbGaP under accession phs000785.v1.p1 . Accession codes Accessions NCBI Reference Sequence NM_001142762
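To recap the resequencing filters listed under "SNV analysis" above, here is a schematic Python version; the Call record and its fields are hypothetical, and the thresholds simply restate the rules given in the text (at least one supporting read per strand with no worse than an 80%/20% strand split at ≥6 reads of coverage, a ≥20% variant fraction below that coverage, ≥2-fold enrichment of supporting reads over the matched control, and a 0.1 minimum variant allele frequency for cell lines).

```python
from dataclasses import dataclass

@dataclass
class Call:
    fwd_reads: int       # variant-supporting reads, forward strand
    rev_reads: int       # variant-supporting reads, reverse strand
    depth: int           # total reads at the position
    normal_support: int  # variant-supporting reads in the matched normal
    is_cell_line: bool = False

def passes_filters(c: Call) -> bool:
    support = c.fwd_reads + c.rev_reads
    if support == 0 or c.depth == 0:
        return False
    if c.depth < 6:
        # Low-coverage calls are exempt from the strand rules, but the variant
        # must comprise at least 20% of the reads at that position.
        if support / c.depth < 0.20:
            return False
    else:
        # At least one supporting read in each direction...
        if c.fwd_reads < 1 or c.rev_reads < 1:
            return False
        # ...and no worse than an 80%/20% forward/reverse split.
        if max(c.fwd_reads, c.rev_reads) / support > 0.80:
            return False
    # >= 2-fold enrichment of supporting reads versus the matched control.
    if c.normal_support > 0 and support / c.normal_support < 2.0:
        return False
    # Cell lines: minimum variant allele frequency of 0.1.
    if c.is_cell_line and support / c.depth < 0.1:
        return False
    return True

print(passes_filters(Call(fwd_reads=8, rev_reads=5, depth=60, normal_support=1)))
```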
A genetic mutation caused by ultraviolet light is likely the driving force behind millions of human skin cancers, according to researchers at the Stanford University School of Medicine. The mutation occurs in a gene called KNSTRN, which is involved in helping cells divide their DNA equally during cell division. Genes that cause cancer when mutated are known as oncogenes. Although KNSTRN hasn't been previously implicated as a cause of human cancers, the research suggests it may be one of the most commonly mutated oncogenes in the world. "This previously unknown oncogene is activated by sunlight and drives the development of cutaneous squamous cell carcinomas," said Paul Khavari, MD, PhD, the Carl J. Herzog Professor in Dermatology in the School of Medicine and chair of the Department of Dermatology. "Our research shows that skin cancers arise differently from other cancers, and that a single mutation can cause genomic catastrophe." Cutaneous squamous cell carcinoma is the second most common cancer in humans. More than 1 million new cases are diagnosed globally each year. The researchers found that a particular region of KNSTRN is mutated in about 20 percent of cutaneous squamous cell carcinomas and in about 5 percent of melanomas. A paper describing the research will be published online Sept. 7 in Nature Genetics. Khavari, who is also a member of the Stanford Cancer Institute and chief of the dermatology service at the Veterans Affairs Palo Alto Health Care System, is the senior author of the paper. Postdoctoral scholar Carolyn Lee, MD, PhD, is the lead author. Lee and Khavari made the discovery while investigating the genetic causes of cutaneous squamous cell carcinoma. They compared the DNA sequences of genes from the tumor cells with those of normal skin and looked for mutations that occurred only in the tumors. They found 336 candidate genes for further study, including some familiar culprits. The top two most commonly mutated genes were CDKN2A and TP53, which were already known to be associated with squamous cell carcinoma. The third most commonly mutated gene, KNSTRN, was a surprise. It encodes a protein that helps to form the kinetochore—a structure that serves as a kind of handle used to pull pairs of newly replicated chromosomes to either end of the cell during cell division. Sequestering the DNA at either end of the cell allows the cell to split along the middle to form two daughter cells, each with the proper complement of chromosomes. If the chromosomes don't separate correctly, the daughter cells will have abnormal amounts of DNA. These cells with extra or missing chromosomes are known as aneuploid, and they are often severely dysfunctional. They tend to misread cellular cues and to behave erratically. Aneuploidy is a critical early step toward the development of many types of cancer. The mutation in the KNSTRN gene was caused by the replacement of a single nucleotide, called a cytosine, with another, called a thymine, within a specific, short stretch of DNA. The swap is indicative of a cell's attempt to repair damage from high-energy ultraviolet rays, such as those found in sunlight. "Mutations at this UV hotspot are not found in any of the other cancers we investigated," said Khavari. "They occur only in skin cancers." 
The researchers found the UV-induced KNSTRN mutation in about 20 percent of actinic keratoses—a premalignant skin condition that often progresses to squamous cell carcinoma—but never in 122 samples of normal skin, indicating the mutation is likely to be an early event in the development of squamous cell carcinomas. Furthermore, overexpression of mutant KNSTRN in laboratory-grown human skin cells disrupted their ability to segregate their DNA during cell division and enhanced the growth of cancer cells in a mouse model of squamous cell carcinoma. Finally, Lee compared five patient-derived squamous cell carcinomas that had the KNSTRN mutation with five samples that did not have the mutation. Although both sets of cells were aneuploid, those with the mutation had the most severely abnormal genomes. The identification of a new oncogene will allow researchers to better understand how these types of skin cancers develop. It may also give them clues about how to develop new therapies for the disease. In this case, it also neatly connects the dots between sun exposure and skin cancer. "Essentially, one ultraviolet-mediated mutation in this region promotes aneuploidy and subsequent tumorigenesis," said Khavari. "It is critical to protect the skin from the sun."
10.1038/ng.3091
Nano
How to enlarge 2-D materials as single crystals
Epitaxial growth of a 100-square-centimetre single-crystal hexagonal boron nitride monolayer on copper. Nature. DOI: 10.1038/s41586-019-1226-z Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1226-z
https://phys.org/news/2019-05-enlarge-d-materials-crystals.html
Abstract The development of two-dimensional (2D) materials has opened up possibilities for their application in electronics, optoelectronics and photovoltaics, because they can provide devices with smaller size, higher speed and additional functionalities compared with conventional silicon-based devices 1 . The ability to grow large, high-quality single crystals for 2D components—that is, conductors, semiconductors and insulators—is essential for the industrial application of 2D devices 2 , 3 , 4 . Atom-layered hexagonal boron nitride (hBN), with its excellent stability, flat surface and large bandgap, has been reported to be the best 2D insulator 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 . However, the size of 2D hBN single crystals is typically limited to less than one millimetre 13 , 14 , 15 , 16 , 17 , 18 , mainly because of difficulties in the growth of such crystals; these include excessive nucleation, which precludes growth from a single nucleus to large single crystals, and the threefold symmetry of the hBN lattice, which leads to antiparallel domains and twin boundaries on most substrates 19 . Here we report the epitaxial growth of a 100-square-centimetre single-crystal hBN monolayer on a low-symmetry Cu (110) vicinal surface, obtained by annealing an industrial copper foil. Structural characterizations and theoretical calculations indicate that epitaxial growth was achieved by the coupling of Cu <211> step edges with hBN zigzag edges, which breaks the equivalence of antiparallel hBN domains, enabling unidirectional domain alignment better than 99 per cent. The growth kinetics, unidirectional alignment and seamless stitching of the hBN domains are unambiguously demonstrated using centimetre- to atomic-scale characterization techniques. Our findings are expected to facilitate the wide application of 2D devices and lead to the epitaxial growth of broad non-centrosymmetric 2D materials, such as various transition-metal dichalcogenides 20 , 21 , 22 , 23 , to produce large single crystals. Main Recently, a macro-sized (with a length of up to 0.5 m) single-crystal Cu (111) foil and the epitaxial growth of single-crystal graphene on that substrate have been demonstrated, paving the way for the epitaxial growth of single crystals of various 2D materials 2 , 3 . However, the Cu (111) surface is not an appropriate template for the growth of single-crystal hBN owing to the nucleation of antiparallel hBN domains 19 . This is because the epitaxial growth of unidirectionally aligned domains requires the 2D material to adopt the symmetry of the substrate precisely to avoid changes in lattice orientation during a symmetric operation of the substrate. In contrast to graphene, most 2D materials, including hBN, have a lower symmetry (C 3 v for hBN) and therefore are not compatible with the C 6 v symmetry of the top-layer atoms of the Cu (111) surface. Thus, the ideal substrate for hBN growth must have C 3 v , C 3 , σ v or C 1 symmetry. Because the Cu (111) surface cannot be used to grow hBN single crystals, we have to choose a substrate with only σ v or C 1 symmetry, that is, without a regular low-index face-centred cubic (fcc) surface. To address this challenge, in this study we performed the successful synthesis of a Cu (110) vicinal surface, on which the presence of metal steps along the <211> direction led to a C 1 symmetry. This enabled the coupling of Cu <211> step edges with hBN zigzag edges, resulting in the unidirectional alignment of millions of hBN nuclei over a large 10 × 10 cm 2 area. 
In our experiment, 10 × 10 cm 2 single-crystal Cu foils of the Cu (110) vicinal surface were prepared by annealing industrial Cu foils using a designed high-temperature (1,060 °C) pre-treatment process before long-time standard annealing treatment (details in Methods and Extended Data Fig. 1a–g ). Such a large-area Cu (110) surface can be easily observed by optical imaging after mild oxidization in air (Fig. 1a ), because different surfaces form Cu 2 O with different oxidization rates and show characteristic colours 24 . The sharp Cu (220) peak in the X-ray diffraction (XRD) pattern in the 2 θ scan (Fig. 1b ) and the presence of only two peaks at an interval of exactly 180° in the ϕ scan (Fig. 1c ; fixed along Cu <100>) unambiguously confirmed that the single-crystal foil had no in-plane rotation (2 θ is the angle between the incident X-rays and the detector and ϕ is the in-plane rotation angle of the sample). Furthermore, at different positions (as marked in Extended Data Fig. 2a ), electron backscatter diffraction (EBSD) maps (Fig. 1d , Extended Data Fig. 2b, c ), low-energy electron diffraction (LEED) patterns (Fig. 1e , Extended Data Fig. 2d ) and scanning transmission electron microscopy (STEM) images (Fig. 1f ) revealed the single-crystal nature of the Cu (110) substrate. Here, we note that the exact surface index and the deviation of the tilt angle from ideal Cu (110) is difficult to observe, which means that the tilt angle is smaller than the experimental accuracy of about 1°. We have chosen, however, to call the substrate Cu (110) unless otherwise specified. Fig. 1: Characterization of single-crystal Cu (110) obtained by annealing an industrial Cu foil. a , Optical image of the as-annealed Cu foil after mild oxidation in air. Owing to their different oxidation rates, the Cu (110) and Cu (111) domains have a different thickness of Cu 2 O on their surfaces and are therefore different colours. Typical Cu (110) single crystals with areas of 10 × 10 cm 2 were obtained. b , c , XRD patterns obtained by a 2 θ scan of the Cu (110) foil ( b ) and a ϕ scan fixed along the Cu <100> direction ( c ), confirming the single-crystal nature of the Cu (110) foil, without in-plane rotation. d , Representative EBSD maps of as-annealed Cu (110) foils (upper panel, along the [001] direction; lower panel, along the [010] direction) measured at position 1 shown in Extended Data Fig. 2a . e , Representative LEED pattern of as-annealed Cu (110) foils measured at position 1 shown in Extended Data Fig. 2a . The purple solid and dashed circles correspond to the visible and invisible diffraction points (due to the extinction rule), respectively. f , Atomically resolved STEM image of as-annealed Cu (110) foils. Inset, fast Fourier transformation (FFT) pattern of the STEM image. The results of these characterization experiments prove that the as-annealed Cu foil is a single-crystal Cu (110) foil with an area of 10 × 10 cm 2 . Full size image Using the Cu foil single crystal as the substrate, 2D hBN was synthesized by a low-pressure chemical vapour deposition (CVD) method with ammonia borane (H 3 B-NH 3 ) as the feedstock (details in Methods and Extended Data Fig. 1h ). X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, ultraviolet-visible absorption spectroscopy (UV-Vis), atomic force microscopy (AFM) and STEM were used to confirm that the as-grown sample was a 2D hBN monolayer (Extended Data Fig. 3 ). 
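As a quick consistency check on the XRD assignment in Fig. 1b, Bragg's law gives the expected 2θ position of the Cu (220) reflection; the sketch assumes a standard θ–2θ scan with Cu Kα radiation, and the wavelength and copper lattice constant are textbook values rather than numbers from the paper.

```python
import math

LAMBDA_CU_KA = 1.5406  # Cu K-alpha1 wavelength (angstrom), literature value
A_CU = 3.615           # copper fcc lattice constant (angstrom), literature value

def two_theta(h, k, l, a=A_CU, lam=LAMBDA_CU_KA):
    """Bragg's law for a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2),
    lambda = 2 d sin(theta); returns 2*theta in degrees."""
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

print(f"Cu (220): 2-theta = {two_theta(2, 2, 0):.1f} deg")  # ~74.1 deg
```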
The SEM images showed a striking result: the hBN domains grown on the Cu (110) surface were all triangular, with their domains aligned unidirectionally on the Cu foil surface at the centimetre scale. The proportion of aligned domains was estimated to be about 99.5% (Fig. 2a , Extended Data Fig. 4a ). To confirm that the aligned domains had the same crystalline orientation, we performed LEED measurements. LEED patterns (Fig. 2b , Extended Data Fig. 4b ) measured at multiple positions (marked in Extended Data Fig. 2a ) confirmed that the crystalline lattice of the hBN domains were aligned in the same direction 14 . The unidirectional alignment of the hBN lattice was further proven by polarized second-harmonic generation (SHG) mapping 25 , where a dark boundary line was observed between hBN lattices of different domains when these were not aligned (Extended data Fig. 5a ). As shown in Fig. 2c , no boundary line was observed in the bulk area of coalescence of hBN domains, which also indicates a similar crystalline orientation for the two coalesced grains. Fig. 2: Unidirectional alignment and seamless stitching of hBN domains on Cu (110). a , SEM image of as-grown unidirectionally aligned hBN domains on the Cu (110) substrate (the arrow on the top right corner points to the downward growth direction of domains). b , Representative LEED pattern of as-grown hBN samples measured at position 1 shown in Extended Data Fig. 2a . Because of the triple symmetry, three diffraction points have higher intensity (solid orange circles) than the other three (dashed green circles). The black circle shows the original point in the reciprocal space. c , Polarized SHG mapping of two unidirectionally aligned hBN domains. The uniform colour without boundary lines demonstrates that the aligned hBN domains have the same lattice orientation. d , e , SEM images of as-grown hBN films after H 2 etching at 1,000 °C for 30 min. No boundaries are observed for the hBN film grown on Cu (110) ( d ), but obvious boundaries can be observed on Cu (111) ( e ). f , Low-magnification TEM image obtained at the concave corner in the area of confluence of aligned hBN domains on a monolayer single-crystal graphene (Gr) support. Shown are two partially merged hBN domains (BN-1 and BN-2) transferred onto the graphene films. The inset shows the same image at a lower magnification. g , Representative HRTEM image showing a uniform moiré pattern at the concave corner in the joint area between two unidirectionally aligned hBN domains. Inset, FFT patterns. The orange (blue) arrow points to the diffraction pattern of hBN (graphene). Full size image During the growth of a 2D material on a Cu surface, seamless stitching is expected in the coalescence area of two unidirectionally aligned grains because the perfect single-crystal lattice is always the most stable structure, as has been proven by the CVD growth of graphene 2 , 3 . To further confirm the seamless stitching of hBN domains, hydrogen (H 2 ) etching was employed to visualize the different possible boundaries on the macroscopic scale 3 , 4 . Figure 2d and Extended Data Fig. 5b show no etching line between unidirectionally aligned hBN domains; by contrast, for domains with different alignments, the etched boundary is clearly visualized (Fig. 2e , Extended Data Fig. 5c ). Ultraviolet-light oxidization was also carried out to expose possible grain boundaries, when present 2 , 3 . 
Similar results were obtained, leading us to the conclusion that the large-area hBN film was a single crystal (Extended Data Fig. 5d, e ). High-resolution transmission electron microscopy (HRTEM) was used to characterize the quality of the stitching line between domains on the atomic scale. The as-grown hBN samples were transferred onto specially constructed single-crystal graphene TEM grids to observe the moiré patterns in the hBN/graphene heterostructure. It is well known that even a small difference in the rotation angle can lead to a dramatic change in the moiré pattern. Consistent moiré patterns were collected at multiple places around the concave corner in the joint area of two unidirectionally aligned hBN domains (shown in Fig. 2f, g , Extended Data Fig. 6a–l ), whereas a clear difference in the moiré pattern and an obvious grain boundary were observed in the area of confluence of two misaligned hBN domains (Extended Data Fig. 6m–p ). The above verifications confirm that our unidirectionally aligned hBN domains can be seamlessly stitched into an intact piece of single-crystal film. Further angle-resolved photoemission spectroscopy (ARPES) spectra also revealed that the hBN film was high-quality single crystal (Extended Data Fig. 7 ). We further used environmental scanning electron microscopy (SEM) to study the growth dynamics in situ and to understand the kinetics of the epitaxial growth of unidirectional hBN domains on Cu (110). Our in situ observations suggested that our Cu (110) single-crystal substrate is ‘vicinal’ because of the existence of steps and that the step edges play a crucial role in the unidirectional alignment of hBN domains. SEM images (Fig. 3a, b ) confirmed that each hBN single crystal was nucleated near a step edge, with one edge of the single crystal tightly attached to the upward side of the step edge during the growth process, and the single crystal propagated rapidly on the plateau between neighbouring step edges. Once one of its edges reached a neighbouring step edge in the downward direction, the propagation of the edge was arrested temporarily 26 , 27 . Our in situ observation thus clearly confirmed that the unidirectional alignment of the hBN domains was caused by step-edge-mediated nucleation, and the truncated shape of the hBN domains was a consequence of the high energy barriers of a hBN edge passing over a step edge of the metal surface in the upward and the downward direction. Because of the presence of parallel (bunched) step edges (from the uniform surface tilt angle) on the single-crystal vicinal Cu (110) surface (see ex situ AFM images in Extended Data Fig. 8a ), this growth kinetics led to unidirectionally aligned hBN domains (Fig. 3c , Extended Data Fig. 8b–g ). The growth process is schematically illustrated in Fig. 3d . Fig. 3: In situ observation of the unidirectional growth of hBN domains. a , In situ SEM images of a hBN domain, probably formed from a nucleation centre at the surface steps, at different growth times. b , Superposition of images showing the areal growth of the domain in a . c , Shape evolution of several hBN domains, reproduced as a colour-coded superposition of outlines extracted from images recorded during 700 s. d , Schematic diagrams highlighting the unidirectional growth of hBN domains and the anisotropic growth speed on a Cu surface with steps. The arrows in a – c point to the downward growth direction of domains. Full size image The AFM phase image in Fig. 
4a shows that the parallel (bunched) step edges of the Cu (110) surface are also parallel to the longest edge of the truncated triangular hBN single crystal, and the scanning tunnelling microscopic (STM) image in Fig. 4b clearly reveals that the longest hBN edge is a zigzag edge. By further comparing the reciprocal lattice information of hBN and Cu (110) (Fig. 4c, d ), we find that all the Cu step edges are along the <211> direction (schematically shown in Fig. 4e and Extended Data Fig. 9 ). To confirm this edge-coupling-guided growth theoretically, we carried out ab initio calculations based on the experimentally obtained structural information. The formation energy for hBN growing on Cu (110) with steps along Cu <211> has a single minimum-energy state at γ = 0° (shown in Fig. 4f ), where γ is the angle between the Cu <211> direction and the zigzag direction of the hBN nucleus (Fig. 4e ). In practice, step edges might not be perfectly straight. A step edge slightly deviating from Cu <211> can be perceived as many short segments of the Cu <211> step edge connected by atomic-sized kinks. As reported in a recent study, the complementary counter-kinks of the hBN edge are likely to appear at (and compensate for the kinks of) the metal step edges, and thus ensure the unidirectionally aligned growth of hBN domains 28 . In addition, γ = 0° is one of the most energetically preferred orientations between the hBN lattice and the Cu (110) facet, which indicates that interfacial coupling may also contribute to the epitaxy. Considering these results, the edge-coupling-guided growth mechanism has wide applicability and high repeatability. For example, unidirectionally aligned growth of hBN domains could be also observed on the Cu (410) facet (Extended Data Fig. 10a ); however, randomly aligned hBN domains were frequently observed on the Cu (100) facet (Extended Data Fig. 10b ), whereas anti-parallel domains were often obtained on the Cu (111) facet (Extended Data Fig. 10c ). Fig. 4: Mechanism of edge-coupling-guided epitaxial growth of hBN domains on Cu (110). a , AFM phase image of a hBN domain on Cu (110), showing that one edge of the hBN domain is parallel to the surface steps. b , Atomic-resolution STM image of a hBN domain covering the Cu (110) plateaus both upwards and downwards, showing the zigzag edge of hBN aligned parallel to the atomic step (blue line) on the Cu surface. Here, the white curve shows the height profile of the Cu step; the height of the plateau is 2 Å. c , d , LEED patterns of as-grown hBN domains and the underlying Cu (110) surface measured at position 1 in Extended Data Fig. 2a . The blue line in c corresponds to the zigzag direction of the hBN lattice in real space and the red line in d to the <211> direction of Cu in real space. The purple solid and dashed circles in d correspond to the visible and invisible diffraction points (due to the extinction rule), respectively. e , Schematic diagrams of the configuration of the hBN lattice and the atomic step on Cu (110), obtained from the STM and LEED data in b – d . γ is the angle between Cu <211> and the zigzag direction in the hBN lattice; γ = 0° for the atomic model in e . f , First-principles DFT calculations of the formation energies of various hBN edges attached to a Cu <211> step on the Cu (110) substrate. The results show that the system is energetically favoured for γ = 0° owing to the coupling between the Cu <211> step edge and the hBN zigzag edge (details in Methods). 
In summary, single-crystal hBN films with areas of 10 × 10 cm 2 (three orders of magnitude larger than those grown previously) were synthesized on a large-area single-crystal Cu (110) foil obtained by annealing regular, industrially produced Cu foils. The easy preparation of the Cu substrate implies the immediate availability and wide applicability of single-crystal 2D hBN films. Moreover, the observed edge-coupling-guided growth mechanism on a vicinal surface is expected to be applicable to all non-centrosymmetric 2D materials, which will enable the growth of large-area single crystals of such materials and facilitate their use in different applications in the near future. Note added in proof: After the submission of our manuscript, two studies related to the growth of single-crystal hBN were reported by other researchers 28 , 29 . Methods Annealing of a 10 × 10 cm 2 single-crystal Cu (110) foil A commercially available polycrystalline Cu foil (25 μm thick, 99.8%; Sichuan Oriental Stars Trading Co. Ltd) was loaded into a tube furnace with a chamber of diameter 23 cm and length 50 cm (Kejing Co. Ltd). The foil was first annealed at 1,060 °C (where the Cu surface started to melt) for 2–10 min under a mixed-gas flow (Ar, 500 sccm; H 2 , 50 sccm; sccm, standard cubic centimetres per minute) at atmospheric pressure, and then the temperature was quickly decreased to 1,040 °C and the Cu foil was annealed at this temperature for 3 h. For subsequent annealing runs, a small piece of as-annealed single-crystal Cu (110) foil with steps along Cu <211> was placed on the surface of a raw polycrystalline Cu foil as an artificial seed, which greatly enhanced the yield of single-crystal Cu (110) foil with steps along Cu <211>. To identify the surface index, the as-annealed Cu foil was heated in a box furnace (Tianjin Kaiheng Co. Ltd, custom-designed) at 120 °C for 1 h. Growth of 10 × 10 cm 2 single-crystal 2D hBN films The precursor ammonia borane (97%; Aldrich) was loaded into an Al 2 O 3 crucible and placed at a distance of 1 m from the single-crystal Cu (110) foil substrate. First, the substrate was heated to the growth temperature (1,035 °C) under a mixed-gas flow (Ar, 500 sccm; H 2 , 50 sccm) at atmospheric pressure. The CVD system was then switched to low pressure (about 200 Pa) with Ar (5 sccm) and H 2 (45 sccm), while the precursor was heated to 65 °C within 10 min using a heating band. To visualize the individual hBN domains, the growth time was fixed at 1 h; a continuous hBN film was obtained after 3 h of growth. For imaging grain boundaries, when present, the as-grown hBN film was etched at 1,000 °C for 30 min under H 2 and Ar (Ar, 250 sccm; H 2 , 250 sccm) at atmospheric pressure. After growth or etching, the whole CVD system was cooled rapidly to room temperature. Transfer of hBN films The as-grown hBN domains were transferred onto 90-nm-thick SiO 2 /Si wafers, holey-carbon-film TEM grids (Zhongjingkeyi GIG-1213-3C) and homemade monolayer graphene TEM grids using the polymethyl-methacrylate-based transfer technique. The graphene TEM grids were prepared by transferring large-area monolayer single-crystal graphene onto commercial holey-carbon-film TEM grids (Zhongjingkeyi GIG-2010-3C). In situ CVD growth observed by environmental SEM In situ CVD growth experiments were performed inside the chamber of a modified environmental SEM system (FEI Quantum 200) with a custom infrared laser heating stage and with gas supplied through a leak valve.
The as-grown single-crystal Cu (110) foil was cut into small pieces (5 mm × 5 mm) and annealed at 1,000 °C under H 2 flow (10 sccm) at 25 Pa for 1 h. CVD growth was then carried out at 850–950 °C. Images were recorded by an Everhart−Thornley detector or a large-field detector during CVD growth. Computational details The formation energies of the various hBN edges that are attached to the <211> step edge of a Cu (110) surface were calculated using dispersion‐corrected density functional theory (DFT) calculations as implemented in the Vienna Ab initio Simulation Package. Exchange-correlation functions were treated using the generalized gradient approximation, and the interaction between valence electrons and ion cores was calculated by the projected augmented wave method. A force lower than 0.01 eV Å −1 on each atom with an energy convergence of 10 −4 eV was used as the criterion for structural relaxation. The formation energy of an edge attached to a <211> step edge of the Cu (110) vicinal surface is defined as ε edge/Cu = ε pri – ε B = ε pri – ( E Cu + E hBN – E T – E vdW )/ L , where ε pri and ε B are the edge energy of the pristine hBN and the binding energy of a hBN edge attached to a <211> step edge, respectively; E Cu , E hBN , E T and E vdW correspond to the calculated energies of the Cu slab model, the hBN ribbon model, the hBN ribbon attached to the Cu slab and the van der Waals interaction between the hBN ribbon and the Cu slab; and L is the length of the supercell of the calculation. In all DFT calculations, the vacuum spacing between neighbouring images was >12 Å to avoid interactions between periodic images. The edge energy of a pristine hBN edge is defined as ε pri = ( E 2 – E 1 – Δ N BN ε BN – Δ N N μ N )/[3( L 2 – L 1 )], where E 2 and E 1 are the energies of two hBN triangles with the same edge configurations but of different size. In this calculation, two triangles with edge lengths of about 1.5 nm and 2 nm were used. ε BN is the energy of a B–N pair in the hBN bulk. Δ N BN = N BN2 – N BN1 , where N BN1 and N BN2 are the numbers of B–N pairs of the two hBN triangles; Δ N N is defined analogously as the difference in the number of excess N atoms. Using N 2 as a reference, μ N = −8.3 eV is the calculated chemical potential for a nitrogen atom. The van der Waals interaction between the hBN and the Cu (110) surface was calculated by placing a hBN bulk on a Cu (110) slab, with the N zigzag direction along the <211> direction of the substrate. The calculated van der Waals interaction is −0.17 eV per B–N pair. In the calculations of hBN ribbons attached to a Cu surface, the four-atom-thick Cu slab model was used to represent a vicinal Cu (110) surface or a Cu (110) surface with a <211> step, and the models of various hBN ribbons had a width of about 1.0 nm. A maximum strain of less than 3.5% was adopted when building the supercells to calculate the edge energy of hBN on the Cu surface. Characterization Optical measurements. Raman spectra were obtained with an alpha300R system (WITec, Germany) with a laser excitation wavelength of 532 nm and about 1 mW power. Optical images were obtained using an Olympus BX51 microscope.
SHG mapping was done using a customized system equipped with a piezo stage and controller (Physik Instrumente P-333.3CD and E-725), an ultrafast laser (Spectra Physics Inspire ultrafast OPO system; wavelength of 820 nm, pulse duration of 100 fs and repetition rate of 80 MHz) and a grating spectrograph (Princeton SP-2500i). XPS and UV-Vis spectral measurements were performed using an Axis Ultra Imaging X-ray photoelectron spectrometer and a Varian Cary 5000 UV-Vis-NIR spectrophotometer, respectively. ARPES spectra were measured in a setup comprising a SCIENTA DA30L analyser and a monochromatic helium lamp in He I (21.2 eV) mode. The size of the light beam was about 1 mm 2 , and the samples were cooled to about 10 K during measurements. EBSD, LEED, TEM, AFM and STM measurements. EBSD maps were obtained using a PHI 710 scanning Auger nanoprobe. LEED was performed using an Omicron LEED system in ultrahigh vacuum with a base pressure below 3 × 10 −7 Pa. STEM/HRTEM experiments were partly performed using FEI Titan Themis G2 300 and JEOL JEM ARM 300CF systems operated at 300 kV and 80 kV, respectively. An atomically resolved STEM image of monolayer hBN was collected using a Nion UltraSTEM 200 instrument operated at 60 kV. AFM images were acquired using a Bruker Dimensional ICON system in ambient atmosphere. STM experiments were performed with a combined nc-AFM/STM system (Createc, Germany) at 77 K with a base pressure below 7 × 10 −9 Pa. Data availability All related data generated or analysed during the current study are available from the corresponding authors on reasonable request.
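As a closing note on the computational details above: the two edge-energy definitions reduce to simple bookkeeping once the DFT total energies are in hand. The sketch below is ours, not part of the published workflow; every input stands in for the corresponding VASP total energy and is a placeholder, not a value from this work.

def pristine_edge_energy(E2, E1, dN_BN, dN_N, L2, L1, eps_BN, mu_N=-8.3):
    # eps_pri = (E2 - E1 - dN_BN*eps_BN - dN_N*mu_N) / (3*(L2 - L1)),
    # from two hBN triangles with identical edge configurations but
    # different sizes (edge lengths L1, L2 in Angstrom; energies in eV).
    # mu_N = -8.3 eV is the nitrogen chemical potential quoted above.
    return (E2 - E1 - dN_BN * eps_BN - dN_N * mu_N) / (3 * (L2 - L1))

def edge_formation_energy(eps_pri, E_Cu, E_hBN, E_T, E_vdW, L):
    # eps_edge/Cu = eps_pri - eps_B, where the binding energy per unit
    # length of an hBN ribbon edge attached to a Cu<211> step is
    # eps_B = (E_Cu + E_hBN - E_T - E_vdW) / L (L: supercell length).
    eps_B = (E_Cu + E_hBN - E_T - E_vdW) / L
    return eps_pri - eps_B

Scanning edge_formation_energy over ribbon orientations gamma would reproduce the kind of energy-versus-angle curve shown in Fig. 4f, with the minimum at gamma = 0 degrees.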
What makes something a crystal? A transparent and glittery gemstone? Not necessarily, in the microscopic world. When all of its atoms are arranged in accordance with specific mathematical rules, we call the material a single crystal. Just as the natural world has its own symmetries, such as snowflakes or honeycombs, the atomic world of crystals is governed by its own rules of structure and symmetry. This structure has a profound effect on a material's physical properties as well. In particular, single crystals play an important role in bringing out a material's intrinsic properties to their full extent. With the miniaturization of silicon-based integrated circuits approaching its physical limits, major efforts have been dedicated to finding a single-crystalline replacement for silicon. In the search for the transistor of the future, two-dimensional (2-D) materials, especially graphene, have been the subject of intense research around the world. Thin and flexible because it is only a single layer of atoms, this 2-D form of carbon also features exceptional electrical and thermal conductivity. However, the last decade's efforts toward graphene transistors have been held up by a physical constraint: graphene offers no control over current flow because it lacks a band gap. So what about other 2-D materials? A number of interesting 2-D materials have been reported to have similar or even superior properties. Still, a limited understanding of the experimental conditions needed for growing large-area 2-D materials has kept their maximum size to just a few millimeters. Scientists at the Center for Multidimensional Carbon Materials (CMCM) within the Institute for Basic Science (IBS) have presented a novel approach for synthesizing wafer-scale, single-crystalline 2-D materials. Prof. Feng Ding and Ms. Leining Zhang, in collaboration with colleagues at Peking University, China, and other institutions, found that a substrate with a lower order of symmetry than that of a 2-D material facilitates the synthesis of single-crystalline 2-D materials over a large area. "It was critical to find the right balance of rotational symmetries between a substrate and a 2-D material," notes Prof. Feng Ding, one of the corresponding authors of this study. The researchers successfully synthesized hBN single crystals of 10 x 10 cm2 by using a new substrate: a surface near Cu(110), which has a lower (onefold) symmetry than hBN (threefold). (a-c) Schematic of edge-coupling-guided hBN growth on a Cu (110) vicinal surface with atomic step edges along the <211> direction; (b) shows the top view and (c) shows a side view. Credit: IBS Why does symmetry matter? Symmetry, in particular rotational symmetry, describes how many times a certain shape fits onto itself during a full rotation of 360 degrees. The most efficient method for synthesizing large-area single crystals of 2-D materials is to grow many small single crystals on a substrate and let them merge into a continuous film. In this epitaxial growth, it is quite challenging to ensure that all of the single crystals are aligned in a single direction, because the orientation of the crystals is set by the underlying substrate. By theoretical analysis, the IBS scientists found that an hBN island (a group of hBN atoms forming a single triangular crystal) has two equivalent alignments on the Cu(111) surface, which has a very high (sixfold) symmetry.
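The counting behind "two equivalent alignments" can be made explicit with a simple symmetry heuristic that is consistent with all the cases quoted in this article (our gloss, not a formula from the paper): a film with n-fold rotational symmetry on a substrate surface with m-fold rotational symmetry has m/gcd(m, n) energetically equivalent but crystallographically distinct orientations.

from math import gcd

def equivalent_alignments(substrate_fold: int, film_fold: int) -> int:
    # Number of energetically degenerate in-plane orientations of a film
    # with film_fold rotational symmetry on a substrate surface with
    # substrate_fold rotational symmetry (simple group-counting heuristic).
    return substrate_fold // gcd(substrate_fold, film_fold)

print(equivalent_alignments(6, 3))  # hBN (3-fold) on Cu(111) (6-fold): 2 -> antiparallel twins
print(equivalent_alignments(1, 3))  # hBN on a 1-fold vicinal Cu(110) surface: 1 -> unidirectional

Only when the count is 1, as on the stepped vicinal Cu(110) surface, can all islands share a single orientation and merge into one crystal.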
"It was a common view that a substrate with high symmetry may lead to the growth of materials with a high symmetry. It seemed to make sense intuitively, but this study found it is incorrect," says Ms. Leining Zhang, the first author of the study. Previously, various substrates such as Cu(111) have been used to synthesize single crystalline hBN in a large area, but none of them were successful. Every effort ended with hBN islands aligning in several different directions on the surfaces. Convinced by the fact that the key to achieve unidirectional alignment is to reduce the symmetry of the substrate, the researchers made tremendous efforts to obtain vicinal surfaces of a Cu(110) orientation; a surface obtained by cutting a Cu(110) with a small tilt angle. It is like forming physical steps on Cu. As an hBN island tends to position in parallel to the edge of each step, it gets only one preferred alignment. The small tilt angle lowers the symmetry of the surface as well. The researchers eventually found that a class of vicinal surfaces of Cu (110) can be used to support the growth of hBN with perfect alignment. On a carefully selected substrate with the lowest symmetry (or the surface will repeat itself only after a 360 degree rotation), hBN has only one preferred direction of alignment. The research team of Prof. Kaihui Liu at Peking University has developed a unique method to anneal a large Cu foil, up to 10 x 10 cm2, into a single crystal with the vicinal Cu (110) surface, and with it, they have achieved the synthesis of hBN single crystals of the same size. (a) large area single-crystal copper foil with a low symmetric surface, a vicinal surface of Cu(110) orientation, namely V-(110). (b) the growth of large number of unidirectional aligned hBN islands on the vicinal Cu(110) surface. (c) SEM and AFM images of hBN islands on vicinal Cu (110). Credit: IBS Besides flexibility and ultrathin thickness, emerging 2-D materials can present extraordinary properties when they are enlarged as single crystals. "This study provides a general guideline for the experimental synthesis of various 2-D materials. Besides the hBN, many other 2-D materials could be synthesized with large area single crystalline substrates with low symmetry," says Prof. Feng Ding. Notably, hBN is the most representative 2-D insulator, which is different from the conductive 2-D materials, such as graphene, and 2-D semiconductors, such as molybdenum disulfide (MoS2). The vertical stacking of various types of 2-D materials, such as hBN, graphene and MoS2, would lead to a large number of new materials with exceptional properties and can be used for numerous applications, such as high-performance electronics, sensors, or wearable electronics."
10.1038/s41586-019-1226-z
Medicine
Study links autism to changes in micro-RNAs
Ye E Wu et al. Genome-wide, integrative analysis implicates microRNA dysregulation in autism spectrum disorder, Nature Neuroscience (2016). DOI: 10.1038/nn.4373 Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/nn.4373
https://medicalxpress.com/news/2016-09-links-autism-micro-rnas.html
Abstract Genetic variants conferring risk for autism spectrum disorder (ASD) have been identified, but the role of post-transcriptional mechanisms in ASD is not well understood. We performed genome-wide microRNA (miRNA) expression profiling in post-mortem brains from individuals with ASD and controls and identified miRNAs and co-regulated modules that were perturbed in ASD. Putative targets of these ASD-affected miRNAs were enriched for genes that have been implicated in ASD risk. We confirmed regulatory relationships between several miRNAs and their putative target mRNAs in primary human neural progenitors. These include hsa-miR-21-3p, a miRNA of unknown CNS function that is upregulated in ASD and that targets neuronal genes downregulated in ASD, and hsa_can_1002-m, a previously unknown, primate-specific miRNA that is downregulated in ASD and that regulates the epidermal growth factor receptor and fibroblast growth factor receptor signaling pathways involved in neural development and immune function. Our findings support a role for miRNA dysregulation in ASD pathophysiology and provide a rich data set and framework for future analyses of miRNAs in neuropsychiatric diseases. Main ASD is a group of clinically heterogeneous neurodevelopmental disorders characterized by deficits in social functioning and the presence of repetitive and restricted behaviors or interests 1 , 2 . ASD also manifests substantial genetic heterogeneity; hundreds of genomic loci have been implicated 1 , 2 . Dozens of rare Mendelian disorders, including fragile X syndrome, neurofibromatosis, Rett syndrome, tuberous sclerosis complex and structural chromosomal variants, confer a high risk for ASD 1 , 2 . Recent studies have also revealed the contribution of rare, de novo single-nucleotide mutations, none of which account for more than a small fraction of ASD cases 1 , 2 . Thousands of common variants are also estimated to contribute to ASD, although the effect size of individual loci is small 1 , 2 , 3 , 4 . Despite this remarkable heterogeneity, ASD-associated mutations have been suggested to target a few convergent biological processes, including synaptic function and neuronal activity, postsynaptic density protein metabolism, neuronal cell adhesion, WNT signaling, and chromatin remodeling during neurogenesis 1 , 2 . In contrast, much less is known about the contribution of post-transcriptional regulatory mechanisms to ASD. miRNAs, which are small non-coding regulatory RNAs that mediate mRNA destabilization and/or translational repression 5 , represent a sparsely studied class of putative contributors to ASD pathophysiology. Each miRNA can target up to hundreds of genes; collectively, miRNAs are predicted to target >60% of the transcriptome, establishing them as potential regulators of complex gene networks 5 , 6 . miRNAs have been shown to regulate processes that are pivotal to brain development and function, including neurogenesis, neuronal maturation and synaptic plasticity 7 . To assess the potential role of miRNAs in ASD, we performed genome-wide miRNA expression profiling in post-mortem brains from individuals with ASD and controls. We found a shared pattern of miRNA dysregulation in a majority of ASD-affected brains. The targets of ASD-associated miRNAs were enriched in ASD risk genes. Using bioinformatic and gene network analyses, we were able to link these perturbations with transcriptomic changes in ASD-affected brain. 
Results Differential expression of miRNAs in ASD brain We profiled miRNAs in 242 post-mortem brain tissue samples from 55 ASD cases and 42 controls (CTL) ( Fig. 1a and Supplementary Table 1 ) using Illumina small RNA sequencing (sRNA-seq; Online Methods ). Up to three brain regions from each individual were assessed: frontal cortex (FC, Brodmann area (BA) 9), temporal cortex (TC, BA41/42/22) and cerebellar vermis ( Fig. 1a ), all of which have been implicated in ASD 8 . After quality control ( Supplementary Fig. 1a,b ; Online Methods ), mature miRNAs documented in miRBase release 20 were identified and quantified using the miRDeep2 algorithm 9 (Online Methods ). We also included in our analysis previously unknown miRNAs that were identified in a recent study based on 94 human sRNA-seq data sets and supported by experimental evidence 10 , as well as previously unknown miRNAs predicted from our sequencing data with high confidence using two different methods (Online Methods and Supplementary Table 2 ). In total, 699 miRNAs (552 in miRBase 20 and 147 previously unknown) were detected (Online Methods and Supplementary Table 2 ). Figure 1: miRNA expression changes in post-mortem ASD cortex. ( a ) Flow chart of the overall approach. ( b ) miRNA expression fold changes (>0 if higher in ASD, <0 if lower in ASD) between ASD and control cerebral cortex, plotted against the percentile rank of mean expression levels across 95 cortex samples (47 samples from 28 ASD cases and 48 samples from 28 controls) used for DGE analysis. Differentially expressed (linear mixed-effects model, FDR < 0.05) miRNAs are highlighted in red. ( c ) Comparison of miRNA expression fold changes in the temporal and the frontal cortex. Green dots, 58 miRNAs differentially expressed (FDR < 0.05) in 95 combined cortex samples; gray dots, non-differentially expressed miRNAs; black line, regression line between fold changes in the temporal and the frontal cortex for the differentially expressed miRNAs; red line, y = x . The Pearson correlation coefficient ( R ) and P value are also shown. ( d ) Dendrogram showing hierarchical clustering of 95 cortex samples based on top differentially expressed (FDR < 0.05, |log 2 (fold change)| ≥ 0.3) miRNAs. Information on diagnosis, age, sex, brain region, co-morbidity of seizures, psychiatric medication, RIN and brain bank is indicated with color bars below the dendrogram according to the legend on the right. Heat map on the bottom shows scaled (mean subtracted and divided by s.d.) expression values (color-coded according to the legend on the right) for miRNAs used for clustering. The miRNA expression profiles were very similar between the frontal and temporal cortex, but were distinct in the cerebellum ( Supplementary Fig. 2a–f ), consistent with previous observations for mRNAs 11 , 12 . We therefore combined 95 covariate-matched samples (47 samples from 28 ASD cases and 48 samples from 28 controls; Supplementary Fig. 1c and Supplementary Table 1 ) from the FC and TC for differential gene expression (DGE) analysis, comparing ASD and CTL using a linear mixed-effects regression framework to control for potential confounders (Online Methods ). We identified 58 miRNAs showing significant (false discovery rate (FDR) < 0.05) expression changes between ASD and CTL: 17 were downregulated and 41 were upregulated in ASD cortex ( Fig. 1b and Supplementary Table 2 ).
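The exact model and covariates are specified in the paper's Online Methods; as a rough illustration of this style of analysis, a per-miRNA linear mixed-effects fit with the donor as the random-effect grouping (since up to two cortical regions per individual enter the analysis), followed by Benjamini-Hochberg correction across miRNAs, might look like the sketch below. All column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def mirna_dge(df: pd.DataFrame, mirna_ids):
    # df: one row per (sample, miRNA), with assumed columns 'mirna',
    # 'expr' (normalized log expression), 'diagnosis' (0 = CTL, 1 = ASD),
    # 'age', 'sex', 'region' and 'subject' (donor ID, the grouping factor
    # for the random intercept).
    rows = []
    for mid in mirna_ids:
        sub = df[df["mirna"] == mid]
        fit = smf.mixedlm("expr ~ diagnosis + age + sex + region",
                          sub, groups=sub["subject"]).fit(reml=False)
        rows.append((mid, fit.params["diagnosis"], fit.pvalues["diagnosis"]))
    res = pd.DataFrame(rows, columns=["mirna", "beta", "p"])
    res["fdr"] = multipletests(res["p"], method="fdr_bh")[1]  # BH FDR
    return res.sort_values("fdr")

With log-scale expression, the diagnosis coefficient plays the role of the log fold change that the figures plot.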
The fold changes for the differentially expressed miRNAs were highly concordant between the FC and TC (Pearson correlation coefficient R = 0.96, P < 2.2 × 10 −16 ; Fig. 1c ). To ensure the robustness of the signal, we performed resampling (Online Methods ), finding that the fold changes for all miRNAs were highly concordant between the resampled and the original sample sets (Pearson's R = 0.93–0.97, P < 2.2 × 10 −16 ; Supplementary Fig. 3a ). To confirm that the result was not biased by a small number of samples with a low RNA integrity number (RIN) or a high post-mortem interval (PMI), or from ASD cases with chromosome 15q11-13 duplication syndrome, we also performed DGE analysis after removing these samples and found that the expression changes were concordant with those observed in all samples combined (Pearson's R = 0.99, 0.98 and 0.99, P < 2.2 × 10 −16 , for samples with RIN ≥ 5, PMI ≤ 30 h and after removal of 15q11-13 duplication samples, respectively; Supplementary Fig. 3b–d ). Hierarchical clustering based on the top differentially expressed miRNAs (FDR < 0.05, |log 2 (fold change)| ≥ 0.3) revealed distinct clustering for the majority of ASD cortex samples; 37 of 47 ASD samples from 23 of 28 cases grouped together, and confounders such as age, sex, brain region, seizures, medication, RIN and brain bank did not drive the clustering ( Fig. 1d ), suggesting a shared miRNA dysregulation signature among the majority of ASD samples. We further validated ten differentially expressed miRNAs using quantitative reverse transcription PCR (qRT-PCR) and confirmed sequencing-detected changes ( Supplementary Fig. 3f,g ). Together, these results support the robustness and reproducibility of our data. In addition, we performed DGE analysis for 47 covariate-matched cerebellum samples (21 samples from ASD cases and 26 samples from controls; Supplementary Fig. 1e , Supplementary Table 1 and Online Methods ), and observed a similar trend in differential expression compared with the cortex (Pearson's R = 0.86, P < 2.2 × 10 −16 for miRNAs differentially expressed in the cortex; Supplementary Fig. 3e ; 16 miRNAs differentially expressed at FDR < 0.05 in both cortex and cerebellum). Perturbation of miRNA coexpression modules in ASD brain To further gain a systems-level understanding of the relationship between miRNA expression changes and disease status, we next applied weighted gene coexpression network analysis (WGCNA) using 109 cortex samples and a method that is robust to outliers ( Supplementary Figs. 1d and 4a ; Online Methods ) to assign individual miRNAs to coexpression modules 13 , 14 , 15 . We identified 11 modules, summarized by their first principal component (PC1), or module eigengene (ME) 14 ( Fig. 2a and Supplementary Table 2 ). We then assessed the relationship of MEs to ASD status and identified four modules that were significantly correlated with disease status (Pearson's correlation, FDR < 0.05) and not with any of the technical confounders: two downregulated (brown and salmon) and two upregulated (yellow and magenta) in ASD samples ( Fig. 2b–j ). Figure 2: miRNA coexpression modules dysregulated in post-mortem ASD cortex. ( a ) Dendrogram showing miRNA coexpression modules defined in 109 cortex samples. Color bars below indicate original module assignment, consensus module assignment based on 200 rounds of bootstrapping, Pearson correlation coefficients with diagnosis and other potential confounders or covariates (all treated as numeric variables), and expression level for each miRNA.
Arrows indicate three modules (brown, magenta and yellow) that were significantly correlated with diagnosis. ( b – d ) Pearson correlation between module eigengenes and different covariates in 109 cortex samples ( b ), 47 cortex samples from subjects aged 15–30 years ( c ), and 42 cortex samples from subjects >30 years ( d ). Correlation coefficients ( R ) and FDRs (Online Methods ) are shown where FDR < 0.05. ( e – j ) Scaled module eigengene values across 109 cortex samples ( e , g , i ) and network plots ( f , h , j ) for the brown ( e , f ), magenta ( g , h ), and yellow ( i , j ) modules. In e , g and i , samples are plotted in groups according to disease status and sex and color-coded, as indicated above the graphs, and ordered by age as indicated below the graphs. In f , h and j , miRNAs with kME (Online Methods ) ≥ 0.5 are plotted according to multidimensional scaling (MDS) of miRNA correlations. Edge thickness is proportional to the positive correlation between the two connected miRNAs and node size is proportional to node connectivity. Enriched transcription factors (TFs) and chromatin regulators (CRs) (Fisher's exact test, FDR < 0.05) are listed below the plot. Underlined, TF/CR differentially expressed ( P < 0.05) in ASD cortex. WGCNA permits direct assessment of the relationship of modules to important experimental covariates, such as age, sex, brain region and technical confounders, including RIN, PMI and brain bank 13 , 14 , 15 . The salmon and yellow modules showed significant correlations with age ( Fig. 2b ). Further inspection of the salmon module revealed that it could be driven by younger non-age-matched samples in cases compared with controls ( Fig. 2c,d , Supplementary Fig. 1d and Online Methods ), so it was excluded from subsequent analysis. The yellow and magenta modules showed a different pattern of age dependence, whereby their disease association was more prominent in the younger age-matched set (15–30 years) relative to the older set (>30 years) ( Fig. 2c,d,g,i and Supplementary Fig. 4b,c ). In contrast, the downregulation of the brown module persisted across the age range ( Fig. 2c–e ). To determine whether the coexpression structure is similar between ASD and CTL samples, we constructed networks using ASD or CTL samples only and performed module preservation tests 16 , observing that all three ASD-associated modules were preserved across the ASD and CTL networks ( Supplementary Fig. 4d,e ). Similar analysis revealed that the ASD-associated modules were also preserved across the FC and TC ( Supplementary Fig. 4f,g ). Furthermore, these coexpression relationships were also observed in two independent data sets from human brain (Online Methods and Supplementary Fig. 4h,i ). These results support the robustness and reproducibility of the three ASD-associated modules. Coexpression among miRNAs may arise from co-regulation by common transcription factors (TFs) and/or chromatin regulators (CRs). To test this possibility, we obtained genome-wide high-confidence binding sites for 61 human TFs and CRs expressed in the cerebral cortex 17 , and identified miRNAs that are potentially regulated by each TF or CR (Online Methods ). We then assessed enrichment of these potential targets in the three ASD-associated miRNA modules, observing distinct patterns of TF and CR enrichment in each module, consistent with their differential co-regulation ( Supplementary Table 3 ).
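As an aside, the module eigengene used throughout this section is simply the first principal component of a module's standardized expression matrix. WGCNA itself is an R package and the paper's exact procedure is in its Online Methods; the following is a minimal illustrative reimplementation of the eigengene step and its correlation with diagnosis.

import numpy as np
from scipy.stats import pearsonr, zscore

def module_eigengene(expr):
    # expr: samples x miRNAs expression matrix for one module.
    # The eigengene is the first left-singular vector of the z-scored
    # matrix, signed so it correlates positively with the module's mean
    # expression (the usual WGCNA convention).
    z = zscore(expr, axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    me = u[:, 0] * s[0]
    if pearsonr(me, z.mean(axis=1))[0] < 0:
        me = -me
    return me

# Module-trait relationship: correlate the eigengene with diagnosis
# (0 = control, 1 = ASD) across samples, then FDR-correct across modules:
# me = module_eigengene(expr_brown)
# r, p = pearsonr(me, diagnosis)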
Notably, the magenta module was enriched (Fisher's exact test, FDR < 0.05) for potential targets of SMARCC1, a core component of the BAF complex that has been implicated in ASD via mutational and network analysis of brain gene expression data 13 , 18 ( Fig. 2h and Supplementary Table 3 ). Enrichment for ASD risk genes among miRNA targets Previous transcriptomic analyses have revealed convergent molecular pathology in ASD, which is characterized by downregulation of genes involved in neuronal and synaptic function, concomitant with upregulation of genes involved in immune-inflammatory response 11 . We hypothesized that miRNAs that are differentially expressed in ASD may contribute to these perturbations by repressing their mRNA targets. Alternatively, they might function as compensatory mechanisms that mitigate the existing mRNA dysregulation. We bioinformatically predicted the mRNA targets of top differentially expressed miRNAs and hub miRNAs in the ASD-related modules (Online Methods ), applying the well-established and widely used algorithm TargetScan, which searches for miRNA target sites in mRNA 3′ UTRs and evaluates the targeting efficacy and evolutionary conservation (while controlling for background conservation) of the target sites 6 , 19 , 20 , 21 (Online Methods ). We then selected the top targets expressed in the temporal and frontal cortex (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations) using two different criteria: (1) the strongest targets, which have the highest predicted targeting efficacy and are shared by two or more miRNAs (Online Methods ), and (2) the most conserved target sites, which are more likely to have conserved physiological roles, but may not include newly evolved targets with species-specific functions (Online Methods ). Overall, these two methods identified comparable numbers of targets, many overlapping, and some that were unique ( Supplementary Table 4 and Online Methods ). To test the validity of the bioinformatic predictions, we overexpressed a well-studied miRNA, hsa-miR-21-5p, in human neural progenitor cells (hNPCs; Online Methods ). We observed significant (one-sided t test, P < 2.2 × 10 −16 ) downregulation of hsa-miR-21-5p mRNA targets predicted by our methods, with a magnitude comparable to those supported by previous experimental evidence (miRTarBase release 4.5, 22 ; Supplementary Fig. 5 and Supplementary Table 5 ). To explore the relationship of the ASD-affected miRNAs to genes that have been previously implicated in ASD, we systematically assessed whether targets of the differentially expressed miRNAs or ASD-associated miRNA modules are enriched for these genes. We first tested enrichment for a set of ASD risk genes from the Simons Foundation Autism Research Initiative (SFARI) AutDB database 23 (ASD SFARI; Online Methods ), which have been implicated via common variant association, candidate gene studies, copy number variation (CNV) and genetic syndromes. For comparison, we also examined genes implicated in monogenic forms of intellectual disability 13 . We found that top targets of the differentially expressed miRNAs, as well as the brown and magenta modules, showed significant (Fisher's exact test, FDR < 0.05; Online Methods ) enrichment for SFARI ASD genes, but not for intellectual disability genes ( Fig. 
3a and Supplementary Table 6 ), suggesting that targets of ASD-affected miRNAs are enriched for genes that are causally connected with ASD, but less so for genes that are related solely to intellectual disability. Figure 3: Enrichment of ASD risk genes among the top targets of ASD-affected miRNAs and miRNA modules. ( a ) Heat map showing enrichment (Fisher's exact test) of ASD risk genes from SFARI or implicated by rare variants (ASD rare variants), intellectual disability genes (ID all), genes encoding transcripts bound by FMRP (FMRP targets), genes encoding proteins in the postsynaptic density (PSD), genes expressed preferentially in human embryonic brains (Embryonic), and genes encoding chromatin modifiers. ASD/ID overlap, the overlap between the ASD SFARI and ID all sets; ASD only and ID only, non-overlapping ASD SFARI and ID genes, respectively. ( b ) Heat map showing enrichment (logistic regression) of genes affected by DNVs, including LGD, missense, synonymous, and recurrent (recurMutation) mutations, in ASD-affected probands (prb, all probands; prbM, male probands; prbF, female probands) and unaffected siblings (sib). Severe_recurMutation, genes targeted by protein-disrupting recurrent mutations; DNV_LGDs_SCZ, LGD DNVs in individuals with schizophrenia. ( c ) Enrichment for overlap with linkage disequilibrium–based independent genomic regions associated with ASD (from Autism Genetic Resource Exchange (AGRE) or Psychiatric Genomics Consortium (PGC)), Alzheimer's disease or schizophrenia in GWAS among the strongest miRNA targets. The empirical and multiple-testing corrected P values calculated using the INRICH program are shown where the corrected P < 0.10. ( d ) Heat map showing enrichment (Fisher's exact test) for ASD-associated developmental gene coexpression modules in human cortex. In a , b and d , enrichment odds ratios (OR) and FDR corrected P values (Online Methods ) are shown for enrichments with FDR < 0.05. We further examined additional classes of ASD-relevant genes: genes whose transcripts are bound by the fragile X mental retardation protein (FMRP) 24 , 25 , genes encoding postsynaptic density (PSD) proteins 24 , 26 , genes encoding chromatin modifiers 24 and genes expressed preferentially during embryonic brain development 24 , 27 . Enrichment for FMRP targets and embryonically expressed genes (Fisher's exact test, FDR < 0.05; Online Methods ) was observed for most target groups ( Fig. 3a and Supplementary Table 6 ). The most conserved targets of the upregulated miRNAs and miRNA modules significantly (Fisher's exact test, FDR < 0.05; Online Methods ) overlapped with PSD genes ( Fig. 3a and Supplementary Table 6 ). The most conserved targets of the upregulated miRNAs were also enriched (Fisher's exact test, FDR < 0.05; Online Methods ) for chromatin modifiers ( Fig. 3a and Supplementary Table 6 ). Recent whole-exome sequencing studies have identified high-confidence, likely-gene-disrupting (LGD, including nonsense, splice site and frame shift) de novo variants (DNVs) in ASD 1 , 2 . We next asked whether these independently identified ASD risk genes are over-represented in the targets of ASD-affected miRNAs. For comparison, we also examined LGD DNVs in unaffected siblings, missense and synonymous DNVs, and LGD DNVs in individuals with schizophrenia 24 (Online Methods ).
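The recurring gene-set enrichment tests in this section reduce to a one-sided Fisher's exact test on a 2 x 2 overlap table against an expressed-gene background, followed by FDR correction across the sets tested. A minimal sketch with hypothetical inputs (Python sets of gene identifiers):

from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def overlap_enrichment(targets, gene_set, background):
    # One-sided Fisher's exact test for over-representation of gene_set
    # among targets, restricted to a brain-expressed background.
    t, g = targets & background, gene_set & background
    a = len(t & g)               # targets that are in the gene set
    b = len(t - g)               # targets not in the gene set
    c = len(g - t)               # set genes that are not targets
    d = len(background - t - g)  # everything else in the background
    odds, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return odds, p

# FDR correction across the gene sets tested for one target group:
# pvals = [overlap_enrichment(T, S, BG)[1] for S in gene_sets]
# fdr = multipletests(pvals, method="fdr_bh")[1]

A simple overlap test like this is blind to gene length, which matters for de novo variant sets; that is where the logistic-regression variant described next comes in.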
Given that DNV frequency has been shown to be linearly correlated with gene length 24 , we applied logistic regression that incorporates gene coding region length covered in exome sequencing 24 to assess enrichment (Online Methods ). Notably, the top targets of the ASD-affected miRNAs and miRNA modules showed specific enrichment (FDR < 0.05; Online Methods ) for genes harboring LGD DNVs in ASD probands, but little enrichment for genes with LGD DNVs in unaffected siblings or individuals with schizophrenia, or missense/synonymous DNVs ( Fig. 3b and Supplementary Table 6 ). We also observed enrichment (FDR < 0.05; Online Methods ) for a shorter list of genes hit by severe recurrent mutations (two or more LGD mutations, small indels, and mutations that remove start or stop codons 24 ; Fig. 3b and Supplementary Table 6 ). We also found that the top targets of the upregulated miRNAs and miRNA modules were enriched (Fisher's exact test, FDR < 0.05; Online Methods ) for a list of 65 ASD risk genes that have been implicated through rare and de novo variations (ASD rare variants; Fig. 3a ) 28 . The observation that the targets of the downregulated miRNAs were enriched for ASD risk genes affected by LGD DNVs ( Fig. 3b ), which are expected to disrupt protein function, suggests that these miRNAs might be part of a compensatory, or adaptive, mechanism, as their downregulation would favor target mRNA upregulation. We next asked whether targets of the ASD-affected miRNAs are enriched for common genetic variants associated with ASD in genome-wide association studies (GWAS) 29 , 30 , 31 . We applied the INRICH method 32 , 33 to assess the overlap between ASD-associated, linkage disequilibrium-based independent genomic intervals and the miRNA target genes, observing enrichment (multiple testing-corrected P < 0.10 via nested permutation) for ASD GWAS signals near the strongest targets of the differentially expressed miRNAs ( Fig. 3c and Online Methods ). Notably, the most substantial enrichment was in the targets of the downregulated miRNAs, further suggesting that these might be compensatory changes. As a control, we performed the same analysis using GWAS data for Alzheimer's disease 34 and schizophrenia 33 and did not observe significant enrichment ( Fig. 3c ). The lack of enrichment in the control data sets was unlikely to be a result of fewer single-nucleotide polymorphisms (SNPs) or intervals tested, as all data sets included comparable numbers of SNPs and intervals ( Supplementary Table 6 ). A previous study examining the temporal expression trajectories of ASD genes in the developing human cortex identified gene coexpression modules that are enriched for ASD risk genes 13 . We therefore tested for over-representation of these modules in the targets of ASD-affected miRNAs. This analysis highlighted module M16, which is upregulated during early cortical development and enriched for genes that have been implicated in neural development and synaptic function 13 ; genes in M16 were over-represented (Fisher's exact test, FDR < 0.05; Online Methods ) in most miRNA target groups ( Fig. 3d and Supplementary Table 6 ). Because the TargetScan algorithm searches for miRNA target sites in the 3′ UTR regions of mRNAs, we also used logistic regression to assess enrichment while controlling for 3′ UTR length and observed a similar enrichment for ASD-related genes ( Supplementary Fig. 6a–c ). Together, these results suggest a functional involvement of miRNAs perturbed in ASD in its molecular pathology.
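The gene-length-aware test described above might be sketched as follows: logistic regression of DNV status on target-set membership with log coding length as a covariate, so that the excess of de novo hits expected in long genes does not masquerade as enrichment. The column names and the log transform are our assumptions, not the paper's exact specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def length_corrected_enrichment(genes: pd.DataFrame):
    # genes: one row per brain-expressed gene, with assumed columns
    # 'is_target' (membership in the miRNA target set), 'has_lgd_dnv'
    # (harbors a likely-gene-disrupting de novo variant) and 'cds_len'
    # (coding length covered by exome sequencing).
    X = sm.add_constant(pd.DataFrame({
        "is_target": genes["is_target"].astype(float),
        "log_len": np.log10(genes["cds_len"]),   # length covariate
    }))
    fit = sm.Logit(genes["has_lgd_dnv"].astype(float), X).fit(disp=0)
    # Exponentiated coefficient = length-adjusted odds ratio for targets.
    return np.exp(fit.params["is_target"]), fit.pvalues["is_target"]

The same regression with a 3' UTR length covariate gives the control analysis mentioned at the end of this paragraph.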
We also assessed shared functions among the targets of ASD-affected miRNAs by enrichment for Gene Ontology (GO) terms (GO-Elite software; Online Methods ). Targets of the upregulated miRNAs and the magenta module were enriched (FDR < 0.10) for genes related to neural development and several signaling pathways ( Supplementary Fig. 7a,b ). The top GO terms ( P < 0.01) for the targets of the downregulated miRNAs and miRNA module concerned both neuronal processes and immune function ( Supplementary Fig. 7a,b ). Relationship between miRNA and mRNA expression changes To directly assess the role of miRNA dysregulation in ASD-associated mRNA level alterations, we next examined the relationship between miRNA and mRNA expression changes. We used mRNA expression data generated in a separate study that investigated mRNA expression changes in post-mortem ASD brain using RNA-seq in 163 cortex (frontal and temporal) samples from ASD cases and controls (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations), 101 of which overlapped with our study. We evaluated the relationship between miRNA and mRNA expression changes in the same set of individuals and samples, while at the same time recognizing that ASD-associated mRNA level perturbations are likely driven by several regulatory mechanisms, including TFs and epigenetic changes. DGE analysis for mRNAs using 106 covariate-matched samples identified 1,156 genes that were differentially expressed (FDR ≤ 0.05) between ASD and CTL cortex: 574 upregulated and 582 downregulated (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). Consistent with previous findings, the downregulated set was enriched for genes involved in neuronal and synaptic function, and the upregulated set was enriched for microglia- and astrocyte-related genes involved in immune-inflammatory function 11 . We first compared the fold changes of differentially expressed mRNAs that are predicted targets of the differentially expressed miRNAs or miRNA modules to those of non-targets. Overall, we observed a trend consistent with a negative effect of miRNAs on target mRNA level; mRNAs targeted by the upregulated miRNAs or modules showed lower ( P < 0.05) fold changes than the non-targets, whereas mRNAs targeted by the downregulated miRNAs or module showed higher ( P < 0.05) fold changes than the non-targets ( Supplementary Fig. 8a–h ). We further assessed the relationship between mRNA and miRNA differential expression signatures by examining the correlations between the PC1s of differentially expressed miRNAs and differentially expressed mRNAs that were predicted targets. We observed significant (Pearson's correlation, P < 0.005) negative correlations ( Fig. 4a–c ), suggestive of negative regulation of the ASD-affected mRNAs by the ASD-associated miRNAs in the brain. Notably, it is unlikely that the result was driven by separate association of the miRNAs and mRNAs to ASD status, as we regressed out disease status from the expression data when deriving the PC1s ( Fig. 4a–c ) or computed the correlations in CTL samples or ASD samples alone ( Supplementary Fig. 9a–c ). Figure 4: Relationship between miRNA and mRNA expression changes in post-mortem ASD cortex. ( a – c ) Correlations between the PC1s of differentially expressed miRNAs (FDR < 0.05, |log 2 (fold change)| ≥ 0.3) and differentially expressed mRNAs (FDR < 0.05) that are predicted targets (after regressing out disease status) in 101 cortex samples. 
( a ) All differentially expressed miRNAs versus all differentially expressed mRNAs. ( b ) Upregulated miRNAs versus downregulated mRNAs. ( c ) Downregulated miRNAs versus upregulated mRNAs. Pearson correlation coefficients ( R ) and P values are shown below the plots. ( d ) Negative correlations between the PC1s of top miRNAs in the ASD-associated miRNA modules and their predicted targets in the ASD-associated mRNA modules. mRNA modules are represented with network plots showing the top 20 most connected module genes. Pearson correlation coefficients ( R ) and P values are indicated. Correlations for the magenta and yellow miRNA modules were calculated using 45 younger samples (ages between 15 and 30 years), given their stronger disease association at younger ages relative to older ages (>30 years). In a separate study, we identified gene coexpression modules significantly correlated with ASD status that were enriched for genes differentially expressed between ASD and control (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). Consistent with previous findings 11 , three modules (M4, M10, M16) that were downregulated in ASD were related to neuronal and synaptic function, as revealed by GO analysis, whereas two modules (M9, M19) that were upregulated in ASD were related to immune-inflammatory response and enriched for genes highly expressed in astrocytes and microglia (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). To explore the potential regulatory relationship between ASD-affected miRNA and mRNA modules, we first asked whether the top miRNAs in the ASD-associated miRNA modules were negatively correlated with their predicted targets in the ASD-associated mRNA modules. We observed significant (Pearson's correlation, P < 0.05) negative correlations between the upregulated magenta and yellow miRNA modules and their predicted targets in the downregulated mRNA modules, and between the downregulated brown miRNA module and its predicted targets in the upregulated M9 and M19 mRNA modules ( Fig. 4d ). We further assessed the enrichment of the predicted targets of the ASD-affected miRNA modules for ASD-affected mRNAs and mRNA modules and observed that the targets of the yellow and magenta miRNA modules were enriched (Fisher's exact test, FDR < 0.05) for the downregulated mRNAs and M16 mRNA module, whereas targets of the brown miRNA module were enriched (Fisher's exact test, FDR < 0.05) for the upregulated mRNAs and M9 mRNA module ( Fig. 5a , Supplementary Table 6 and Online Methods ). These data, combined with the enrichment for ASD risk genes in the predicted miRNA targets, are consistent with a functional involvement of the ASD-affected miRNAs in the molecular pathology of ASD; the upregulated miRNAs and miRNA modules may contribute to the downregulation of neuronal and synaptic genes in ASD brain, whereas the downregulated miRNAs and miRNA module may contribute to the upregulation of immune-inflammatory genes. However, some of the downregulated miRNAs may also have a compensatory role given the enrichment of their targets for rare LGD and common genetic variants associated with ASD ( Fig. 5b ). Figure 5: Enrichment for ASD-affected mRNAs and mRNA modules in the top targets of ASD-affected miRNAs. ( a ) Heat map showing enrichment (Fisher's exact test) for ASD-affected mRNAs and mRNA modules in the top targets of ASD-related miRNA modules. P values were FDR corrected across six target groups for each mRNA group.
Odds ratios and FDRs are shown for enrichments with FDR < 0.05. ( b ) Model for the role of miRNA dysregulation in ASD molecular pathology. The upregulated miRNAs and miRNA modules may have a contributory role by repressing ASD risk genes and neuronal/synaptic genes downregulated in ASD. The downregulated miRNAs and miRNA module may contribute to the upregulation of immune/inflammatory genes in ASD, but might also have a compensatory role given the enrichment of their targets for rare protein-disrupting and common genetic variants associated with ASD. ( c , d ) Enrichment (Fisher's exact test) for ASD-affected mRNAs and mRNA modules in the strongest ( c ) or the most conserved ( d ) targets of individual candidate miRNAs. To prioritize candidates for further characterization, we next examined whether the targets of individual candidate miRNAs were enriched for mRNAs or mRNA modules that were anti-correlated with the miRNAs in ASD. We found that the targets of several downregulated miRNAs were enriched (Fisher's exact test, P < 0.05) for the upregulated mRNAs and M9 mRNA module, whereas targets of several upregulated miRNAs showed enrichments (Fisher's exact test, P < 0.05) for the downregulated mRNAs and the M4 and M16 mRNA modules ( Fig. 5c,d and Supplementary Table 6 ). hsa-miR-21-3p targets neuronal genes downregulated in ASD We next experimentally tested the predicted downregulation of target mRNAs by miRNAs in ASD via in vitro perturbation of several dysregulated miRNAs in hNPCs. We first focused on hsa-miR-21-3p, which was the second most upregulated miRNA in ASD cortex (1.5-fold increase, FDR < 0.05; Supplementary Fig. 10a ) and whose predicted targets exhibited prominent enrichment (Fisher's exact test, P < 0.01) for downregulated mRNAs and the M16 mRNA module ( Fig. 5c,d and Supplementary Table 6 ). hsa-miR-21-3p is conserved across vertebrates ( Supplementary Fig. 10b ) and widely expressed in different human brain regions throughout development ( Supplementary Fig. 10d,e ). However, its role in the CNS has not yet been explored. We overexpressed hsa-miR-21-3p in hNPCs and examined the consequent mRNA changes by RNA-seq ( Supplementary Table 5 ). We found that predicted hsa-miR-21-3p target genes showed significant downregulation (one-sided t test, P < 2.2 × 10 −16 ) as a group compared with non-targets ( Fig. 6a ), again validating the bioinformatic predictions. Figure 6: hsa-miR-21-3p targets neuronal genes downregulated in ASD. ( a ) Distributions (left) and cumulative distributions (right) of mRNA log 2 (fold change) in response to overexpression of hsa-miR-21-3p in hNPCs. Statistical significance between target groups and non-targets was assessed using one-sided t tests assuming unequal variance. ( b – e ) Heat maps showing enrichment of validated hsa-miR-21-3p targets for ASD-related gene lists ( b – d ) as well as ASD-affected mRNAs and mRNA modules ( e ). ( b ) Logistic regression, ( c – e ) Fisher's exact test. Odds ratios and P values are shown where P < 0.05. ( f ) A partial list of validated hsa-miR-21-3p target genes associated with ASD. We next performed a series of enrichment analyses to characterize the overlap between validated hsa-miR-21-3p targets and ASD-related genes.
hsa-miR-21-3p targets showed enrichment for genes harboring LGD and recurrent DNVs in ASD-affected individuals (logistic regression, P < 0.05), but little enrichment for missense or synonymous DNVs, or LGD DNVs in unaffected siblings or schizophrenia-affected individuals ( Fig. 6c,f and Supplementary Table 6 ). ASD SFARI genes, ASD rare variants, FMRP targets, chromatin modifiers, embryonically expressed genes, and the ASD-related M3 and M16 brain developmental modules 13 were also enriched (Fisher's exact test, P < 0.05), but genes associated with intellectual disability were not ( Fig. 6b,d,f and Supplementary Table 6 ). Enrichment (Fisher's exact test, P < 0.005) was also observed for genes downregulated in post-mortem ASD cortex, particularly the M16 mRNA module ( Fig. 6e,f and Supplementary Table 6 ), the ME of which was negatively correlated with hsa-miR-21-3p (Pearson's R = −0.50, P = 8.5 × 10 −8 ). Notably, hsa-miR-21-3p overexpression downregulated (one-sided t test, P < 0.05) several hub genes in M16, including PAFAH1B1 / LIS1 (which is critical for neuronal migration and is causally linked to lissencephaly 35 ), DLGAP1 (a PSD scaffold protein that binds to SHANK3, an ASD risk gene 1 ), and ATP2B1 / PMCA1 (a calcium transporter). In addition, hsa-miR-21-3p repressed the levels of ATP1B1 (a PSD-localized Na + /K + transporter harboring ASD-associated LGD DNVs 24 ), DYNC1I1 (which is critical for neuronal migration 35 ), NEEP21 (which is involved in synaptic transmission 36 ), SV2B (a neurotransmitter release regulator 37 ) and several genes in the ubiquitin-proteasome pathway ( UCHL5 , USP33 , USP7 , UBE2K , FBXO11 and KIAA0368 ) ( Fig. 6f and Supplementary Table 5 ). Also of interest, hsa-miR-21-3p overexpression led to a pronounced decrease (38%; one-sided t test, P = 0.0001) in PCDH19 , mutations in which cause epilepsy and ASD in females 38 . hsa-miR-21-3p and PCDH19 levels were also negatively correlated in post-mortem cortex (Pearson's R = −0.48, P = 4.0 × 10 −7 ), consistent with the in vitro observation. Together, these results implicate hsa-miR-21-3p in regulating the mRNA levels of neuronal and synaptic genes and suggest a role for hsa-miR-21-3p in the downregulation of these genes in ASD. hsa_can_1002-m targets genes in the EGFR and FGFR pathways Another candidate of particular interest was the predicted miRNA hsa_can_1002-m, one of the most downregulated miRNAs in ASD cerebral cortex (2.5-fold decrease, FDR < 0.05; Supplementary Fig. 10a ) and the top hub miRNA of the brown module ( Fig. 2f and Supplementary Table 2 ). Overexpression of hsa_can_1002-m in hNPCs led to significant decreases (one-sided t test, P < 10 −7 ) in its predicted targets ( Fig. 7a and Supplementary Table 5 ), supporting its function. Notably, the predicted hsa_can_1002-m precursor is located in a genomic region that is only present in primates, and not in lower vertebrates ( Supplementary Fig. 10c ). Moreover, RNA-seq in the mouse cortex did not detect the hsa_can_1002-m sequence. We further performed qRT-PCR using a primer for the human mature sequence, detecting robust expression in human, chimpanzee and rhesus macaque cerebral cortices, but not the mouse cortex ( Fig. 7b ), confirming our finding from the genome sequence that hsa_can_1002-m is a primate-specific miRNA. RNA-seq data from the BrainSpan project revealed that hsa_can_1002-m is broadly expressed, but is developmentally regulated in the human brain from infancy to adulthood ( Supplementary Fig. 10f,g ). 
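The target-versus-non-target comparisons plotted in Figs. 6a and 7a come down to a one-sided Welch t-test on log 2 fold changes plus the empirical cumulative distributions that the panels show. A minimal sketch (the alternative keyword requires SciPy 1.6 or later):

import numpy as np
from scipy.stats import ttest_ind

def target_repression_test(fc_targets, fc_nontargets):
    # One-sided Welch t-test: are the predicted targets more downregulated
    # (lower mean log2 fold change) than non-targets after miRNA
    # overexpression? Unequal variances assumed, as in the figure legends.
    t, p = ttest_ind(fc_targets, fc_nontargets,
                     equal_var=False, alternative="less")
    return t, p

def ecdf(x):
    # Empirical cumulative distribution, as in the cumulative panels.
    xs = np.sort(np.asarray(x))
    return xs, np.arange(1, xs.size + 1) / xs.size

A leftward shift of the target ECDF relative to the non-target ECDF is the visual counterpart of a significant one-sided test.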
Figure 7: hsa_can_1002-m is primate specific and targets genes in the EGFR and the FGFR signaling pathways. ( a ) Distributions (left) and cumulative distributions (right) of mRNA log 2 (fold change) in response to overexpression of hsa_can_1002-m in hNPCs. Statistical significance between target groups and non-targets was assessed using one-sided t tests assuming unequal variance. ( b ) qRT-PCR in human, chimpanzee, macaque and mouse cortices using primers designed for human hsa_can_1002-m. RNU6-2 was used as control. The experiment was repeated twice and the result was reproducible. ( c ) Top GO terms for validated hsa_can_1002-m targets. Uncorrected P values are shown. ( d ) Direct protein-protein interaction network between validated hsa_can_1002-m targets. Nodes are colored based on the P values of the seed proteins (the probability that by chance the seed protein would be as connected as is observed) according to the legend on the right. ( e ) A partial list of validated hsa_can_1002-m target genes. Validated hsa_can_1002-m targets did not show enrichment for known ASD risk genes. Nonetheless, the top GO categories (GO-Elite software, P < 0.001; Online Methods ) implicated the epidermal growth factor receptor (EGFR) and the fibroblast growth factor receptor (FGFR) signaling pathways ( Fig. 7c–e and Supplementary Table 6 ), which have been implicated in the proliferation, differentiation and survival of neurons and glia in both the developing and adult brain, as well as inflammatory/immune processes 39 , 40 . Notably, all implicated targets are involved in the activation of these pathways, suggesting that hsa_can_1002-m downregulation would lead to an increase in pathway activity. These targets form a more highly connected local protein-protein interaction network than would be expected by chance (DAPPLE software, P < 0.05; Online Methods ), providing independent confirmation of co-regulation at the protein level ( Fig. 7d ). Of particular interest are EPS8 (a signaling adaptor protein regulating dendritic spine density and synaptic plasticity), ADAM12 (a metalloprotease required for EGFR ligand processing), CHUK (a kinase critical for NF-kappa-B activation) and RUNX1 (a transcription factor essential for immune cell development and activation), all of which were upregulated ( P < 0.05) in ASD post-mortem cortex (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations) ( Supplementary Table 5 ). In addition, the validated strongest hsa_can_1002-m targets were enriched for genes in the immune-related, upregulated M9 mRNA module (Fisher's exact test, odds ratio = 1.9, P = 0.02; Fig. 7e ), consistent with prediction ( Fig. 5c ). Together, these findings implicate the previously unknown, primate-specific miRNA hsa_can_1002-m in regulating the EGFR and the FGFR signaling pathways, shedding light on its potential role in neuronal and glial development and function, as well as in ASD molecular pathology involving neural-immune interactions. Experimental characterization of other candidate miRNAs We also experimentally validated the targets of several other candidate miRNAs, including hsa-miR-103a-3p in the yellow module and hsa-miR-143-3p and hsa-miR-23a-3p in the magenta module, the predicted targets of which were enriched (Fisher's exact test, P < 0.05) for downregulated mRNAs and the M16 mRNA module ( Fig. 5c,d and Supplementary Table 6 ).
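As an aside, all of the "predicted targets" in this section rest on TargetScan-style seed matching. The full algorithm also scores context features and conservation, but its core site-finding step can be sketched as below. Site definitions follow standard TargetScan nomenclature; the example miRNA sequence in the comment should be checked against miRBase before any real use.

def seed_sites(mirna: str, utr: str):
    # Find canonical seed matches of a mature miRNA (given 5'->3', RNA
    # alphabet) in a 3'UTR sequence. Returns (position, site_type) pairs.
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    rc = lambda s: "".join(comp[b] for b in reversed(s))
    m8 = rc(mirna[1:8])          # site matching miRNA positions 2-8 (7mer-m8)
    a1 = rc(mirna[1:7]) + "A"    # positions 2-7 plus an 'A' opposite position 1 (7mer-A1)
    utr = utr.upper().replace("T", "U")
    hits = []
    for i in range(len(utr) - 6):
        w = utr[i:i + 7]
        if w == m8:
            hits.append((i, "7mer-m8"))   # 8mer sites (m8 match plus A1) also land here
        elif w == a1:
            hits.append((i, "7mer-A1"))
    return hits

# e.g. for hsa-miR-21-3p (mature sequence per miRBase, to be verified):
# seed_sites("CAACACCAGUCGAUGGGCUGU", utr_sequence)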
Notably, hsa-miR-23a-3p has also been reported to be upregulated in lymphoblasts in ASD 41 , and hsa-miR-143-3p has been recently shown to be regulated by a primate-specific long non-coding RNA and has been implicated in neural progenitor proliferation 42 . Overexpression of these miRNAs in hNPCs led to significant reductions (one-sided t test, P < 2.2 × 10 −16 ) in the predicted target mRNAs ( Fig. 8a–c and Supplementary Table 5 ). Consistent with bioinformatic predictions, enrichment analysis revealed that the validated targets of hsa-miR-103a-3p and hsa-miR-143-3p were enriched (Fisher's exact test, P < 0.05) for downregulated mRNAs and/or the M16 mRNA module ( Fig. 8d,f and Supplementary Table 6 ). Validated targets of hsa-miR-23a-3p were enriched (Fisher's exact test, P < 0.05) for ASD SFARI genes and ASD rare variants ( Fig. 8e–f and Supplementary Table 6 ). These data are consistent with a potential functional involvement of these miRNAs in ASD, making them interesting candidates for further functional manipulation in model systems. Figure 8: Experimental validation of targets of other candidate miRNAs. ( a – c ) Distributions (left) and cumulative distributions (right) of mRNA log 2 (fold change) in response to overexpression of hsa-miR-103a-3p ( a ), hsa-miR-143-3p ( b ) and hsa-miR-23a-3p ( c ) in hNPCs. Statistical significance between target groups and non-targets was assessed using one-sided t tests assuming unequal variance. ( d ) Enrichment (Fisher's exact test) of validated targets of hsa-miR-103a-3p and hsa-miR-143-3p for downregulated mRNAs and the downregulated M16 mRNA module. ( e ) Enrichment (Fisher's exact test) of validated targets of hsa-miR-23a-3p for ASD SFARI genes and ASD risk genes implicated by rare variants. ( f ) A partial list of validated target genes. Discussion Our genome-wide, integrative analysis provides new insights into the role of miRNAs in ASD pathophysiology. We observed a miRNA differential expression signature that was shared by a majority of ASD cortex samples. Within the targets of the ASD-affected miRNAs and miRNA coexpression modules, we observed enrichment for ASD risk genes that have been implicated by multiple forms of genetic variation, and much less so for variants associated with intellectual disability, schizophrenia or Alzheimer's disease. This suggests that ASD risk genes are highly dosage sensitive; we surmise that miRNA dysregulation provides an alternative pathway for gene-disrupting mutations to perturb key transcript levels, thereby potentially contributing to ASD susceptibility ( Fig. 5b ). This model is supported by the negative correlation between the expression changes of ASD-affected miRNAs and mRNAs, and our experimental validation showing regulation of mRNA targets by several top candidate miRNAs. Collectively, our findings suggest that ASD-associated transcriptomic changes may be partially attributable to miRNA dysregulation, with the upregulated miRNAs potentially contributing to the downregulation of neuronal and synaptic genes and the downregulated miRNAs contributing to the upregulation of immune-inflammatory genes, as well as possible compensatory changes ( Fig. 5b ). A few studies have also examined miRNA expression changes associated with ASD, but assessed a limited number of miRNAs (using qRT-PCR or microarray), had a small sample size and/or used non-neuronal tissues and cells 41 , 43 , 44 .
Our genome-wide analysis using the most relevant tissue and a better-powered sample, along with integration with multiple gene sets and expression data, provides the most robust and comprehensive assessment of miRNA dysregulation in ASD brain to date. Most of the differentially expressed miRNAs that we identified have not been reported previously. Notably, several miRNAs, including hsa-miR-107, hsa-miR-106a-5p, hsa-miR-10a-5p, hsa-miR-136-5p and hsa-miR-155-5p, overlapped with findings from previous studies and would be interesting candidates for further experimental investigation. In addition to providing a systems-level view of the miRNA expression landscape in post-mortem ASD brains, we also experimentally characterized the targets of several top candidate miRNAs in hNPCs. We found that transcripts regulated by hsa-miR-21-3p, an upregulated miRNA of unknown function in the nervous system, showed enrichment for ASD candidate genes and genes downregulated in ASD cortex. Its connection with the M16 mRNA module, which is enriched for neuronal and synaptic genes, suggests a role in regulating neuronal development and synaptic function and a link with the neuronal and synaptic defects in ASD. We also found that hsa_can_1002-m, a previously unknown, primate-specific miRNA that is downregulated in ASD, regulates several transcripts involved in the EGFR and the FGFR signaling pathways. This is intriguing from an evolutionary point of view given the important roles of these pathways in regulating neural stem cell proliferation in the brain, as a rapid increase in brain size resulting from increased neural stem cell division has been suggested as a critical step in primate brain evolution 45 . In addition, early postnatal brain overgrowth followed by relative growth normalization has been repeatedly observed in ASD-affected children and is thought to reflect abnormal early neurodevelopment 46 . It is possible that negative regulatory mechanisms, such as miRNAs, have evolved to restrict increased cell proliferation in the primate brain, as uncontrolled proliferation would disrupt brain development and function. The observation that the cerebellum showed a similar trend of miRNA differential expression to the cortex was somewhat unexpected, given previous findings 11 that very few mRNAs are differentially expressed in the cerebellum. Our recent RNA-seq analysis (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations) revealed that some mRNAs showed similar trends of differential expression in the cerebellum, albeit of substantially smaller magnitude, compared with the cortex. We speculate that this might be a result of differences in cell types or some other aspect of the molecular milieu of the cerebellum that renders it resilient to the mRNA changes observed in the cortex. One limitation of gene expression studies using post-mortem brains is that, although ASD likely arises from abnormalities during early brain development, the majority of available samples are from adults. Although there was no clear association of miRNA or mRNA 11 (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations) perturbations with medication history, some observed changes almost certainly reflect the consequences (rather than the causes) of having the disorder or compensatory responses, whereas some early gene expression perturbations that have a causal role may not be captured in adult brain.
In this regard, it is interesting to note that the two miRNA modules upregulated in ASD showed stronger disease association at younger ages than at older ages, suggesting that they might be more related to early pathogenic features. Future studies using brain samples from younger patients and patient-derived brain organoids that model early brain development 47 can provide more insights into early transcriptomic perturbations. Another limitation is the cellular heterogeneity of post-mortem tissue; cell-type-specific miRNA expression data in the human brain are still lacking. The observed expression changes could occur in a single or multiple cell types, or reflect changes in cell composition. Future studies using transcriptomic profiling of isolated cell types and single-cell sequencing 48 could help to resolve these possibilities. Collectively, our genome-wide, integrative analysis provides a framework for assessing the functional involvement of miRNA in ASD. By integrating several ASD candidate gene sets and correlating with mRNA expression data, we provide multiple lines of evidence for a functional role of miRNA dysregulation in ASD, either as contributory or compensatory factors. These analyses also identify a rich set of ASD-associated candidate miRNAs for further study. Methods Brain tissue samples. Brain tissue samples were acquired from the Autism Tissue Program (ATP) brain bank at the Harvard Brain and Tissue Bank, the National Institute for Child Health and Human Development (NICHD) Eunice Kennedy Shriver Brain and Tissue Bank for Developmental Disorders, the UK Brain Bank for Autism and Related Developmental Research (BBA), and the MRC London Neurodegenerative Diseases Brain Bank. Up to three brain regions from each individual were assessed: frontal cortex (FC, Brodmann area (BA) 9), temporal cortex (TC, BA41/42/22), and cerebellar vermis. For some individuals, not all three regions were included due to limited tissue availability. Metadata for all 242 samples, including age, sex, brain region, brain bank, medical history, and sample quality metrics are summarized in Supplementary Table 1 . Individuals defined as autistic had either a confirmed ADI-R diagnosis, duplication 15q syndrome with confirmed ASD, or a diagnosis of autism supported by other evidence such as clinical history. Dissections of frozen samples were performed on dry ice in a dehydrated dissection chamber, and randomized for balance of age, sex, brain region and diagnostic status. RNA extractions, library preparation and small RNA sequencing. Total RNA was extracted from approximately 100 mg of frozen tissue using the miRNeasy kit (Qiagen). RNA integrity number (RIN) of the extracted total RNA was measured using an Agilent Bioanalyzer. For 195 of the 242 samples, rRNAs were depleted from 2 μg total RNA with the Ribo-Zero Gold kit (Illumina). Remaining RNA was size selected with AMPure XP beads (Beckman Coulter). Small RNAs (including miRNAs) in the supernatant (250 μl) were precipitated with 2 μl glycogen (Roche), 2 μl 0.1x Pellet Paint NF Co-precipitant (Merck Millipore), 25 μl 3 M NaOAc (pH 5.2), and 700 μl 100% ethanol (−20 °C), and resuspended in 5 μl RNase-free water for subsequent library preparation. For the other 47 samples, 0.7 μg total RNA was directly used for library preparation, due to the lack of sufficient material for rRNA depletion. 
We compared sequencing results on 4 brain tissue samples for which both library preparation methods were used and found the results on the same sample to be highly positively correlated (Pearson correlation coefficients = 0.93–0.97, P < 2.2 × 10 −16 ). Small RNA libraries were prepared in batches of 12–48 samples using the TruSeq Small RNA Library Preparation Kit (Illumina) according to the manufacturer's protocol. Library preparation was randomized for balance of diagnostic status (ASD versus CTL), age, sex and brain region. Libraries were then validated using an Agilent Bioanalyzer and quantified with the Qubit dsDNA HS assay (Life Technologies). 19–29 libraries barcoded with Illumina TruSeq small RNA indexes were pooled and sequenced in each lane on an Illumina HiSeq2500 instrument using the high output mode with 50 bp single-end reads. Investigators were blinded during dissection, RNA extraction, library preparation and sequencing to all metadata information about the samples. Quality control (QC). Sequencing reads were demultiplexed using CASAVA v1.8.2 (Illumina) and sequencing adaptors were removed using the fastx_clipper function in the FASTX-Toolkit. Sequencing quality (including quality scores, GC content, sequence length distribution, duplication levels, overrepresented sequences, and Kmer content) was examined using FastQC v0.10.1. All 242 samples sequenced for small RNAs were also sequenced for mRNAs using RNA-seq (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). Genotypes for sites that are heterozygous or homozygous for the minor allele relative to the reference genome were called from RNA-seq data 49 (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). Genotypes were then coded as NA (homozygous for the major allele or not enough depth to detect), 1 (detected heterozygous) or 2 (homozygous for the minor allele). Pairwise Spearman correlations between samples were calculated, based on which the samples were clustered. Any sample that did not cluster with other samples from the same individual was further examined for possible contamination or sample mix-up and excluded from the downstream analyses if the sample mix-up was not resolvable. 20 samples from 18 individuals were removed using this criterion. In addition, 6 samples from 2 individuals who were not diagnosed with ASD, but had other conditions (one with idiopathic epilepsy, one with a copy number variation), were also removed to avoid confounding effects. This resulted in a total of 216 samples that passed QC. miRNA quantification and prediction. For quantification of mature miRNAs documented in miRBase release 20, sequencing reads were first mapped to the hg19 reference genome using the mapper.pl script in the miRDeep2 package 9 , 50 , with reads < 18 nt discarded. The quantifier.pl script was then used with default settings to quantify the number of reads mapped to mature miRNAs, allowing 0 mismatches. For the prediction of novel miRNAs, two methods were used. First, the miRDeep2.pl script in the miRDeep2 package was run with default settings, with mature miRNAs in Pan troglodytes , Gorilla gorilla , Pan paniscus , and Pongo pygmaeus (miRBase release 20) provided as related species. miRDeep2 examines the position and frequency of reads aligned to the genome (signature) with respect to a putative RNA hairpin and scores miRNA candidates employing a probabilistic model based on miRNA biogenesis.
The score reflects the energetic stability of the putative hairpin and the compatibility of the observed read distribution with miRNA cleavage. The higher the score, the more likely that the miRNA candidate is a true positive. We only kept predictions with a score ≥ 4 (corresponding to a true positive probability of 78 ± 2%; names of novel miRNAs predicted using this method start with “hsa_chr”). Second, we applied miRanalyzer 51 , 52 , which predicts putative mature miRNAs and precursors based on mapped reads and folding energy and employs 5 different Random Forest models to calculate the probability that a candidate is a true miRNA. We only kept candidates with positive predictions in at least 4 out of all 5 models (names of novel miRNAs predicted using this method start with hsa-P). In addition, we included novel miRNAs identified in a recent study, which were predicted using miRDeep2 based on 94 human sRNA-seq data sets 10 (names start with hsa_can). Many of these were supported by various levels of experimental evidence, including interaction with Ago1/2, interaction with DGCR8, response to silencing of the miRNA biogenesis machinery, and/or interaction with target mRNAs in CLASH experiments 10 . Duplicate predictions in the above three novel miRNA sets were collapsed. Predicted miRNAs > 25 nt were removed according to the normal size range of miRNAs. The quantifier.pl script was used to quantify the number of reads mapped to novel mature miRNAs with 0 mismatch. miRNAs (from miRBase release 20 or predicted) with read counts ≥ 3 in at least 50% of samples in each region, sex, or diagnostic status (ASD versus CTL) group were kept for further analysis. 699 miRNAs (552 in miRBase 20 and 147 predicted) in our data set met this criterion. This step helps remove miRNAs that are supported by only a few reads and likely expressed at very low levels. Such low levels of expression are unlikely to provide reliable, statistically significant differential expression results. Expression value normalization and adjustment of covariates. Raw expression data for all 242 samples were normalized for library size using the estimateSizeFactors function in the DESeq R package 53 . For each miRNA in each sample, a ratio is calculated by dividing the read count by the geometric mean across all samples. A scaling factor for each sample is then calculated as the median of this ratio for all miRNAs in the sample. The raw read counts in each sample are then divided by the scaling factor to generate the library size-normalized data. The effect of GC content was normalized using the CQN R package 54 . Library preparation batch effects were normalized using the ComBat function in the sva R package 55 . The effects of other technical covariates, including RIN, PMI, and brain bank, were normalized together using a linear model. Additionally, mRNAs prepared from the same total RNA samples used in this study were also analyzed using RNA-seq in a separate study (N.N.P., V. Swarup, T.G.B., M. Irimia, G. Ramaswami, unpublished observations). We observed that the proportion of mRNA reads mapped to exons also showed significant correlation with the miRNA expression data, as it likely reflected the ratio between cytoplasmic (where miRNAs are in the processed, mature form) and nuclear (where miRNAs are in the precursor form) RNAs. Therefore we also included this technical variable in the linear regression model along with other technical variables. 
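To make the normalization procedure concrete, the following is a minimal R sketch of the expression filter and the median-of-ratios library-size scaling described above (the computation performed by DESeq's estimateSizeFactors), followed by removal of technical covariates with a linear model. The objects counts (a miRNA × sample matrix of raw read counts) and covars (a per-sample data frame with columns RIN, PMI and bank) are hypothetical placeholders, and the GC-content (CQN) and batch (ComBat) steps of the full pipeline are omitted here.

# Keep miRNAs with >= 3 reads in at least 50% of samples (simplified to an
# overall fraction; the full pipeline applies this within each region, sex
# and diagnostic-status group).
keep <- rowMeans(counts >= 3) >= 0.5
counts <- counts[keep, ]

# Median-of-ratios scaling factors: per-miRNA geometric means across samples,
# then the per-sample median of the count/geometric-mean ratios.
geo_mean <- exp(rowMeans(log(counts)))
usable <- is.finite(geo_mean) & geo_mean > 0   # drop miRNAs with any zero count
size_factors <- apply(counts, 2, function(s) median(s[usable] / geo_mean[usable]))
norm_counts <- sweep(counts, 2, size_factors, "/")

# Regress technical covariates (RIN, PMI, brain bank) out of the log2
# expression values, keeping the residuals plus the intercept.
log_expr <- log2(norm_counts + 1)
adj_expr <- t(apply(log_expr, 1, function(y) {
  fit <- lm(y ~ RIN + PMI + bank, data = covars)
  residuals(fit) + coef(fit)[["(Intercept)"]]
}))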
The log2 transformed expression data normalized for library size and technical covariates were used for subsequent differential gene expression analysis and weighted gene coexpression network analysis. Definition of sample sets for analysis. In exploratory data analysis, we performed principal component analysis (PCA) and hierarchical sample clustering for the 216 samples that passed QC (180 samples prepared using the rRNA depletion method and 36 samples prepared using total RNA) to examine the relationship between the expression data and different covariates. We observed that library preparation method (rRNA depletion versus total RNA) and brain region (cortex versus cerebellum) had major effects on the expression profile, both having strong correlations with PC1 and PC2 ( Supplementary Fig. 2a,b ). Accordingly, samples prepared with the two different methods or from different brain regions (cortex versus cerebellum) were clustered into distinct branches in hierarchical clustering ( Supplementary Fig. 2c ). In addition, brain bank, age, RIN, and PMI also showed significant correlations with PC1–5 ( Supplementary Fig. 2a ). To avoid the confounding effects of library preparation method and brain region, we divided our samples into different subsets based on these two covariates, and first focused on cortex samples prepared using the rRNA depletion method (116 samples) for our main analysis. We performed outlier detection using a standard method 12 , 13 , 56 . Specifically, we calculated the connectivity between samples based on the biweight midcorrelation of miRNA expression using the fundamentalNetworkConcepts function in the WGCNA R package 57 , and removed samples with connectivity more than 3 s.d. away from the mean as outliers. This process was then repeated on the remaining samples until no more outliers were detected. Using this method, we removed 2 samples as outliers. We also removed 5 samples (from three individuals) for which PMI information was not available, resulting in 109 samples, which were used for the main WGCNA analysis. For the main DGE analysis, to avoid the potential confounding effect of age, we further removed 14 ASD samples for which there were no age-matched control samples, including 13 samples with age ≤ 11 years and 1 sample with age of 67 years (ages of the ASD and CTL groups were both between 15 and 60 years after removing these samples), resulting in 95 samples. The WGCNA analysis relies on correlation between gene expression levels across samples and is not as critically affected by unmatched covariates as the DGE analysis. It also permits direct assessment of the relationship of modules to experimental variables or confounders. Therefore we used all 109 cortex samples to maximize the robustness of the WGCNA analysis. We combined samples from the frontal cortex and the temporal cortex, as PCA and hierarchical sample clustering indicated that miRNA expression profiles were very similar between these two regions, but distinct in the cerebellum ( Supplementary Fig. 2a–f ). The cortex samples prepared using the total RNA method (31 samples) were imbalanced in brain bank between ASD and CTL; 10 of 16 ASD samples were from the UK BBA while none of the 15 CTL samples was from this brain bank. To avoid potential confounding effects, this set was not used for DGE analysis, but instead for evaluating miRNA module preservation, which relies on miRNA coexpression relationships across samples and should not be critically affected by unmatched covariates.
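A minimal R sketch of the iterative connectivity-based outlier removal described above, assuming the WGCNA package and an adj_expr matrix (miRNAs × samples) as in the previous sketch. The exact transform from correlation to adjacency is an assumption, since only the biweight midcorrelation and the 3 s.d. connectivity rule are specified above.

library(WGCNA)

# Iteratively drop samples whose network connectivity, computed from
# biweight midcorrelations between samples, lies more than 3 s.d. from
# the mean connectivity; repeat until no outliers remain.
remove_outlier_samples <- function(expr_mat) {   # expr_mat: miRNAs x samples
  repeat {
    adj <- ((1 + bicor(expr_mat)) / 2)^2         # sample-sample adjacency in [0, 1]
    k <- fundamentalNetworkConcepts(adj)$Connectivity
    z <- (k - mean(k)) / sd(k)
    if (all(abs(z) <= 3)) return(expr_mat)
    expr_mat <- expr_mat[, abs(z) <= 3, drop = FALSE]
  }
}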
For DGE analysis in the cerebellum, we selected 47 samples prepared using the rRNA depletion method as follows. From the 64 samples that passed QC, four samples were removed as outliers as described above. We also removed three samples with no PMI information, one sample with a very high PMI (50 h), three young ASD samples (age ≤ 5 years) with no age-matched controls, and six ASD samples with RINs lower than the lowest RIN in CTL. Differential gene expression (DGE) analysis. Differential expression between ASD and CTL was calculated for each miRNA in the cortex using a linear mixed-effects (LME) model implemented in the R package nlme, as more than one brain region from the same individual was included ( Supplementary Code 1 ). The model treated diagnosis, age, sex, and region as fixed effects (numeric or factor variables) and individual brain ID as a random effect: lme(expression level ∼ diagnosis + age + sex + region, rand = ∼ 1|brainID); a code sketch of this model is given below. Effect sizes (beta values) and P values for diagnosis were calculated from the model for all miRNAs, and P values were adjusted for multiple comparisons using Benjamini-Hochberg correction to assess the false discovery rate (FDR). To test whether the result is robust to resampling, we performed random resampling by dividing the samples into ten age deciles and randomly picking seven deciles in each round of resampling, thus ensuring that age was still matched between ASD and CTL. We found that miRNA fold changes were highly concordant between the resampled sets and the original set ( Supplementary Fig. 3a ). To ensure that the P values were not skewed by the distribution of the data, we also computed P values from 100 rounds of permutation (randomizing the ASD/CTL status of all samples from the same individual each time), and found that the P value rankings of genes were highly concordant with those obtained with the original sample set (Pearson's R = 0.99, P < 2.2 × 10 −16 ). For DGE analysis in the cerebellum, a linear model that included diagnosis, age, and sex was used (as only one region from each individual was included): lm(expression level ∼ diagnosis + age + sex). Log2 transformed expression data that had been normalized for library size and technical covariates were used. Weighted gene coexpression network analysis (WGCNA) and further network characterization. The WGCNA R package 14 , 57 was used for building signed coexpression networks ( Supplementary Code 2 ). Briefly, biweight midcorrelation was first used to calculate pairwise correlations (values between −1 and 1) between miRNAs. Next, pairwise topological overlap (TO, values between 0 and 1) between miRNAs was calculated with a power of 8 based on a fit to scale-free topology. To construct the network in a way that is robust to outliers, we performed 200 rounds of bootstrapping and computed the TO matrix for each of the resampled networks. We then took the medians of the TOs to generate a consensus TO matrix. Coexpression modules comprised of positively correlated miRNAs with high consensus TO were then identified using the cutreeDynamic function in the dynamicTreeCut R package, which employs a "dynamic hybrid" algorithm 58 , with the following parameters: method = "hybrid", deepSplit = 3, pamStage = T, pamRespectsDendro = T, minClusterSize = 10. The algorithm first detected preliminary clusters based on the merging information of the dendrogram. miRNAs unassigned in the first step were next assigned to the preliminary clusters if they were sufficiently close to the clusters.
In this step the dendrogram was ignored and only dissimilarity information was used. The expression of each module was summarized by the module eigengene (ME, defined as the first principal component of all miRNAs in a module). Modules whose eigengenes were highly correlated were further merged using the mergeCloseModules function in the WGCNA R package. The module membership of each miRNA in each module (kME) is defined as the biweight midcorrelation to the ME. Pearson correlations between MEs and diagnosis, age, sex, brain region, RIN, PMI, and brain bank were calculated. P values were FDR adjusted across the 11 modules for each covariate using Benjamini-Hochberg correction. We identified four modules that were significantly (FDR < 0.05) correlated with disease status, two (brown and salmon) downregulated and two (yellow and magenta) upregulated in ASD samples. Further inspection of the salmon module revealed that it was not significantly (FDR < 0.05) correlated with disease status in subsets of younger (15–30 years) and older (>30 years) age-matched samples ( Fig. 2c,d ). In addition, none of the miRNAs in this module was differentially expressed (FDR < 0.05) in our DGE analysis using the 95 age-matched cortex samples. These observations suggest that the disease association of the salmon module was likely driven by younger non-age-matched samples in cases compared with controls ( Supplementary Fig. 1d ), so it was excluded from subsequent analysis. Network plots were generated with the igraph R package 59 . To examine whether the coexpression structure is similar between ASD and CTL samples, between FC and TC samples, or in independent data sets, we performed module preservation analysis using the modulePreservation function in the WGCNA R package 16 , which calculates for each module the Z summary statistic, a measure that combines module density and intramodular connectivity metrics. 2 < Z < 10 indicates weak to moderate preservation and Z > 10 indicates high preservation. Transcription factor (TF)/chromatin regulator (CR) binding site analysis. Genome-wide high-confidence binding sites for a set of 61 human TFs/CRs expressed in the cortex were obtained from a previous study 17 . miRNAs located within 20 kb of the binding sites of each TF/CR were identified using bedtools 60 , 61 and defined as potential targets. Fisher's exact tests were then performed using the fisher.test R function to assess enrichment of the targets of each TF/CR for miRNAs with kME ≥ 0.5 in the brown, magenta, and yellow modules. All 699 expressed miRNAs were used as background. P values were FDR adjusted for the 61 TFs/CRs tested for each module using Benjamini-Hochberg correction. Prediction of miRNA targets. For prediction of miRNA targets, the stand-alone version of TargetScan (release 6.2) 6 , 19 , 20 , 21 was used. 3′ UTR sequences of RefSeq genes were downloaded from the TargetScan website. For each miRNA target site, a branch length score, which evaluates target site conservation while controlling for 3′ UTR conservation, and a context+ score, which measures targeting efficacy irrespective of conservation, were calculated.
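Returning to the DGE model above, the following is a minimal R sketch of the per-miRNA linear mixed-effects fit and the Benjamini-Hochberg correction. The metadata data frame meta (columns diagnosis, age, sex, region and brainID, one row per sample) is a hypothetical placeholder, and the row name "diagnosisASD" assumes diagnosis is coded as a factor with CTL as the reference level.

library(nlme)

# Fit lme(expression ~ diagnosis + age + sex + region, random = ~1 | brainID)
# for one miRNA and return the diagnosis effect size (beta) and P value.
fit_one_mirna <- function(y, meta) {
  d <- cbind(meta, expr = y)
  m <- lme(expr ~ diagnosis + age + sex + region, random = ~ 1 | brainID, data = d)
  tab <- summary(m)$tTable
  c(beta = tab["diagnosisASD", "Value"], p = tab["diagnosisASD", "p-value"])
}

res <- as.data.frame(t(apply(adj_expr, 1, fit_one_mirna, meta = meta)))
res$FDR <- p.adjust(res$p, method = "BH")   # Benjamini-Hochberg across all miRNAs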
To select top differentially expressed miRNAs and top miRNAs in ASD-associated modules for target prediction, we used the following criteria: (1) for top differentially expressed miRNAs, FDR < 0.05 and |log 2 (fold change)| ≥ 0.3 in 95 cortex samples; (2) for top miRNAs in ASD-associated modules, kME ≥ the median for each module (kME ≥ 0.63, 0.60, and 0.51 for the brown, magenta, and yellow modules, respectively) and differentially expressed in 95 cortex samples (FDR < 0.05; for the brown module) and/or 47 younger (15–30 years) cortex samples ( P < 0.05; for the yellow and magenta modules as they show stronger ASD-association in younger individuals). We also performed manual inspection to ensure relatively uniform 5′ termini and precise 5′ sequence prediction, as the seed region (nucleotides 2-8) is critical for target prediction by TargetScan. In a few cases where two miRNAs are 3′ isoforms, we used only one isoform for target prediction to prevent duplicate predictions. Using these criteria, 10 downregulated miRNAs, 24 upregulated miRNAs, 5 brown module miRNAs, 4 magenta module miRNAs, and 7 yellow module miRNAs were selected for target prediction. We used two approaches to select top targets as recommended by the developers of the TargetScan algorithm 6 , 20 , 21 . First, we identified all predicted miRNA target sites in a given mRNA 3′ UTR and calculated the summed context+ score for all sites, as miRNA targeting at different sites has been shown to have non-cooperative effects in most cases 20 . We then selected the strongest targets that are hit by two or more miRNAs in each group (≥2 for downregulated miRNAs or miRNAs in the brown, magenta, and yellow modules, ≥ 4 for upregulated miRNAs due to the larger number of miRNAs in this group) and have a summed context+ score of ≤ −0.2 (the more negative the context+ score, the stronger the predicted targeting efficacy). Second, we selected the top 25% most conserved target sites (based on branch length scores) with context+ score ≤ −0.1. For individual miRNAs, we define “the strongest” targets as those with a summed context+ score of ≤ −0.1, and “the most conserved” targets as those with a branch length score in the top 25% and a context+ score of ≤ −0.05. Gene set enrichment analysis. Gene set enrichment analyses were performed using two-sided Fisher's exact tests with the fisher.test R function, except for enrichment for de novo variants (DNVs). For DNVs, a previous study showed a near linear relationship between gene length and de novo mutation frequency 24 . Therefore, for assessing enrichment of genes affected by DNVs in miRNA targets, we applied logistic regression, in which the probability of a gene being hit by a specific category of DNVs was coded as a function of gene coding region length covered in exome sequencing and whether the gene belongs to a certain miRNA target group. In addition, we also applied logistic regression to assess enrichment while controlling for gene 3′ UTR length (and also for gene coding region length for DNVs) in Supplementary Figure 6 . P values were FDR adjusted across 10 target groups for each gene list using Benjamini-Hochberg correction. 
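As an illustration, the two enrichment tests described above might be coded as follows. All gene-ID vectors (background, targets, gene_list, dnv_genes) and the coding_length vector are hypothetical placeholders.

# Fisher's exact test: 2x2 overlap of a miRNA target set with an ASD gene
# list, evaluated against the background gene list.
in_targets <- background %in% targets
in_list    <- background %in% gene_list
fisher.test(table(in_targets, in_list))

# De novo variants: logistic regression modelling the probability that a
# gene carries a DNV as a function of its covered coding-region length and
# its target-set membership, as described above.
df <- data.frame(hit    = background %in% dnv_genes,
                 length = coding_length[background],
                 target = in_targets)
fit <- glm(hit ~ length + target, family = binomial, data = df)
summary(fit)$coefficients["targetTRUE", ]   # enrichment effect and P value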
The background gene lists were defined as follows: (1) for the strongest targets of differentially expressed miRNAs and ASD-associated miRNA modules, the intersection between (a) protein-coding genes expressed in the cortex and (b) the targets of all 699 miRNAs that are hit by two or more miRNAs and have a summed context+ score of ≤ −0.2; (2) for the most conserved targets of differentially expressed miRNAs and ASD-associated miRNA modules, the intersection between (a) protein-coding genes expressed in the cortex and (b) the targets of all 699 miRNAs with a branch length ≥ the lowest branch length for the targets of the ASD-related miRNA groups and a summed context+ score of ≤ −0.1; (3) for the strongest targets of individual candidate miRNAs, the intersection between (a) protein-coding genes expressed in the cortex and (b) the targets of all 699 miRNAs with a summed context+ score of ≤ −0.1; (4) for the most conserved targets of individual candidate miRNAs, the intersection between (a) protein-coding genes expressed in the cortex and (b) the targets of all 699 miRNAs with a branch length in the top 25% for each miRNA and a context+ score of ≤ −0.05; (5) for the strongest or the most conserved targets of individual candidate miRNAs that were validated in the hNPCs, the background used in (3) or (4) intersected with all genes expressed in the hNPCs as detected by our RNA-seq analysis. Enrichment of common variants from genome-wide association studies. GWAS data for ASD, schizophrenia, and Alzheimer's disease were obtained from the Autism Genetic Resource Exchange/Children's Hospital of Philadelphia (AGRE/CHOP), the Psychiatric Genomics Consortium (PGC), and the International Genomics of Alzheimer's Project (IGAP), respectively. Linkage disequilibrium (LD)-based SNP clumping was performed using PLINK (version 1.07) with the following parameters: --clump-p1 0.001 --clump-p2 0.05 --clump-r2 0.50 --clump-kb 250. LD information was obtained from AGRE or HapMap (release 23). Overlap between disease-associated SNP clumps and miRNA targets plus 20 kb flanking regions was then assessed using INRICH (ref. 32 ) with the following parameters: -w 20 -r 10000 -q 5000 -d 0.1 -p 0.1. INRICH takes a genomic permutation approach that accounts for linkage disequilibrium, SNP number and density, and gene density to calculate empirical P values for each gene set 32 . It performs multiple testing correction via a second, nested round of permutation to assess the null distribution of the minimum empirical P value across all tested gene sets 32 . We used the same background gene lists as in the gene set enrichment analysis. Gene ontology analysis. Gene ontology analysis was performed using GO-Elite (version 1.2.5), which uses a Z-score approximation of the hypergeometric distribution to assess term enrichment, with default settings and 5000 permutations 62 . False-discovery-rate-adjusted P values were calculated using Benjamini-Hochberg correction. We used the same background gene lists as in the gene set enrichment analysis. Quantitative RT-PCR. 200 ng total RNA treated with RNase-free DNase I (Qiagen) was reverse-transcribed using the miScript II RT Kit (Qiagen). Real-time PCR was performed using miScript Primer Assays (Qiagen) and the miScript SYBR Green PCR Kit (Qiagen) on a Roche LightCycler 480 instrument. Human RNU6B was used as an internal control. miRNA overexpression in hNPCs and RNA-seq. Primary human neural progenitor cells (hNPCs) were generated in a previous study and cultured as described 63 .
The cells were free of mycoplasma contamination based on DAPI staining. At the fourth passage, hNPCs were seeded in 6-well plates at 250,000 cells per well. 24 h later, mimics of mature miRNAs (GE Healthcare) were transfected at a final concentration of 50 nM using the HiPerFect Transfection Reagent (Qiagen). The miRIDIAN microRNA Mimic Negative Control 1 (GE Healthcare), which is based on cel-miR-67 and has minimal sequence identity with human miRNAs, was used as a negative control. Transfection for each miRNA mimic was performed in triplicate. 48 h after transfection, total RNA was extracted using the miRNeasy kit (Qiagen). The RNA integrity number (RIN) was measured using an Agilent Bioanalyzer and all samples had a RIN > 9. Overexpression of miRNAs was confirmed with quantitative RT-PCR. 1.5 μg total RNA was then converted to mRNA libraries using the Illumina TruSeq Stranded mRNA Library Preparation Kit with poly-A selection. ERCC ExFold RNA Spike-In Mixes (Life Technologies) were added as internal controls. Libraries were then validated on an Agilent 2200 TapeStation system and quantified with the Quant-iT PicoGreen assay (Life Technologies). 12 libraries barcoded with Illumina TruSeq indexes were pooled into one lane and sequenced 3 times on an Illumina HiSeq2500 instrument using the rapid run mode with 69-bp paired-end reads. After demultiplexing with CASAVA v1.8.2 (Illumina), reads were mapped to the GRCh37.75 reference genome using TopHat2 (ref. 64 ). Sequencing quality was then examined using Picard Tools version 1.128 (commands CollectMultipleMetrics, CollectRnaSeqMetrics, and CollectGcBiasMetrics) and the flagstat command in SAMtools (version 1.2) 65 . 41.0–77.9 million QC-passed reads were mapped to the reference genome for each sample, with 84.8–87.6% mapped to mRNAs. Gene expression levels were then quantified using HTSeq (version 0.6.1) 66 with a union exon model. Genes with 10 or more counts in at least 2 samples in any miRNA overexpression or negative control group were kept for further analysis. Gene expression levels in samples within the same group were highly correlated ( R2 ≥ 0.99). The expression data were then normalized using the DESeq R package for library size 53 and/or the CQN R package for GC content 54 . Differential gene expression analysis was performed using one-sided t tests assuming unequal variance. Uncorrected P values were used to define differentially expressed genes, as the sample size was small ( n = 3 for each group). Protein-protein interaction (PPI) analysis. PPI analysis was performed using DAPPLE (ref. 67 ) v2.0, which uses the InWeb database 68 and applies a within-degree, within-node permutation method. 10,000 permutations were used. Summary of statistical methods. Blinding. Tissue dissection, RNA extraction, library preparation, and sequencing were performed blind to all metadata information about the samples. Data analysis was not performed blind to metadata information about the samples. Randomization. Tissue dissection, library preparation, and sequencing were randomized for balance of diagnostic status, age, sex, and brain region. Sample sizes . No statistical methods were used to pre-determine sample sizes as effect sizes were not known a priori, but our sample sizes are larger than those reported in previous publications that detected miRNA changes 41 , 43 , 44 . Parametric tests . For DGE analyses using linear mixed-effects and linear models ( Fig. 1b,c , Supplementary Fig.
3a–e and Supplementary Table 2 ), normality was not formally tested for each miRNA. For the main DGE analysis of 95 cortex samples, we also computed permutation-based P values and found that the P value rankings of miRNAs were highly concordant with those observed in the original sample set (Pearson's R = 0.99, P < 2.2 × 10 −16 ). For calculation of Pearson correlations ( Figs. 1 , 2 , 4 and Supplementary Figs. 2a,d , 3a–e and 9a–c ), normality was not formally tested. One-sided t -tests were used for Figures 6 , 7 , 8 , and Supplementary Figure 5 because the distributions were approximately normal and the sample sizes were reasonably large ( n = 88–11,695). For Supplementary Figure 3f,g , normality was tested by the Shapiro-Wilk test. All groups except for the CTL group for hsa-miR-10a-5p ( P = 0.005) and the ASD group for hsa_can_1155-m ( P = 0.03) met the normality assumption ( P > 0.05). Two-sided t -tests were used for all groups. For differential gene expression analysis after miRNA overexpression in hNPCs ( Supplementary Table 5 ), one-sided t -tests were used but normality was not formally tested for each gene (a sketch of this test is given at the end of the Methods). For all t -tests, equal variances were not formally tested, and so all tests were performed assuming unequal variance. For DGE analyses using linear mixed-effects and linear models, equal variances were not formally tested for each miRNA. Non-parametric tests . One-tailed Wilcoxon rank sum tests were performed in Supplementary Figure 8a–h because the distributions did not appear to be normal. A Supplementary Methods Checklist is available. Code availability. The R code for the DGE analysis using a linear mixed-effects model and the WGCNA analysis is provided in Supplementary Code 1 and 2 . Data availability. Brain sample metadata, miRNA raw read counts, miRNA DGE analysis data, miRNA WGCNA data, and mRNA DGE analysis data following miRNA overexpression in hNPCs are provided in Supplementary Tables 1, 2 and 5 . Raw small RNA-seq and RNA-seq data from brain samples from ASD cases and controls have been deposited in the PsychENCODE Knowledge Portal. Raw RNA-seq data from hNPCs following miRNA overexpression are available from the corresponding author upon request.
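For concreteness, the following is a minimal R sketch of the per-gene one-sided Welch t test used for the hNPC overexpression analysis above (n = 3 per group). The matrices expr_oe and expr_ctl are hypothetical log2-scale expression matrices (genes × replicates), and the "less" alternative is an assumption that the test looks for reduced expression after mimic transfection.

# One-sided Welch t test for each gene: overexpression replicates versus
# negative-control replicates.
p_down <- vapply(seq_len(nrow(expr_oe)), function(i) {
  t.test(expr_oe[i, ], expr_ctl[i, ],
         alternative = "less",       # one-sided: expression reduced by the mimic
         var.equal = FALSE)$p.value  # Welch: unequal variances assumed
}, numeric(1))

# Companion log2 fold changes for the distributions shown in the figures.
log2_fc <- rowMeans(expr_oe) - rowMeans(expr_ctl)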
In an important new study, scientists at UCLA have found that the brains of people with autism spectrum disorders show distinctive changes in the levels of tiny regulator molecules known as microRNAs, which control the activities of large gene networks. The study is the first to demonstrate the broad importance of microRNAs in autism disorders. The researchers found evidence that the individual microRNAs implicated in the study regulate many genes previously linked to autism. The study thus brings researchers closer to understanding the causes of autism disorders, and in particular why the activities of so many genes are abnormal in these disorders. In principle, microRNAs or related molecules could someday be targeted with drugs to treat or prevent autism. "These findings add a new layer to our understanding of the molecular changes that occur in the brains of patients with autism spectrum disorders, and give us a good framework for more detailed investigations of microRNAs' contributions to these disorders," said Dr. Daniel Geschwind, principal investigator and the Gordon and Virginia MacDonald Distinguished Professor of Human Genetics in the David Geffen School of Medicine at UCLA. The new research, published online in Nature Neuroscience, is by far the most comprehensive autism-related study of microRNAs, small molecules made of single-stranded RNA (ribonucleic acid), DNA's more primitive cousin. Almost nothing was known about microRNAs before 2001, but researchers have since determined that hundreds of different microRNAs exist in human cells, and collectively regulate the activity of most of our genes. Because a typical microRNA reduces the activities of dozens to hundreds of genes, too much or too little of that microRNA can disrupt the normal workings of many cellular processes at once. Unsurprisingly, microRNA abnormalities have already been linked to a variety of disorders, including Alzheimer's and cancers. "Autism is in a sense a good place to look for microRNA abnormalities, because prior studies from our laboratory and others have linked autism disorders to changes in the expression levels of a large number of genes," said Geschwind, who is also a professor of neurology and psychiatry. For the study, Geschwind and colleagues measured levels of nearly 700 microRNAs in samples of brain tissue taken during autopsies of 55 people with autism spectrum disorders, and 42 control subjects without autism disorders. The analysis focused on samples from the cortex, which in most of the autism spectrum disorder cases showed a distinctive "signature" of abnormalities, involving 58 microRNAs—17 with lower than normal levels and 41 with higher than normal levels. Looking at groupings or "modules" of microRNAs that seem to work together in cells, the team found another autism spectrum disorder signature: two distinct modules whose microRNAs were at abnormally high levels in the autism spectrum disorder samples, and one whose microRNAs were at abnormally low levels. The affected microRNAs are thought collectively to regulate hundreds of different genes. Among them, the scientists found a disproportionately large number that are already considered "autism risk" genes—typically because mutations or uncommon variants of those genes have been linked to autism spectrum disorders. The genes thought to be regulated by the autism-linked microRNAs also include many whose activity is known to be abnormal in autism, even when there is no obvious autism-risk mutation. 
Geschwind's team selected several of the most strongly ASD-linked microRNAs, and confirmed with experiments in cultured brain cells that altering their levels—in the direction seen in the autism spectrum disorder samples—caused the kinds of changes in gene activity that were also seen in these samples. "From all this it seems likely that abnormalities in the levels of these microRNAs contribute to the broad gene expression changes we see in the brain in autism," said Emily Wu, a postdoctoral scholar in the Geschwind Laboratory who was first author of the study and performed most of the experiments. The study employed advanced RNA sequencing techniques and was thorough enough to uncover, and link to the autism disorder cases, several microRNAs that had never been described before. One of them, hsa_can_1002-m, turned out to be specific for primates and thus couldn't have been detected in mouse studies. The team plans to follow up by studying these ASD-linked microRNAs in more detail, to better characterize the effects of their altered levels on gene activity, brain development, cognition and behavior. "It would be interesting to test whether manipulating the levels of these microRNAs in animal models of autism can reverse autism-related signs," Wu said. Any clinical payoff from the new research is many years away at best. But success in targeting microRNAs in animal model studies might eventually lead to the development of autism spectrum disorder treatments or even preventive measures. Autism spectrum disorders currently affect about one in 40 boys and one in 200 girls in the United States, and there are no specific therapies.
10.1038/nn.4373
Medicine
Mamba venom holds promise for pain relief
Black mamba venom peptides target acid-sensing ion channels to abolish pain, Nature (2012) doi:10.1038/nature11494 Journal information: Nature
http://dx.doi.org/10.1038/nature11494
https://medicalxpress.com/news/2012-10-mamba-venom-pain-relief.html
Abstract Polypeptide toxins have played a central part in understanding the physiological and physiopathological functions of ion channels 1 , 2 . In the field of pain, they have led to important advances in basic research 3 , 4 , 5 , 6 and even to clinical applications 7 , 8 . Acid-sensing ion channels (ASICs) are generally considered principal players in the pain pathway 9 , including in humans 10 . A snake toxin activating peripheral ASICs in nociceptive neurons has recently been shown to evoke pain 11 . Here we show that a new class of three-finger peptides from another snake, the black mamba, is able to abolish pain through inhibition of ASICs expressed either in central or peripheral neurons. These peptides, which we call mambalgins, are not toxic in mice but show a potent analgesic effect upon central and peripheral injection that can be as strong as that of morphine. This effect is, however, resistant to naloxone, and mambalgins cause much less tolerance than morphine and no respiratory distress. Pharmacological inhibition by mambalgins combined with the use of knockdown and knockout animals indicates that blockade of heteromeric channels made of ASIC1a and ASIC2a subunits in central neurons and of ASIC1b-containing channels in nociceptors is involved in the analgesic effect of mambalgins. These findings identify new potential therapeutic targets for pain and introduce natural peptides that block them to produce a potent analgesia. Main In a screen to discover new blockers of ASIC channels from animal venoms, we identified the venom of the black mamba ( Dendroaspis polylepis polylepis ) as a potent and reversible inhibitor of the ASIC1a channel expressed in Xenopus oocytes ( Fig. 1a ). After purification, two active fractions were collected ( Supplementary Fig. 1a, b ). A partial amino-acid sequence was used to clone, by degenerate PCR, the corresponding complementary DNA (cDNA) from lyophilized venom ( Supplementary Fig. 1c ). Two isopeptides were identified and named mambalgin-1 and mambalgin-2. They are composed of 57 amino acids with eight cysteine residues, and differ by only one residue at position 4 ( Supplementary Fig. 1d ). Figure 1: Mambalgins represent a new class of three-finger toxins targeting ASIC channels. a , Black mamba venom (0.1 mg ml −1 ) reversibly inhibits rat ASIC1a current expressed in Xenopus oocytes. b , Three-dimensional model of mambalgin-1 (disulphide bridges in yellow). c , Electrostatic properties of mambalgin-1 and the human ASIC1a channel (on the basis of the three-dimensional structure of chicken ASIC1a 29 ) with positive and negative isosurfaces in blue and red, respectively. d , e , Inhibition of rat ASIC channels expressed in COS-7 cells ( n = 4–15; peptides applied before the pH drop as in a ). Hill coefficients of 0.7–1 suggest a 1:1 stoichiometry between mambalgins and channels. f , Effect of mambalgin-1 on the pH-dependent activation and inactivation of ASIC1a current recorded in Xenopus oocytes (protocol shown in inset). g , h , Inhibition of native ASIC currents in dorsal spinal cord neurons ( g ) and DRG neurons ( h ) by mambalgin-1 (674 nM) and PcTx1 (20 nM), n = 14–34. Right panels: effect of mambalgin-1 on neurons expressing no PcTx1-sensitive current. ( V h = −60 mV in a , d , e and −50 mV in f – h .) Mean ± s.e.m. Mambalgins belong to the family of three-finger toxins 12 ( Supplementary Fig. 1d, e ).
They have no sequence homology with either PcTx1 or APETx2, two toxins that we previously identified as targeting ASIC channels 13 , 14 . A three-dimensional model of mambalgin-1 was established from a pool of five templates with known structures ( Supplementary Fig. 1e ), which shows a triple-stranded antiparallel β-sheet and a short double-stranded antiparallel β-sheet, connecting loops II and III, and loop I, respectively, the three loops emerging from the core of the toxin like fingers from a palm ( Fig. 1b ). The model structure presents a concave face commonly found in neurotoxins and is stabilized by four disulphide bonds with a pattern identical to that observed in the crystal structure template (Cys 1–Cys 3, Cys 2–Cys 4, Cys 5–Cys 6 and Cys 7–Cys 8; Fig. 1b and Supplementary Fig. 1f ). Mambalgins show a strong positive electrostatic potential that may contribute to binding to negatively charged ASIC channels ( Fig. 1c ). Mambalgins have the unique property of being potent, rapid and reversible inhibitors of recombinant homomeric ASIC1a and heteromeric ASIC1a + ASIC2a or ASIC1a + ASIC2b channels, that is, all the ASIC channel subtypes expressed in the central nervous system 15 , 16 , 17 , 18 , with a similar potency for both isopeptides and IC50 values (the toxin concentration half-maximally inhibiting the current) of 55 nM, 246 nM and 61 nM, respectively ( Fig. 1d ). Mambalgins also inhibit human ASIC channels ( Supplementary Fig. 2a ). The peptides inhibit ASIC1b and ASIC1a + ASIC1b channels, which are specific to sensory neurons 19 , with IC50 values of 192 nM and 72 nM, respectively ( Fig. 1e ). Mambalgins, which bind to the closed and/or inactivated state of the channels ( Supplementary Fig. 2b ), modify the affinity for protons (pH0.5act shifted from 6.35 ± 0.04 to 5.58 ± 0.02, and pH0.5inact shifted from 7.10 ± 0.01 to 7.17 ± 0.01; n = 4, P < 0.001 and n = 5, P < 0.05, respectively; Fig. 1f ) and act as gating-modifier toxins. They potently inhibit native ASIC currents in spinal cord, hippocampal and sensory neurons ( Fig. 1g, h and Supplementary Fig. 3 ). In central spinal cord neurons, mambalgin-1 (674 nM) decreased ASIC current amplitude to 13.0 ± 2.0% ( n = 14) of the control ( Fig. 1g ) and reduced neuronal excitability in response to acidic pH, without nonspecific effects on basal neuronal excitability (resting potential), on the threshold and shape of evoked or spontaneous action potentials, or on spontaneous postsynaptic currents ( Supplementary Fig. 4 ). Mambalgins had no effect on ASIC2a, ASIC3, ASIC1a + ASIC3 and ASIC1b + ASIC3 channels, or on TRPV1, P2X2, 5-HT3A, Nav1.8, Cav3.2 and Kv1.2 channels ( Supplementary Figs 5 and 6 ). Most three-finger toxins, such as the α-neurotoxins that block nicotinic acetylcholine receptors 20 , evoke neurotoxic effects in animals. This is not the case for mambalgins, which do not produce motor dysfunction ( Supplementary Fig. 7 ), apathy, flaccid paralysis, convulsions or death upon central injection (intrathecal or intracerebroventricular) in mice, but instead induce analgesic effects against acute and inflammatory pain ( Fig. 2 ) that can be as strong as those of morphine but resistant to naloxone, with much less tolerance ( Fig. 3a ) and no respiratory distress ( Fig. 3b ).
In the tail-flick and paw-flick tests, intrathecal injection of mambalgin-1 (or mambalgin-2) increased the latency of the tail and paw withdrawal reflex from 8.8 ± 0.4 s and 8.0 ± 0.8 s to 23.2 ± 1.3 s and 19.8 ± 1.6 s, respectively, 7 min after injection ( n = 15–22, P < 0.001) ( Fig. 2a, b ). The effects were completely lost in ASIC1a knockout mice ( Fig. 2a, b ), demonstrating the essential involvement of ASIC1a-containing channels. The key involvement of ASIC channels present in central neurons in the analgesic effect of mambalgins was confirmed using intracerebroventricular injections of the peptides ( Supplementary Fig. 8 ). Mambalgin-1 also suppressed inflammatory heat hyperalgesia and produced a strong analgesia evaluated in the paw-flick test after intraplantar injection of carrageenan ( Fig. 2c ), and drastically decreased acute (phase I) and inflammatory (phase II) pain assessed in the formalin test ( Fig. 2d ), with a potency similar to that of morphine ( Fig. 2c, d ). These effects were not significantly decreased by naloxone. Figure 2: Intrathecal injections of mambalgin-1 exert potent naloxone-resistant and ASIC1a-dependent analgesia in mice. a , b , Effects on acute thermal pain (46 °C) determined using tail-immersion ( a , n = 13–22) and paw-immersion ( b , n = 5–15) tests showing a large increase in response latencies, which is lost in ASIC1a –/– mice. c , Effects on inflammatory hyperalgesia induced by intraplantar carrageenan (grey bar) ( n = 8–16). Right panels in a – c : area under curve (AUC, s × min) calculated for each mouse. d , Effects on the first (0–10 min) and second (10–45 min) phases of formalin-induced spontaneous pain behaviour ( n = 11–20). Naloxone was injected subcutaneously 10 min before the peptide. Comparisons are versus vehicle unless specified. Mean ± s.e.m. *** P < 0.001; ** P < 0.01; * P < 0.05; NS, P > 0.05. Figure 3: The central analgesic effect of mambalgin-1 shows reduced tolerance compared with morphine and no respiratory depression, and involves the ASIC2a subunit. a , Repeated intrathecal injections of mambalgin-1 induce less tolerance than morphine at concentrations giving the same analgesic efficacy ( n = 10, comparison with vehicle (*) or morphine (#)). b , Mambalgin-1 (i.t., intrathecal) induces no respiratory depression, unlike morphine (i.t. or i.p., intraperitoneal; 0.01 and 0.4 mg per mouse, respectively; n = 4–7, comparison with vehicle unless specified). c , Paw-flick latency before (control) and after treatment with non-specific siRNA (si-CTR, n = 24), siRNA against ASIC2a and ASIC2b (si-ASIC2a/2b, n = 27), or siRNA against ASIC2a (si-ASIC2a, n = 19) with or without naloxone. Comparison with si-CTR (#) or untreated control (*), unless specified. d , Paw-flick area under curve calculated after intrathecal injection of mambalgin-1 or vehicle in siRNA-treated mice ( n = 5–27; protocol shown in inset above; s.c., subcutaneous). Comparison with mice injected with vehicle after treatment with si-CTR, unless specified. Mean ± s.e.m. *** or ###, P < 0.001; ** or ##, P < 0.01; * P < 0.05; NS, P > 0.05. Mambalgins, unlike the spider peptide PcTx1 (refs 5 , 14 ), inhibit not only homomeric ASIC1a channels but also heteromeric ASIC1a + ASIC2 channels, which are abundantly expressed in central neurons 15 , 16 , 17 , 21 , 22 . This led us to analyse the participation of ASIC2 in the central analgesia evoked by mambalgins.
Intrathecal injections of short interfering RNA (siRNA) to silence either ASIC2 (both variants a and b) or ASIC2a ( Supplementary Fig. 9 ) induced an analgesia that was partly (ASIC2) or fully (ASIC2a) resistant to naloxone ( Fig. 3c ). In the presence of naloxone, central injection of mambalgin-1 in these knockdown mice had a decreased effect ( Fig. 3d ), consistent with a contribution of ASIC2a to the pain-suppressing effect of the peptide. Compensation by homomeric ASIC1a channels 15 , 16 , which are also blocked by mambalgin-1, and incomplete in vivo knockdown ( Supplementary Fig. 9 ) account for the residual analgesic effect that is observed. Because mambalgins are able to target different ASIC channel subtypes expressed in nociceptors, we tested their peripheral effect after intraplantar injections. Mambalgin-1 (unlike PcTx1) has a significant analgesic effect on acute heat pain ( Fig. 4a ) and reverses or prevents inflammatory hyperalgesia ( Fig. 4b, c ). However, this effect is clearly different from the previously described effect of central (intrathecal) injection of the peptide because it is still present in ASIC1a knockout mice ( Fig. 4a–c ), contrary to the central effect ( Fig. 2a, b ). If ASIC1a is not involved, what, then, are the mechanisms that support the peripheral effect of mambalgins? ASIC1b is specifically expressed in nociceptors 19 , 23 but its role in pain is not known. Functional expression of ASIC1b-containing channels in dorsal root ganglion (DRG) neurons is demonstrated by the effect of mambalgin-1, which blocks both ASIC1a and ASIC1b, and has more potent effects than PcTx1, which only blocks ASIC1a channels ( Fig. 1h ). Moreover, silencing of the ASIC1b subunit in nociceptors of ASIC1a knockout mice (that is, where only ASIC1b is present; Supplementary Fig. 9 ) mimicked the analgesic effect of peripheral injection of mambalgin-1 on both acute pain and inflammatory hyperalgesia, and largely decreased the effect of subsequent intraplantar injection of the peptide ( Fig. 4d, e ), supporting the specific participation of ASIC1b in the peripheral effect of mambalgin-1. Figure 4: Intraplantar injections of mambalgin-1 evoke peripheral analgesic effects through ASIC1b-containing channels. a , Effects of mambalgin-1 on acute thermal pain (46 °C) in wild-type (WT) and ASIC1a –/– mice ( n = 10–23). b , Effects of mambalgin-1 on carrageenan-induced hyperalgesia when injected before (open symbols) or 2 h after (filled symbols) carrageenan ( n = 7–20). Right panels: area under curve (after carrageenan in b ). Comparison with vehicle unless specified. c , Mambalgin-1 prevents inflammatory hyperalgesia when co-injected with carrageenan ( n = 7–20, comparison with control). d , Paw-flick latency before (control) and after treatment with si-CTR ( n = 13) or siRNA against ASIC1a and ASIC1b (si-ASIC1a/1b, n = 10) in ASIC1a –/– mice under normal and inflammatory conditions. Comparison with si-CTR (#) or untreated control (*), unless specified. e , Paw-flick area under curve calculated after injection of mambalgin-1 or vehicle in siRNA-treated ASIC1a –/– mice ( n = 7–11). Comparison with mice injected with vehicle after treatment with si-CTR, unless specified. Mean ± s.e.m. *** or ###, P < 0.001; ** or ##, P < 0.01. Our results indicate that mambalgins have analgesic effects by targeting both primary nociceptors and central neurons, but through different ASIC subtypes ( Supplementary Fig. 12 ).
After demonstrating the important role of ASIC3 in peripheral pain and sensory perception in the skin 3 , 4 , 24 , we now show that ASIC1b, but not ASIC1a, is important for cutaneous nociception and inflammatory hyperalgesia. In the central nervous system, injections of mambalgins evoke a naloxone-insensitive analgesia through an opioid-independent pain pathway involving ASIC1a + ASIC2a channels. By contrast, central injections of PcTx1 evoke a naloxone-sensitive analgesia through its specific action on homomeric ASIC1a, and probably heteromeric ASIC1a + ASIC2b, channels ( Fig. 3c and ref. 22 ). In addition, mambalgins, unlike PcTx1 (ref. 5 ), maintain a potent analgesia in mice deficient for the preproenkephalin gene ( Supplementary Fig. 10 ). Taken together, these results indicate that different pathways involving different subtypes of ASIC channels can lead to different types of central analgesia (opioid-sensitive or opioid-insensitive) ( Supplementary Fig. 12 ). They also indicate that, despite their capacity in vitro to inhibit homomeric ASIC1a and heteromeric ASIC1a + ASIC2b channels, the central analgesic action of mambalgins in vivo is mainly targeted to neurons expressing ASIC1a + ASIC2a channels ( Supplementary Fig. 11 ). It is essential to understand pain better in order to develop new analgesics 25 , 26 . The black mamba peptides discovered here have the potential to address both of these aims. They demonstrate a role for different ASIC channel subtypes in both the central and peripheral pain pathways, providing promising new targets for therapeutic interventions against pain, and they are themselves powerful, naturally occurring analgesic peptides of potential therapeutic value. Methods Summary Mambalgins were purified from Dendroaspis polylepis polylepis venom (Latoxan) using cation exchange and reverse-phase chromatography steps. The molecular mass and peptide sequence were determined by Edman degradation and/or tandem mass spectrometry sequencing, and used to design primers for cloning the full-length mambalgin-1 cDNA from venom. The three-dimensional structure was modelled from five templates of three-finger snake toxins. Recordings of recombinant ASIC currents were made after expression in Xenopus laevis oocytes 14 and COS-7 cells, and patch-clamp recordings of native ASIC currents were obtained from primary cultures of mouse dorsal spinal cord neurons 16 , hippocampal neurons and rat DRG neurons. Pain behaviour experiments were performed in C57BL/6J mice after intrathecal (or intracerebroventricular) injection of mambalgins (10 µl at 34 µM), PcTx1 (10 µl at 10 µM) or morphine (Cooper, 10 µl at 3.1 mM), in the absence or in the presence of naloxone (Fluka, 2 mg kg −1 ), and after intraplantar injection of mambalgins (10 µl at 34 µM) and PcTx1 (10 µl at 10 µM). The effects of mambalgins on acute pain were also tested in ASIC1a –/– (ref. 27 ) and Penk1 –/– (ref. 28 ) mice. Inflammation was evoked by intraplantar 2% carrageenan (Sigma) or 2% formalin. In vivo ASIC1 and ASIC2 gene silencing experiments were performed by repeated intrathecal injections (2 µg per mouse twice a day for 3 days) of siRNAs targeting ASIC1a + b, ASIC2a + b or only ASIC2a, mixed with i-Fect (Neuromics) as previously described 4 . Online Methods Electrophysiology in Xenopus laevis oocytes Venom fractions were tested on rat ASIC1a expressed in Xenopus oocytes as previously described 14 , applied 30 s before the acid stimulation.
Purification, peptide sequencing and mass spectrometry The venom of the black mamba Dendroaspis polylepis polylepis (Latoxan) was purified by gel filtration and cation exchange 30 . The active fraction was loaded on a reversed-phase column (C18 ODS, Beckman) and eluted with a linear gradient of acetonitrile containing 0.1% TFA. Molecular mass and peptide sequence were determined by matrix-assisted laser desorption/ionization–time of flight (MALDI–TOF)/TOF-MS (Applied Biosystems). Protein identification was performed with Mascot at 50 p.p.m. mass tolerance against NCBI (non-redundant) and Swiss-Prot databases. Data were analysed using the GRAMS386 software. Partial sequence was obtained by amino (N)-terminal Edman degradation and protease digestion (V8 protease and trypsin) followed by tandem mass spectrometry sequencing. Peptide analysis was performed using an offline nano-high-performance liquid chromatography system (Dionex U3000) coupled to a 4800 MALDI-TOF/TOF mass spectrometer. Cloning of the mambalgin-1 cDNA Mambalgin-1 cDNA was cloned from the black mamba venom 31 . Lyophilized venom (Sigma) was reconstituted in lysis/binding buffer and polyadenylated mRNAs were captured on oligo(dT25) magnetic beads (Dynal). After first-strand cDNA synthesis, PCR amplification was performed with degenerate sense (TGITTYCARCAYGGIAARGT) and antisense (YTTIARRTTICGRAAIGGCAT) primers designed from the partial peptide sequence obtained from biochemical purification. A specific sense primer (ACACGAATTCGCTATCATAACACTGGCATG) was designed from the new sequence and used with a non-specific poly-dT30 antisense primer (ACACGAATTCdT30) to amplify the 3′ coding and non-coding sequences of mambalgin-1. Exploiting the very strong conservation of the 3′ and 5′ non-coding sequences among snake toxins 32 , we designed a sense (ACACGAATTCTCCAGAGAAGATCGCAAG ATG ) and an antisense (ACACGAATTC-ATTTAGCCACTCGTAGAG CTA ) primer to amplify the complete open reading frame of the toxin precursor. Template-based three-dimensional modelling of mambalgin-1 We modelled the mambalgin-1 protein using the semi-automatic pipeline of the webserver @TOME version 2.1 (ref. 33 ). The amino-acid sequence was submitted to the server to perform the fold recognition and detect structural homologue templates from the Protein Data Bank. Active fold-recognition tools were HHSEARCH 34 , SP3 35 , PsiBlast 36 and Fugue 37 . Five templates were selected among snake venom toxins with four disulphide bonds and aligned with Muscle 38 . The homology modelling of mambalgin-1 was performed with Modeller 9v8. The overall quality of models was estimated by calculating the one- and three-dimensional compatibility TITO score 39 , by analysing the Ramachandran plot with MolProbity and comparing it with scores of the templates 40 , and by visual inspection. Electrostatic potential calculation Electrostatic properties of mambalgin-1 (isosurfaces of +3 kBT/ec (∼+77 mV) and −3 kBT/ec (∼−77 mV)) and of the human ASIC1a channel (isosurfaces of +10 kBT/ec (∼+256 mV) and −10 kBT/ec (∼−256 mV)) were calculated with the Adaptive Poisson–Boltzmann Solver 41 . Electrophysiology in COS cells and neurons COS-7 cells were transfected with pCI-ASICs mixed with pIRES2-EGFP and jet-PEI. Primary cultures of dorsal spinal neurons were obtained from C57Bl6J mouse embryos (embryonic day (E)14) 16 . Primary cultured hippocampal neurons were prepared from C57Bl6J mice (P3–P5) as previously described for rats 42 .
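The millivolt equivalents quoted in the electrostatic potential calculation above are simply multiples of the thermal voltage kBT/ec, and can be checked with a short calculation. A minimal sketch, assuming T = 298 K (the temperature used for the APBS calculation is not stated in the text):

from scipy import constants

T = 298.0  # kelvin (assumed; not stated in the Methods)
kT_over_e = constants.k * T / constants.e  # thermal voltage in volts (~25.7 mV)

for n in (3, 10):
    print(f"{n} kBT/ec ~ {1000 * n * kT_over_e:.0f} mV")
# prints ~77 mV and ~257 mV, matching the stated isosurface values to rounding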
Primary cultured sensory neurons were prepared from dorsal root ganglia of Wistar rats (5–7 weeks) as previously described 43 . Data were recorded in the whole-cell configuration, sampled at 3.3 kHz and low-pass filtered at 3 kHz using pClamp8 software (Axon Instruments). The pipette solution was (in mM) KCl 140, NaCl 5, MgCl 2 2, EGTA 5, HEPES-KOH 10 (pH 7.4); the bath solution was (in mM) NaCl 140, KCl 5, MgCl 2 2, CaCl 2 2, HEPES-NaOH 10 (pH 7.4). MES was used instead of HEPES for pH values from 6 to 5. The bath solution for neurons was supplemented with 10 mM glucose, and with 20 µM CNQX/10 µM kynurenic acid for central neurons. The pipette solution for neurons contained (in mM) KCl 140, ATP-Na 2 2.5, MgCl 2 2, CaCl 2 2, EGTA 5, HEPES 10 (pH 7.3, pCa estimated at 7). Toxins were perfused at pH 7.4 with bovine serum albumin (0.05%) to prevent non-specific adsorption. Concentration–response curves were fitted with the Hill equation: I = I_max + (I_min − I_max) × C^(n_H)/(C^(n_H) + IC50^(n_H)), where I is the relative current amplitude, C is the toxin concentration, IC50 is the toxin concentration that half-maximally inhibits the current and n_H is the Hill coefficient. Plethysmography Respiratory frequency (breaths per minute) was recorded from 7 to 67 min after intrathecal injection of vehicle, mambalgin-1 or morphine-HCl (according to the same protocol as for pain behaviour) or intraperitoneal injection of morphine-HCl (24.8 mM, 50 µl) with a whole-body plethysmograph (Emka Technologies). Pain behaviour experiments Experiments were performed on awake 7- to 11-week-old (20–25 g) male C57BL/6J, ASIC1a –/– 27 and Penk1 –/– 28 mice following the guidelines of the International Association for the Study of Pain and were approved by the local ethics committee (agreements NCA/2007/04-01 and NCE/2011-06). Mambalgin-1 (34 µM), mambalgin-2 (20 µM), PcTx1 (10 µM) and morphine-HCl (3.1 mM; Cooper) dissolved in vehicle solution (in mM: NaCl 145, KCl 5, MgCl 2 2, CaCl 2 2, HEPES 10, pH 7.4, 0.05% BSA for intrathecal injection, and NaCl 154, 0.05% BSA for intraplantar injection) were injected intrathecally (10 µl) between spinal L5 and L6 segments or intraplantarly (10 µl). Naloxone (Fluka, 2 mg kg −1 in NaCl 0.9%, 50 µl) was injected subcutaneously (dorsal injection) 10 min before intrathecal injection. Inflammation was evoked by intraplantar injection in the left hindpaw of 2% carrageenan (Sigma-Aldrich) (20 µl) 2 h before intrathecal or intraplantar injection of peptides, morphine or vehicle. Knockdown experiments Locally designed siRNAs targeting ASIC1 (si-ASIC1a/1b, GCCAAGAAGUUCAACAAAUdtdt), ASIC2 (si-ASIC2a/2b, UGAUCAAAGAGAAGCUAUUdtdt) and ASIC2a (si-ASIC2a, AGGCCAACUUCAAACACUAdtdt) were validated in vitro in COS-7 cells transfected with myc-ASIC1a, ASIC1b, myc-ASIC2a or myc-ASIC2b, and the relevant siRNA or a control siRNA (si-CTR, GCUCACACUACGCAGAGAUdtdt), with TransIT-LT1 and transIT-TKO (Mirus), respectively. Cells were lysed 48 h after transfection and processed for western blot analysis to assess the amount of ASIC1a protein with the anti-Myc A14 antibody (1:500; Santa Cruz Biotechnology) or the anti-ASIC1 antibody (N271/44; 1:300; NeuroMab) and a monoclonal antibody against actin (AC-40; 1:1,000; Sigma) as a loading control. siRNAs were intrathecally injected into mice (2 µg per mouse at a ratio of 1:4 (w/v) with i-Fect (Neuromics)) twice a day for 3 days.
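The Hill fits described in the electrophysiology methods above were performed in commercial analysis software (see Data analysis below); for illustration only, the same four-parameter fit can be reproduced with SciPy. The concentration and current values here are hypothetical placeholders, not data from the paper:

import numpy as np
from scipy.optimize import curve_fit

def hill(C, I_max, I_min, IC50, n_H):
    # Relative current as a function of toxin concentration C
    return I_max + (I_min - I_max) * C**n_H / (C**n_H + IC50**n_H)

conc = np.array([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7])     # molar (illustrative)
current = np.array([0.98, 0.90, 0.70, 0.45, 0.25, 0.15])  # normalized current

popt, _ = curve_fit(hill, conc, current, p0=[1.0, 0.1, 1e-8, 1.0])
I_max, I_min, IC50, n_H = popt
print(f"IC50 = {IC50:.2e} M, Hill coefficient n_H = {n_H:.2f}")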
After 3 days of treatment, the paw-flick latency was measured and the residual effect of mambalgin-1 (intrathecal or intraplantar, 34 µM) or the carrageenan (intraplantar, 2%)-induced hyperalgesia was tested. For validation of the in vivo effect of the siRNAs, lumbar DRGs or lumbar dorsal spinal cord were removed after the last siRNA injection for total RNA isolation (RNeasy kits, Qiagen) followed by cDNA synthesis (AMV First-Strand cDNA synthesis kit (Invitrogen) and High Capacity RNA-to-cDNA Kit, (Applied Biosystems)). The relative amounts of ASIC transcripts were evaluated by quantitative reverse-transcription PCR in a Light-Cycler 480 (Roche Products). Pre-designed and validated TaqMan assays for ASIC1 (ASIC1a and ASIC1b; Mm01305998_mH), ASIC1a (Mm01305996_m1), ASIC2 (ASIC2a and ASIC2b; Mm00475691_m1), ASIC3 (Mm00805460_m1) and 18S ribosomal RNA (Mm03928990_g1) were from Applied Biosystems. Each cDNA sample was run in triplicate and results were normalized against 18S and converted to fold induction relative to control siRNA treatment. Data analysis Data were analysed with Microcal Origin 6.0 and GraphPad Prism 4. Areas under the time course curves (response latency in seconds × time after injection in minutes) were calculated for each mouse (over the first 37 min for tail-flick and the entire time range for paw-flick) and expressed as mean ± s.e.m. After testing the normality of data distribution, the statistical difference between two different experimental groups was analysed by unpaired Student’s t -test, and between more than two different experimental groups by a two-way analysis of variance followed by a Newman–Keuls multiple comparison test when P < 0.05. For data in the same experimental group, a paired Student’s t -test was used. *** or ###, P < 0.001; ** or ##, P < 0.01; * P < 0.05; NS, P > 0.05. Accession codes Primary accessions GenBank/EMBL/DDBJ JX428743 Data deposits Mambalgin-1 cDNA and mambalgin-1 and -2 protein sequences have been deposited in GenBank and UniProt Knowledgebase under accession numbers JX428743 , B3EWQ5 and B3EWQ4, respectively.
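The knockdown-validation qPCR above reports results normalized against 18S and converted to fold induction relative to control siRNA treatment, but does not name the conversion used. A standard choice would be the 2^−ΔΔCt method, sketched below with hypothetical triplicate Ct values (the method and the numbers are assumptions, shown only to make the normalization step concrete):

import numpy as np

# Hypothetical Ct values for an ASIC transcript and the 18S reference
ct_target_si = np.array([26.1, 26.3, 26.0])   # siRNA-treated, target gene
ct_ref_si = np.array([9.8, 9.9, 9.7])         # siRNA-treated, 18S
ct_target_ctr = np.array([24.5, 24.4, 24.6])  # si-CTR-treated, target gene
ct_ref_ctr = np.array([9.8, 9.7, 9.9])        # si-CTR-treated, 18S

d_ct_si = ct_target_si.mean() - ct_ref_si.mean()     # normalize to 18S
d_ct_ctr = ct_target_ctr.mean() - ct_ref_ctr.mean()
fold = 2 ** -(d_ct_si - d_ct_ctr)                    # relative to si-CTR
print(f"Expression after knockdown: {fold:.2f}-fold of control")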
Scientists have used the venom of Africa's lethal black mamba to produce a surprising outcome in mice, one they hope to replicate in humans: effective pain relief without toxic side effects. French researchers wrote in the journal Nature on Wednesday that peptides isolated from black mamba venom may be a safer painkiller than morphine. In mice at least, the peptides bypass the receptors in the brain that are targeted by morphine and other opioid compounds, which sometimes cause side effects like breathing difficulties or nausea. Nor do the peptides pose the same risk of addiction or drug abuse. "We have identified new natural peptides, mambalgins, from the venom of the snake Black Mamba that are able to significantly reduce pain in mice without toxic effect," study co-author Anne Baron of France's Centre national de la recherche scientifique (national research institute) told AFP. "It is remarkable that this was made possible from the deadly venom of one of the most venomous snakes," she said of the study published in the journal Nature. "(It) is surprising that mambalgins, which represent less than 0.5 percent of the total venom protein content, has analgesic (pain-relief) properties without neurotoxicity in mice, whereas the total venom of black mamba is lethal and among the most neurotoxic ones." Morphine is often regarded as the best drug to relieve severe pain and suffering, but it has several side effects and can be habit-forming. The black mamba's venom is among the fastest acting of any snake species, and a bite will be fatal if not treated with antivenom: the poison attacks the central nervous system and causes respiratory paralysis. Mice are among the agile snake's favourite prey in the wild in eastern and southern Africa. Baron said researchers were confident the peptides would also work in humans "and are very interesting candidates as painkillers", but much work remains to be done. A patent has been issued and a pharmaceutical company is examining the possibilities, she said.
doi:10.1038/nature11494
Medicine
Looking at the role epigenetics plays in the ways cancer behaves
Jacob Househam et al, Phenotypic plasticity and genetic control in colorectal cancer evolution, Nature (2022). DOI: 10.1038/s41586-022-05311-x Timon Heide et al, The co-evolution of the genome and epigenome in colorectal cancer, Nature (2022). DOI: 10.1038/s41586-022-05202-1 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-05311-x
https://medicalxpress.com/news/2022-10-role-epigenetics-ways-cancer.html
Abstract Genetic and epigenetic variation, together with transcriptional plasticity, contribute to intratumour heterogeneity 1 . The interplay of these biological processes and their respective contributions to tumour evolution remain unknown. Here we show that intratumour genetic ancestry only infrequently affects gene expression traits and subclonal evolution in colorectal cancer (CRC). Using spatially resolved paired whole-genome and transcriptome sequencing, we find that the majority of intratumour variation in gene expression is not strongly heritable but rather 'plastic'. Somatic expression quantitative trait loci analysis identified a number of putative genetic controls of expression by cis -acting coding and non-coding mutations, the majority of which were clonal within a tumour, alongside frequent structural alterations. Consistently, computational inference on the spatial patterning of tumour phylogenies finds that a considerable proportion of CRCs did not show evidence of subclonal selection, with only a subset of putative genetic drivers associated with subclone expansions. Spatial intermixing of clones is common, with some tumours growing exponentially and others only at the periphery. Together, our data suggest that most genetic intratumour variation in CRC has no major phenotypic consequence and that transcriptional plasticity is, instead, widespread within a tumour. Main Genetic intratumour heterogeneity (gITH) is an inevitable consequence of tumour evolution 2 . Extensive gITH has been documented across human cancer types 1 , and its precise pattern within an individual cancer is a direct consequence of the evolutionary dynamics driving the development of the tumour 3 . Consequently, clones that undergo positive, negative or neutral selection can be identified through analysis of gITH 4 , 5 , 6 . However, clonal selection in cancer operates on the phenotypic characteristics of a cell—for example, the ability of a cancer cell to evade predation by the immune system 7 or to survive in oxygen-poor environments 8 , 9 —and can be modulated by spatial competition 9 , 10 , 11 , 12 , 13 , 14 . Knowledge of the genotype–phenotype map of cancer cells is limited and thus, while genomics offers us a window into which clones are selected, it provides limited information on precisely why they are selected. Relatedly, the extent to which subclonal mutations in tumours lead to phenotypic change is unclear. RNA sequencing (RNA-seq) enables high-throughput profiling of phenotypic characteristics of cancer cells by quantitative measurement of gene expression levels 15 . Historically, studies have focused on intertumour differences in gene expression patterns and have led to the identification of gene expression signatures that correlate with clinical outcomes. In colorectal cancer (CRC), the focus of this study, consensus molecular subtypes (CMS) 16 or cancer cell-intrinsic gene expression subtypes (CRIS) 17 exemplify this approach. Because the transcriptome is a feature of the cancer cell phenotype, it is natural to view changes in expression, and the pattern of transcriptomic intratumour heterogeneity (tITH), as 'functional' and the substrate for tumour evolution. tITH could potentially be driven entirely by underlying heritable (epi)genetic variation that evolves during tumour growth.
However, the observation that local invasion is polyclonal in both CRC 18 and early breast cancer 19 challenges the notion that cancer cell phenotype (here, the ability to invade) is driven solely by the accrual of genetic mutations. Furthermore, observations of rapid transcriptional shifts following treatment (for example, in melanoma 20 ) and, in CRC, variation in subclone proliferation rates through serial retransplantation despite largely stable patterns of genetic alterations 21 , discount the notion that transcriptomic phenotypes are determined solely by clonal replacement. It has previously been determined that most driver mutations are clonal in metastatic CRC, meaning that intratumoral transcriptional variation often happens in the absence of the acquisition of new key driver mutations 22 . Collectively, these studies suggest that phenotypic characteristics are at least partially plastic—they can vary without acquiring a new heritable (epi)genetic alteration to drive expression changes, for instance as a response to the cellular environment. In patient samples we cannot measure longitudinally the exact same clones or cells, and so here we define a trait as plastic if it varies independently of evolutionary history. Conversely, non-plastic traits are fixed through tumour evolution. Here we analyse spatially resolved paired genomic (whole-genome sequencing), epigenomic (assay for transposase-accessible chromatin using sequencing, or ATAC-seq), and transcriptomic (whole-transcript RNA-seq) profiling, coupled with computational modelling, to characterize the evolution of phenotypic heterogeneity in CRC. Paired DNA–RNA data enable assessment of the interrelationship between genetic evolution and gene expression patterns, and of the functional consequence of gene expression change for cancer evolution. We analysed our spatially resolved, multiomic, single-gland profiling dataset from primary CRCs 23 that were part of our Evolutionary Predictions in Colorectal Cancer (EPICC) study. Single-gland profiling allowed multimodal DNA, chromatin and RNA characterization of the same small clonal unit of tissue (glands or crypts). We focused our analysis on 297 samples from 27 CRCs (mean, 11 samples per tumour; range, 1–38) in which we had obtained high-quality, full-transcript RNA-seq data. Paired deep and shallow whole-genome sequencing and chromatin accessibility analysis by ATAC-seq were available for a subset of these samples. An analysis of the ATAC-seq data is available in the associated paper 23 . Expression heterogeneity in CRC First, we explored the heterogeneity of gene expression within and between CRCs. We clustered a filtered set of 11,401 genes (including removal of very lowly expressed genes and those significantly negatively correlated with purity; Methods ) using both the mean and variance of gene expression within each tumour (Fig. 1a ), and separated the dendrogram into four groups ( Methods ): group 1 had high average expression and relatively low variance in gene expression (‘highly expressed, limited heterogeneity’); groups 2 and 3 had progressively lower average gene expression and high variance in expression, whereas group 4 genes had low average gene expression and low variability between samples from the same tumour (Fig. 1b,c and Supplementary Table 1 ). 
Meta-pathway analysis showed weak, non-significant enrichment for pathways involved in cell growth and death in group 1, and significant enrichment for cancer-related genes in group 2 and pathways related to replication and repair in group 3 (Fig. 1d ). Group 4 was weakly and non-significantly enriched for signalling pathways but, due to generally low expression, it was excluded from further analyses. We confirmed that transcriptional heterogeneity evident in group 2 genes in tumours was less prominent in an equivalent analysis of normal colon single-cell RNA-seq (scRNA-seq) data, thus excluding the possibility that the gene expression variation we observed was simply the natural transcriptional noise of colon cells ( Methods and Supplementary Figs. 1 and 2 ). Fig. 1: Heterogeneity of gene expression and phylogenetic signal in CRC. a , Heatmaps showing clustering of genes by expression level across tumours (left) and expression variation within tumours (right). Hierarchical clustering showed four distinct groups, groups 1–4. Units are scaled by column in each heatmap. b , c , Summary box plots per gene group (group 1, 891 genes; group 2, 2,444 genes; group 3, 5,033 genes; group 4, 3,033 genes). Mean expression level ( b ) and intratumour heterogeneity of expression ( c ) per group, as measured by s.d. d , Meta-KEGG pathway analysis showing which pathway categories are most over-represented in each group (after removal of 'infectious disease: bacterial' and 'neurodegenerative disease'—most significant in group 1). e , f , Phylogenetic trees and heatmaps of genes with evidence of phylogenetic signal (at P < 0.05) for tumours C551 ( e ) and C554 ( f ). g , Heatmap of genes with recurrent phylogenetic signal across tumours (those which were found to have evidence of phylogenetic signal in at least three tumours). h , Results of chi-squared test showing whether gene groups were enriched for phylogenetic genes (those with evidence of phylogenetic signal in at least one tumour—"Phylo") compared to all other genes ("Non-phylo"). i , Enrichment of KEGG PPAR signalling pathway for recurrently phylogenetic genes. j , Example phylogenetic tree and pathway enrichment heatmap for tumour C559. Pathways are ordered by decreasing significance of phylogenetic signal. k , Heatmap showing recurrence of phylogenetic signal of pathways across tumours. Pathways are ordered by decreasing recurrence. Refer to pathway key in Extended Data Fig. 4 for pathway names. *P < 0.05, ** P < 0.01, ***P < 0.001; Mean norm., mean gene expression in normal samples; Mean mean exp., mean of mean gene expression per tumour; Mean var., mean standard deviation of gene expression; MedPval, median P -value from forest of 100 trees; MedLambda, median λ value from forest of 100 trees; NumRec, number of tumours in which gene has evidence of phylogenetic signal; Num Sig, number of tumours in which pathway has evidence of phylogenetic signal; d.f., degrees of freedom. We repeated the clustering analysis using hallmark pathways 24 ( Methods ) rather than individual genes (Extended Data Fig. 1a ), and separated the dendrogram into four groups of pathways based on the degree and heterogeneity of enrichment score within and between cancers, respectively (Extended Data Fig. 1b,c ). Hallmark pathways were grouped into 'classes' according to their biological mechanism (oncogenic, immune, stromal and so on) 25 .
Homogeneously enriched pathways (pathway group 1) showed moderate but not significant enrichment for cellular stress response; heterogeneously enriched pathways (pathway group 2) were moderately but not significantly enriched for oncogenic signalling (Extended Data Fig. 1d ), congruent with the gene-level result. Pathway group 4 (low average pathway enrichment and high heterogeneity) contained two pathways, epithelial–mesenchymal transition and angiogenesis; these were both classed as stromal, meaning that pathway group 4 was enriched for stroma-related pathways (Extended Data Fig. 1d ). Consensus molecular subtypes 16 and CRIS 17 are useful approaches in classification of CRC by gene expression patterns. We investigated the intratumour heterogeneity of these classifiers. For CMS, only 2 out of 17 tumours with sufficient samples for analysis were homogeneously classified (both CMS3; Extended Data Fig. 2a ). For CRIS, only a single tumour was homogeneously classified (CRIS-A; Extended Data Fig. 2b ). CRIS classification exhibited higher intratumour expression heterogeneity than CMS (Extended Data Fig. 2a,b ), and heterogeneity remained when the analysis was limited to only those samples that could be subtyped with high accuracy (Extended Data Fig. 2e–h ). Correspondence between CRIS and CMS type calls was weak (Extended Data Fig. 2c ). We note that others have published data showing the heterogeneity of molecular subtypes in CRC 26 , 27 and the discordance between CRIS and CMS classifications 17 , 28 . The genes used for both CMS and CRIS classification were depleted for highly homogeneously expressed genes (group 1; Extended Data Fig. 2d ). Consequently, both CRIS and CMS classifiers exhibited extensive ITH. Together, these analyses showed that gene expression programmes that define cancer cell biology and interactions with the surrounding tumour microenvironment were not uniformly expressed across CRCs. Evolution of expression heterogeneity We sought to understand the genetic determinants of the observed tITH. If variability in gene expression was caused by genetic change within the tumour (that is, if tITH is caused by gITH), then gene expression variability should mirror genetic ancestry. Phylogenetic signal is a statistical method derived from evolutionary biology that measures the degree to which phenotypic (dis)similarity between species is explained by genetic ancestry, and can be quantified by Pagel’s λ statistic 29 , 30 (Supplementary Fig. 3 ). We assessed the phylogenetic signal of gene expression heterogeneity in each of our CRCs with sufficient paired RNA-seq whole-genome sequencing (WGS) data (114 samples from eight tumours; median 11 samples per tumour, range 6–31). Phylogenetic trees for each tumour were constructed from WGS data ( Methods ) and terminal nodes overlaid with gene expression profiles (Fig. 1e,f and Extended Data Fig. 3 ). Pagel’s λ was computed for 8,368 genes from groups 1–3 (as defined in Fig. 1a ), with group 4 genes removed due to low average expression. Within each tumour a median of 166 genes (range 67–2,335) had expression levels with detectable phylogenetic signal ( P < 0.05), though with the exception of cancer C559 no associations remained after multiple testing correction. The number of genes with phylogenetic signal (at P < 0.05) did not significantly correlate with the number of samples per tumour ( P = 0.25; Supplementary Fig. 4 ). 
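Pagel's λ, as used above, rescales the off-diagonal entries of the phylogenetic variance–covariance matrix and asks which scaling best explains a trait under a Brownian-motion model: λ near 1 means trait similarity tracks shared ancestry, λ near 0 means it does not. The authors used dedicated phylogenetics tooling; the sketch below is only a minimal, self-contained maximum-likelihood version on a toy three-tip tree, not their implementation:

import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(lam, C, y):
    # Multivariate-normal NLL with lambda scaling the off-diagonals of C
    Cl = C * lam
    np.fill_diagonal(Cl, np.diag(C))
    n = len(y)
    Ci = np.linalg.inv(Cl)
    one = np.ones(n)
    mu = (one @ Ci @ y) / (one @ Ci @ one)   # GLS estimate of the trait mean
    r = y - mu
    s2 = (r @ Ci @ r) / n                    # ML estimate of the variance scale
    _, logdet = np.linalg.slogdet(Cl)
    return 0.5 * (n * np.log(2 * np.pi * s2) + logdet + n)

C = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.1],
              [0.1, 0.1, 1.0]])   # toy shared-branch-length matrix
y = np.array([2.1, 2.3, 0.9])    # toy expression values at the tips

res = minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), args=(C, y), method="bounded")
print(f"Pagel's lambda ~ {res.x:.2f}")

In practice, a likelihood-ratio test against λ = 0 would yield the kind of per-gene P values reported above.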
The above analyses were rerun using standard log-normalization of gene expression and there was a high overlap between genes with evidence of phylogenetic signal, indicating that the normalization method has a negligible impact on results (Supplementary Fig. 5 ). Adjustment of expression for tumour content (purity) before running phylogenetic signal analysis was also found to have a minimal impact on results ( Methods and Supplementary Fig. 6 ). Post hoc power analysis indicated that our dataset was sufficiently sized to enable detection of the heritability of early subclonal, large-effect changes in gene expression (Supplementary Fig. 7 ); the expression of most genes did not show this pattern of heritability. Only 61 genes had expression patterns that recurrently mirrored phylogenetic ancestry in at least three tumours (Fig. 1g ). Group 1 genes (highly expressed, limited heterogeneity) were enriched for phylogenetic signal whereas group 3 genes (moderately expressed, moderate heterogeneity) were significantly depleted for phylogenetic signal (Fig. 1h ). Interestingly, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway peroxisome proliferator-activated receptor (PPAR) signalling, involved in prostaglandin and fatty acid metabolism 31 was statistically over-represented in this recurrently phylogenetic set of genes (false discovery rate (FDR) = 0.0075, STRINGdb analysis; Fig. 1i ). Links between PPAR metabolism and CRC have previously been reported 32 , 33 . Analogous assessment of phylogenetic signal at the level of gene expression pathways (Fig. 1j and Extended Data Fig. 4 ; at P < 0.05, only cancer C559 showed associations after correction for multiple testing) showed two pathways with recurrent evidence of phylogenetic signal in at least three tumours: (1) fatty acid metabolism, related to the PPAR signalling pathway, which was identified in the gene-level analysis, and (2) MYC_TARGETS_V2 that contains genes regulated by MYC signalling (Fig. 1k ). Phylogenetic signal at pathway level was not related to pathway class (as used in Extended Data Fig. 1a,d ). Thus, in our dataset, the expression of most pathways was not strongly related to genetic ancestry. We defined phenotypic plasticity as gene expression changes that occurred independently of evolutionary history, possibly as a consequence of external stimulus from the tumour microenvironment. To examine this, phylogenetic trees and expression-based dendrograms were compared, showing few instances in which genetic history mirrored current levels of gene expression (Extended Data Fig. 5 and Supplementary Fig. 8 ). Across the cohort, the level of genetic intermixing of clones across tumour spatial regions was uncorrelated with the level of gene expression heterogeneity between regions (Supplementary Fig. 9 ). To specifically examine the influence of tumour microenvironment, we tested whether gene expression of tumour glands was clustered by tumour region (Supplementary Fig. 10 ), observing significant clustering in 4 of 11 tumours (FDR < 0.05; Methods and Supplementary Fig. 11 ). We used CIBERSORTx 34 to quantify immune cell infiltration in our samples and tested for association between the degree of infiltration and overall difference in gene expression, finding a significant but weak association ( R 2 = 0.21; Methods and Supplementary Fig. 12 ), with the caveat that there is inherent uncertainty in RNA-seq deconvolution in general. 
Together, in support of previous research studying how the microenvironment can determine gene expression 35 , 36 , these analyses provided evidence that the tumour microenvironment could influence plastic gene expression programmes in tumour cells irrespective of accrued genetic changes in those cells. Genetic determinants of gene expression Somatic mutations altering gene expression are a potential mechanistic explanation of phylogenetic signal. We used a simple linear regression framework ( Methods ), inspired by the expression quantitative trait loci (eQTL) approach used in human population genetics 37 , to detect cis associations between inter- and intratumour somatic genetic heterogeneity and gene expression. In total, 5,927 genes had cis somatic genetic variation in at least two samples ( n = 167 samples with matched RNA-seq and WGS data and at least two samples per tumour), comprising n = 2,422 non-synonymous genic mutations (mutations were single-nucleotide variants (SNVs) or indels), n = 20,790 non-genic (enhancer) mutations and extensive somatic copy number alterations (SCNAs). Of these genes, 1,529 (25.8%) had expression significantly correlated with inter- or intratumour somatic genetic variation (including both mutations and copy number alterations; FDR < 0.01, Storey's π = 0.1007; Fig. 2a and Supplementary Table 2 ), which we termed eQTL genes. A higher FDR cut-off of 10% was assessed, but this had only a negligible impact on results. Fig. 2: Genetic control of expression with eQTL. a , The number of genes with significant models for each data type. b , Distribution of regression coefficients (effect sizes) for each data type. c , d , Volcano plots highlighting selected genes significant for SCNA ( c ) and Mut eQTLs ( d ) (linear regression two-sided t -tests; P adj , FDR-adjusted P values). e , In comparison with non-synonymous mutations (NS), enhancer (Enh) mutations tended to have large effect sizes and a higher proportion of positive effect sizes. f , The proportion of subclonal mutations associated with detectable changes in cis gene expression was significantly lower than for clonal eQTL mutations. g , Visualization of Fisher's exact tests showing that gene–mutation combinations were more likely to be eQTLs if they were associated with recurrent phylogenetic genes (genes found to have evidence of phylogenetic signal in at least three tumours) for subclonal mutations, and that this was not significant for clonal mutations. Phylo and Non-phylo indicate whether a gene had evidence of phylogenetic signal in the tumour in which the mutation was present. Two-sided Fisher's exact tests, P values not corrected for multiple testing. Somatic copy number alterations contributed to expression changes of 1,163 out of 1,529 (76.1%) eQTL genes (Fig. 2b,c and Supplementary Table 2 ), but the magnitude of the effect on expression was generally small (Fig. 2b ; median effect size 0.30 s.d. in expression change per allele copy). A positive correlation between copy number and expression was observed for 1,082 genes but, interestingly, a negative correlation was observed for 81. Positive correlations were enriched at loci with total copy number one and four (Supplementary Fig. 13a,d ) whereas negative correlations were disproportionately more common at genes with total copy number two or three (copy number two includes cases with copy-neutral loss of heterozygosity, copy number three includes unbalanced gains; Supplementary Fig. 13b,c ).
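Per gene, the cis-eQTL framework described above amounts to regressing expression on the local somatic genotype of each sample. A hedged sketch of one such test on simulated data follows; the authors' actual model (detailed in their Methods) may differ in its covariates and encoding:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                   # samples pooled across tumours
mutation = rng.integers(0, 2, n)         # 0/1 cis mutation status
copy_num = rng.integers(1, 5, n)         # local total copy number
expr = 0.3 * copy_num + 1.5 * mutation + rng.normal(0, 1, n)  # synthetic

X = sm.add_constant(np.column_stack([mutation, copy_num]))
fit = sm.OLS(expr, X).fit()
print(fit.params[1:])    # effect sizes: per mutant allele, per copy
print(fit.pvalues[1:])   # two-sided t-test P values, as in Fig. 2c,d

Across thousands of genes, the resulting P values would then be corrected for multiple testing (for example, with statsmodels' multipletests using method="fdr_bh") before applying a threshold such as the FDR < 0.01 used above.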
Given this pattern, we speculate that the negative correlations between copy number and expression are due to dominant-negative activity of the amplified allele. We note that this idea is consistent with cell line research which found that single-chromosomal gains can function as tumour suppressors 38 . Mutations, both coding and non-coding, were associated with gene expression variation in 508 eQTL genes (Fig. 2b,d ) and, typically, the magnitude of the association was much greater than for SCNAs (mean effect size 1.92 versus 0.30 s.d. for mutation versus single-copy number change; Fig. 2b ). For coding somatic mutations, approximately equal numbers of mutations associated with an increase versus a decrease in expression were observed (33 coding mutations with increasing expression versus 27 with decreasing expression, P = 0.4). Non-coding enhancer somatic mutations were associated with the greatest changes in gene expression observed in our cohort, and were more likely to be associated with increases in expression (486 increases versus 258 decreases, P = 6.3 × 10 –17 ; Fig. 2e ). The expression of 175 genes was significantly associated with both SCNAs and mutations, indicating how the combination of somatic mutation and copy number alterations can potentially determine the gene expression phenotype of cancer cells. We used the Hartwig metastatic CRC cohort 39 to validate eQTL results: 22 eQTL mutations had sufficient variation present in the Hartwig cohort to detect associations of the magnitude observed in our cohort and, of these, 9 (41%) were validated (Supplementary Fig. 14 ). The unexplained gene expression variation for the remaining 13 variants could be due to germline, trans or other epigenetic effects. A post hoc power analysis found that we were powered (that is, at least 80% power) to detect effect sizes greater than 0.94 (s.d. in expression change; Supplementary Fig. 15 ). Assessment of germline SNPs showed some outliers that may have had a small impact on our eQTL analysis, possibly due to variations in patient genetic ancestry ( Methods and Supplementary Fig. 16 ). With this in mind, and because we did not examine trans effects, we emphasize that eQTLs are only associations and not proof of a mechanistic link. In a separate subgroup analysis of mutations in microsatellite-stable (MSS) versus microsatellite-unstable (MSI) cases, mutations in MSS tumours were more frequently associated with large effects on gene expression (Supplementary Fig. 17 ) whereas the addition of MSI status as a cofactor had minimal impact on tumour eQTL associations (correlation of R 2 values between original and MSI-added analysis, P < 1.1 × 10 −16 , R 2 = 0.855; Supplementary Figs. 18 and 19 ). Overall, only 2.4% (89 out of 3,705) of subclonal mutations in which eQTL status could be investigated were associated with detectable changes in cis gene expression, compared with 3.6% (688 out of 19,256)—many more in absolute numbers—of clonal eQTL variants ( P = 3.7 × 10 −4 ; Fig. 2f ). Genes associated with subclonal eQTL mutations were enriched for phylogenetic signal (odds ratio (OR) = 3.5, P = 0.02; Fig. 2g ), and this significant enrichment was absent for genes associated with clonal mutations (OR = 1.7, P = 0.11; Fig. 2g ). Thus, whereas most somatic mutations did not result in a detectably large direct change in cis gene expression, each tumour contained a small number of subclonal genetic variants (median 1) significantly associated with altered gene expression.
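The enrichment comparison above is a 2×2 contingency test; a minimal version of the Fisher's exact test is sketched below. The counts are illustrative only (chosen merely to give an odds ratio near the reported 3.5), not the study's actual table:

from scipy.stats import fisher_exact

#          eQTL   non-eQTL   (gene-mutation pairs, hypothetical counts)
table = [[12, 40],     # gene has phylogenetic signal in that tumour
         [77, 900]]    # gene without phylogenetic signal

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, P = {p_value:.4f}")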
We emphasize that finding variants associated with gene expression changes does not necessarily imply that those variants underwent selection within the tumour. Selection on cancer driver mutations Cancer genomics studies have established that only a few genes actually contribute directly to cancer evolution, and these genes are termed drivers 40 . We therefore focused on understanding the evolutionary consequences of putative CRC driver mutations on tumour expansion. We used our extensive single-gland, multi-region WGS data (deep WGS: median depth 35×, between 3 and 15 samples per patient (median, 8); low-pass WGS: median depth 1.2×, between 1 and 22 samples per patient (median, 8)) for accurate identification of clonal and subclonal somatic variants (from ref. 23 ) and to call somatic copy number alterations in each tumour (note that this included additional tumours lacking RNA-seq data). We specifically examined the clonality of 69 genes on the IntOGen list 41 of putative CRC driver genes ( Methods and Fig. 3a ), excluding PARP4 , LRP1B and KMT2C due to a high number of false-positive low-frequency variants in these genes. The most frequently mutated drivers in colorectal cancer, such as APC , KRAS , TP53 and SOX9 , as well as other known drivers including PTEN , EGFR , CCDC6 , PCBP1 , ATM and CTNNB1 , were invariably clonal in cancers, except for one tumour with a subclonal KRAS mutation and another with a subclonal TP53 mutation. These findings are consistent with previous multi-region sequencing studies 42 but contradict claims of frequent subclonality of these genes in single-sample bulk data 43 , highlighting the need for methods to identify functional intratumour heterogeneity 44 . Fig. 3: Phylogenetic driver analysis. a , Non-synonymous somatic mutations and indels in IntOGen CRC driver genes, with clonality status indicated. b , c , dN/dS analysis of clonal versus subclonal driver gene mutations, divided between MSS ( b ) (7 adenomas and 24 cancers) and MSI ( c ) (1 advanced adenoma and 6 cancers). Error bars indicate 95% confidence intervals. MMR, mismatch repair (genes). We used analysis of the ratio of non-synonymous to synonymous substitutions (dN/dS) 45 , which quantifies the excess of non-synonymous mutations in a gene, to detect selection across the complete set of cancer drivers ( Methods ). We found clear evidence of positive selection (dN/dS greater than 1) for clonal missense and truncating mutations in IntOGen driver genes in MSS cancers (Fig. 3b ), and dN/dS values were higher for the IntOGen list than for a second, pan-cancer driver list 45 , confirming that the IntOGen list was enriched for true CRC drivers. For subclonal variants, we found evidence of subclonal selection of truncating variants and missense mutations, with dN/dS higher than 1 for CRC-specific IntOGen variants but not for the pan-cancer driver list 45 , suggesting that a subset of putative subclonal CRC driver mutations were under positive selection in growing tumours. For MSI tumours, subclonal selection was less evident from dN/dS, probably because the higher mutation rate generates a much larger number of neutral mutations in cancer driver genes and thus dilutes the dN/dS signal but, nevertheless, selection for clonal missense and truncating mutations was significant in MSI cancers (Fig. 3c ).
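dN/dS, used above via the dNdScv method (ref. 45 ), compares the observed ratio of non-synonymous to synonymous mutations with the ratio expected if mutations accumulated neutrally. The toy calculation below illustrates only this core idea; all numbers are hypothetical, and dNdScv's trinucleotide-context and gene-level mutation-rate corrections are omitted:

# Hypothetical counts in a driver gene set
n_obs, s_obs = 48, 10        # observed non-synonymous and synonymous mutations
n_sites, s_sites = 3.2, 1.0  # relative mutational opportunity under neutrality

dnds = (n_obs / s_obs) / (n_sites / s_sites)
print(f"dN/dS = {dnds:.2f}")  # values above 1 suggest positive selection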
We then examined dN/dS values for each of the IntOGen driver genes in a larger dataset, combining our data with The Cancer Genome Atlas (TCGA) colon and rectal cancer cohorts and additional data 46 , 47 ( n = 1,253 CRCs). Most genes in the list showed no evidence of selection, with the majority of the top significant genes being the ‘usual suspects’ in CRC drivers 42 (Extended Data Fig. 6 ). For an orthogonal assessment of driver gene function we turned to the DepMap dataset 48 that assesses the functional consequence of gene knockouts across a large panel of cell lines ( Methods ). Most CRC candidate drivers showed no evidence of essentiality (a measure of cell viability following gene perturbation) across the CRC cell lines of the DepMap dataset, whereas the two most likely under strong selection in our cohort, KRAS and PIK3CA, were significantly essential in many CRC cell lines and were found to be significantly differentially essential when contrasting mutant versus wild-type (WT) CRC cell lines (Student’s t -test P < 10 −6 ; Supplementary Fig. 20 ). Thus, surprisingly, these analyses indicated that even putative driver mutations in CRCs sometimes have limited phenotypic consequence when measured in terms of subclonal selection. The lack of detectable selection on CRC driver mutations is consistent with previous reports of widespread neutral subclonal evolution within CRCs 5 , 49 , 50 . Evolutionary dynamics within tumours We assessed the evolutionary dynamics of individual driver mutations on a tumour-by-tumour basis through assessment of phylogenetic tree shape and the related clonal structure of the tumour (Fig. 4 and Extended Data Fig. 7 ; Methods ). ‘Balanced’ trees, in which similar branch lengths are found across tumour samples and regions, are consistent with effectively neutral evolution and were observed for a large proportion of tumours. A clear outlier was tumour C539, in which the tree contained a particularly large clade that spanned multiple geographical regions of the tumour (all A and part of B). This ‘unbalanced’ tree was suggestive of subclonal selection 51 , and indeed, the expanded clade contained a KRAS G12C mutation (Fig. 4h ). We used BaseScope, a commercial in situ RNA-based mutation detection technique 52 ( Methods ), to visualize subclones containing a putative driver alteration. We tested the KRAS G12C subclonal variant in C539 (Fig. 4h and Supplementary Fig. 21 ) and the PIK3CA E545K subclonal variant in C537 (Fig. 4i and Supplementary Fig. 22 ). This analysis confirmed the spatial segregation of subclones, showing heterogeneity in a subset of the blocks, whereas we also found complete absence of the clone in a large proportion of other areas of the tumour (Supplementary Table 3 ). Furthermore, and consistent with our previous reports 4 , 49 , tumours could be split into two groups characterized by subclonal intermixing between spatially distinct regions (16 out of 28, 57% of tumours) versus strict segregation by geography (Supplementary Fig. 23 ). Fig. 4: Spatial phylogenomics of colorectal cancer. a , In this MSI tumour (C516) the cancer (regions A and B) and macroscopically diagnosed advanced adenoma (regions C and D) formed a large mass and were physically adjacent to one another. Photo indicates sampling quadrant, not precise location. b , The advanced adenoma shared multiple drivers with the cancer but showed early divergence. c , Tumour C551 presented with a cancer and a concomitant adenoma that were very distant, indicating two independent events. 
d , The phylogenetic tree was characterized by clonal intermixing of diverging lineages collocated in the same region (for example, some lineages from regions A, B and C were genetically close). Subclonal drivers of unknown significance were present, including a non-expressed variant in USP6 and an ARID1A mutation. Early divergence between the cancer and adenoma F was evident, with no shared drivers between the two lesions. e , Tumour C561 presented with a large cancer mass and multiple small concomitant adenomas. f , Again, there was no notable somatic alteration in common between the different lesions. The cancer showed clonal amplification of MYC and only a benign subclonal mutation in FAT4. g , Phylogenetic reconstruction of four further tumours with annotated driver events. h , i , Phylogenetic trees with matched in situ mutation detection with BaseScope for the KRAS G12C subclonal variant in C539 ( h ) and the PIK3CA E545K subclonal variant in C537 ( i ). Staining by haematoxylin and eosin (H&E) and BaseScope were each performed once; scale bars, 50 μm. We assessed the functional consequence of 38 subclonal putative driver mutations from the IntOGen list that were detected in MSS cancers. PolyPhen 53 scores showed that 8 out of 38 (21%) mutations were putatively benign (marked in grey in Fig. 4 and Extended Data Fig. 7 ). Paired RNA-seq showed only wild-type reads for 5 out of 38 (13%) putative driver mutations (also marked in grey). We could not assess mutant transcript expression for 25 out of 38 mutations (66%) because of missing RNA-seq data or lack of reads covering the variant location. Of those, 13 out of 25 (52%) were in genes with dN/dS approximately 1 in the TCGA COAD and READ cohorts. Six out of 38 (16%) variants were identified as deleterious by PolyPhen and were also found to be expressed in matched RNA-seq (marked in bold). At the individual tumour and mutation level these analyses showed that, of the large number of putative driver events identified in our cohort (Fig. 3a ), many showed no evidence of being under selection: 14 out of 38 (37%) variants were either benign or not expressed in the cancer (although we note that expression could not be assessed for two-thirds of variants), and a further 10 out of 38 (26%) variants were in genes with dN/dS of approximately 1 in the external cohorts. However, positive dN/dS values for pooled cases suggested that some of these subclonal variants were under selection. To identify these, we designed a spatial inference framework able to detect and measure subclonal selection in our dataset. Spatial inference of growth dynamics We decided to further probe for evidence of evolutionary consequence of heritable alterations in individual tumours. Computational models allow the simulation of different types of spatial growth dynamics and have provided insights into tumour evolution and the effect of spatial constraints 8 , 9 , 10 , 11 , 12 , 13 , 14 . Here we used computational modelling in combination with approximate Bayesian computation (ABC) to infer subclonal selection and the impact of spatial effects from our spatially resolved WGS data. For this, we extended our previous model based on cell replication, death and mutation 51 to incorporate more realistic spatial growth conditions and branch overdispersion (Extended Data Fig. 8a and Methods ). We note that we did not specifically model interactions between subclones.
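To make the role of d_push concrete, below is a deliberately minimal 2D lattice caricature of boundary-limited growth: a cell may divide only if an empty site lies within Chebyshev distance d_push of it, and the daughter is placed on such a site (abstracting away the cell pushing, death, mutation accrual and sampling of the authors' actual simulation). Small d_push confines divisions to the rim, giving surface-like growth; large d_push approaches exponential growth:

import numpy as np

rng = np.random.default_rng(1)
L, d_push, n_steps = 101, 2, 4000
grid = np.zeros((L, L), dtype=bool)
grid[L // 2, L // 2] = True              # single founding cell

def empty_within(x, y):
    # Empty lattice sites within Chebyshev distance d_push of (x, y)
    x0, y0 = max(0, x - d_push), max(0, y - d_push)
    sub = grid[x0:x + d_push + 1, y0:y + d_push + 1]
    ex, ey = np.nonzero(~sub)
    return list(zip(ex + x0, ey + y0))

for _ in range(n_steps):
    xs, ys = np.nonzero(grid)
    i = rng.integers(len(xs))            # a random cell attempts division
    sites = empty_within(xs[i], ys[i])
    if sites:                            # division only succeeds near the front
        grid[sites[rng.integers(len(sites))]] = True

print(f"tumour size after {n_steps} attempted divisions: {grid.sum()} cells")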
We simulated the genome-wide accrual of somatic mutations in each lineage, including both neutral mutations (Extended Data Fig. 8b–d , bottom) and selected (driver) mutations (Extended Data Fig. 8b–d , top), showing characteristic patterns caused by subclonal selection. Furthermore, distinct clonal patterning was observed for peripheral versus exponential growth (governed by the width of the growing outer rim of cells ( d push ); Extended Data Fig. 8e and Supplementary Fig. 24 ), in which clonal intermixing was greater in the exponential case. To compare the model with data, we simulated our empirical spatial sampling scheme (Fig. 4a,c,e , ref. 23 and Supplementary Fig. 1 ) on our virtual tumours (Extended Data Fig. 8f ). This generated realistic whole-genome sequencing synthetic data that we used to reconstruct a (synthetic) phylogenetic tree, thus comparing real data (Fig. 5a ) and the corresponding matched simulation (Fig. 5b and Extended Data Fig. 8g ). The corresponding spatial patterns of subclonal heterogeneity could be visualized from the simulation (Fig. 5c ). Bayesian inference (sequential Monte Carlo, or ABC–SMC 54 ) of model parameters was performed on a patient-by-patient basis by matching synthetic and empirically observed trees, making use of regularization with the Akaike information criterion (AIC) for model selection 55 (Fig. 5d and Extended Data Fig. 8h ; see Methods and Supplementary Note for details). Specifically, the number of parameters ( k ) is used to regularize the negative log-likelihood (NLL) of the models and calculate the AIC and, more importantly for estimating the confidence in model selection, the ΔAIC value (the difference in AIC between compared models). A ΔAIC greater than 4 is considered to represent strong support for one model over another 55 ; this was the threshold used to identify strongly preferred models. The relationship between AIC and the critical distance of summary statistics between real and simulated trees is reported in Fig. 5e . Generally good agreement between simulated and observed phylogenetic tree structures was observed despite the relative simplicity of our model, with quantitative assessment of the goodness of fit confirmed by the likelihood and the posterior predictive P value distribution (Fig. 5f ). For example, C539 was predicted to contain a selected subclone (Fig. 5a–f ) and carried a KRAS G12C mutation that presumably drove the clonal expansion (Fig. 5a ). Tumour C548 was inferred to be neutrally evolving (Fig. 5g–l ) and thus predicted to carry no strongly selected subclonal driver mutations, despite there being subclonal mutations in putative driver genes in this case. Fig. 5: Inference of evolutionary dynamics in individual tumours. a , b , Target tree for C539 ( a ) versus best simulated tree for C539 ( b ). c , Spatial patterns and sampling of the simulation. d , Model selection considering the number of parameters ( k ) and NLL to calculate AIC. AIC differences (ΔAIC) greater than 4 indicate strong preference of a model. e , AIC value with respect to distance from the data ( ε ) for each of the models. Dotted line indicates final distance of ABC–SMC; dashed line indicates distance of trees with added random uniform noise (0.5–2.0). f , Posterior predictive P value (one-sided). Dashed line indicates average distance between target and simulated trees. g , Target (real) tree for C548. h , Simulated tree for C548 identified during the inference. i , Spatial patterns of simulation that generated the data. j , Model selection for C548.
k , AIC value versus ε . l , Posterior predictive P value (one-sided). m , Proportion of instances in which models were selected by model selection. AIC and ΔAIC values are reported, the latter indicating the proportion of tumours that can be explained by both models. n , Inference of selection (AIC) was not associated with a higher number of samples per tumour (one-sided bootstrap test, n = 15 neutral and n = 12 non-neutral). o , Subclonal dN/dS values for carcinoma with and without selection (AIC). Numbers of tumours per group: 3 neutral MSI, 3 selected MSI, 12 neutral MSS and 9 selected MSS carcinomas. Error bars indicate 95% confidence intervals. p , Marginal posterior distributions of parameters, split by neutral (green), selected (orange) and selected ×2 (purple). Across the whole cohort (see the supplementary inference result booklet), we found strong evidence of subclonal selection in 7 out of 27 tumours (ΔAIC greater than 4; Fig. 5m ). In four of these seven tumours, a putative subclonal driver mutation was present in the selected clade and the variant was expressed in the RNA (subclone drivers are listed in Supplementary Table 4 and reported in Figs. 3a and 4 and Extended Data Fig. 7 ). These included (1) C518, with subclonal selection in A and B driven by PTEN missense mutation C136R; (2) C531, with subclonal selection in B driven by SMAD4 missense mutation A118V; (3) C538, with subclonal selection in D driven by RNF43 nonsense mutation Q153*; and (4) C539, with subclonal selection in A and part of B driven by KRAS missense mutation G12C. In five additional tumours we detected a weak preference for the subclonal selection model. These included (1) C524, in which subclonal selection in B appeared to be driven by a PIK3CA C378R mutation, and (2) C525, in which subclonal selection in C appeared to be driven by a PIK3CA Q546P mutation. The selective advantage of PIK3CA and KRAS mutations agrees with our orthogonal assessment of CRC driver genes using the DepMap database (Supplementary Fig. 20 ). Evidence of selection in the phylogenetic trees included a significantly longer branch containing the selected event (for example, Fig. 5a , selection event 1), or two distinct regions having a more recent common ancestor with respect to the others (for example, Fig. 5a , selection event 2). In the remaining 15 out of 27 tumours the preferred subclonal growth model was neutral (Fig. 5m ). The number of samples per tumour (that is, more extensive tumour sampling) did not confound model selection (Fig. 5n ). Notably, orthogonal dN/dS analysis on the IntOGen driver gene list confirmed the computational modelling results. Specifically, putative subclonal driver gene mutations in tumours predicted to be neutrally evolving showed a dN/dS value of 1 whereas the point estimate was appreciably higher than 1 for driver genes in tumours predicted to experience subclonal selection (Fig. 5o ). This also supported the absence of subclonal selection, even in small clades that may not have undergone sufficient expansion to be detectable by our inference method. These results also illustrate that our spatial inference framework could be used for accurate assessment of the evolutionary consequence of putative driver mutations. Full parameter estimation is reported in Fig. 5p: overdispersion of edge length ( D ), mutation rate per division ( m ), width of the growing outer rim of cells ( d push ), growth rate of the first and second subclones ( λ 2 and λ 3 , respectively) and population size at their introduction ( t 2 and t 3 , respectively).
The increased growth rate of selected subclones was inferred to be as much as 20 times higher than that of the background clone, and most selected clones originated relatively early during tumour expansion (tumour size fewer than 50,000 cells). Inferred mutation rates were 9.8 × 10 −9 and 46.6 × 10 −9 mutations per base pair per division in MSS and MSI tumours, respectively, consistent with previous measurements 56 . Tumours either grew exponentially (high d push ) or grew more slowly, at the periphery only (low d push ). Notably, exponential growth was over-represented in neutrally evolving tumours (Fisher's exact test, P = 0.022). Epigenome and transcriptome of subclones Subclone evolution within a cancer is a natural 'competition experiment' between human cells with similar genetic background in the same microenvironment that facilitates delineation of phenotypic differences between subclones and the consequences of driver alterations. We examined matched ATAC-seq and RNA-seq data from selected subclones versus background clones in six and five, respectively, out of seven tumours with strong selection for which we had sufficient matched 'omics' data. Enrichment analysis of differentially expressed genes between the subclone and background clone highlighted consistent dysregulation of focal adhesion pathways for C531, C542 and C559. The epithelial–mesenchymal transition programme was upregulated in C542 whereas MYC + E2F targets were upregulated in C531 (see Supplementary Fig. 25a for gene-level analysis and Supplementary Fig. 26 for pathway analysis). Analogous analysis of somatic chromatin accessibility alterations showed promoter loss of accessibility of PPP2R5C , a regulator of TP53 and ERK, in C542, which had no known genetic driver mutation in the selected clade (Supplementary Fig. 25b ). Finally, we assessed whether heritable changes in gene expression were indicative of subclonal selection. There were eight tumours in which both adequate phylogenetic signal analysis and assessment of subclone selection were possible. There was no association between the number of genes with some evidence of phylogenetic signal and the presence of subclone selection (Wilcoxon P = 0.686; Supplementary Fig. 27a ), nor for spatial segregation versus intermixing of subclones ( P = 0.393; Supplementary Fig. 27b ). Furthermore, the percentage of tested eQTL genes that were significant in each tumour was not associated with neutral evolutionary dynamics ( P = 0.968; Supplementary Fig. 27c ), nor was the magnitude of heritable gene expression changes ( P = 0.195; Supplementary Fig. 27d ). Together this suggests transcriptional variation even within a selected clone. A visual schematic illustrating the main results is shown in Extended Data Fig. 9 . Discussion Heterogeneity in gene expression is common, both between and within patients. Leveraging the fact that clone ancestry is encoded by somatic mutations in the genome, here we determined that only a small proportion of the observed subclonal transcriptomic variation shows strong evidence of heritability through tumour evolution (under 1% of expressed genes and under 5% of hallmark pathways).
This points towards phenotypic plasticity—the ability of a cancer cell to change phenotype without underlying heritable (epi)genetic change—as a common phenomenon in CRC. We previously considered that the observation of infrequent stringent selection for subclones within CRCs is consistent with the notion that phenotypic plasticity is established within cancer cells at the outset of cancer growth 50 . Here our explicit analysis of transcriptomic variation supports this hypothesis. Nevertheless, we do find evidence of heritable changes in gene expression in all CRCs examined. Of 29,949 associations between somatic mutations and gene expression, only 796 (702 clonal) were associated with significant changes in cis gene expression and so can be thought of as potentially functional mutations. In any individual tumour we detected a median of 1 (maximum, 34) subclonal mutation that putatively affected gene expression and, notably, the presence of heritable changes in gene expression was not necessarily related to whether the cell lineage with the variant was undergoing subclonal selection. This emphasizes that phenotypic changes do not necessarily correlate with changes in fitness—the newly induced expression of a particular gene may have no relevance to the ability of that cell to survive or grow in its current microenvironment, and indeed across species most genetic 'tinkering' is near neutral or even deleterious 57 . Thus, at least some of the observed tITH is part of the standing phenotypic variation in the tumour but is not selected at the time of the expansion of the primary tumour, even if it is the consequence of the accumulation of mutations during tumour growth. Care should be taken not to conflate transcriptional variation with evidence of important variation in tumour cell biology. We suggest that this variation could partially be a consequence of tumour evolution being 'out of equilibrium', in which an expanding population with high genomic and phenotypic instability generates widespread variation that stabilizing selection has not yet had time to prune. Nevertheless, such variation may be important for future tumour evolution, such as in response to treatment. We emphasize that the limited size of our cohort reduced the power to detect the many small associations between genetics and expression that may occur within tumours, and also means that we were unlikely to observe recurrent events across cancers. Future single-cell analyses, rather than the tumour glands used here, are likely to be better powered to reveal DNA–RNA associations 58 . However, we argue that the large effects, which we were generally powered to see, are those most likely to be relevant for tumour biology. We emphasize that our analysis reports only correlations and is not proof of a mechanistic link, and that there are other potential confounders including patient genetic background, epigenetic effects and unexplored trans effects. Beyond this, we show that assessment of intratumour heterogeneity can serve as a 'controlled experiment', enabling quantitative measurement of ongoing evolutionary competition within the human body between different lineages with distinct subclonal mutations, providing a platform for functional assessment of the 'driverness' of putative driver mutations in vivo in human malignancies. Ongoing collection of associated relapses and metastatic deposits will allow assessment of those subclones and drivers responsible for disease progression.
Our study makes progress in elucidating the role of genetic control and clonal evolution within primary untreated CRC, suggesting that phenotypic plasticity is widespread and underlies pervasive transcriptional heterogeneity. Methods Sample preparation and sequencing The method of sample collection and processing is described in a companion article (ref. 23). Sequencing and basic bioinformatic processing of DNA-, RNA- and ATAC-seq data are included there as well. Gene expression normalization and filtering The number of non-ribosomal protein-coding genes on the 23 canonical chromosome pairs used for quality control was 19,671. Raw read counts uniquely assigned to these genes were converted into both transcripts per million (TPM) and variance-stabilizing-transformed (VST) counts via DESeq2 v.1.24.0 (ref. 59). A list of expressed genes (n = 11,667) was determined by filtering out those for which fewer than 5% of tumour samples had at least ten TPM. To concentrate on tumour epithelial cell gene expression, genes were further filtered out if they negatively correlated with purity as estimated from matched DNA-seq data (see associated ref. 23 for the methodology of purity estimation). Specifically, for the 157 tumour samples that had matched DNA-seq and therefore accurate purity estimates, a linear mixed-effects model of 'expression (VST) ~ purity + (1 | patient)' (where '~' denotes 'is modelled as a function of') was compared via a chi-squared test to 'expression ~ (1 | patient)'. The linear mixed-effects models were built with lmer from the lme4 R package v.1.1-28 (ref. 60). Genes with a negative coefficient for purity in the first model and FDR-adjusted P < 0.05—suggesting that purity had significantly affected expression—were filtered out; this led to a filtered list of 11,401 expressed genes. Gene expression clustering For each tumour with at least five tumour samples (n = 17 tumours; note that, except for the large advanced C516 adenoma, adenomas used in ref. 23 did not undergo RNA-seq), the mean and s.d. of expression were calculated for every filtered expressed gene (n = 11,401) using DESeq2 VST normalized counts (inspired by ref. 61). Euclidean distance matrices of mean expression and s.d. of expression were calculated based on non-MSI tumours. Distance matrices were combined with 'fuse' from the analogue R package v.0.17-6 (ref. 62) with equal (50/50) weighting, and complete linkage hierarchical clustering was performed. Four gene groups were determined using 'cutree' (k = 4) from the dendextend R package v.1.15.2 (ref. 63). For plotting of Fig. 1a,b, tumours were clustered with the approach described above and both the mean expression and s.d. of expression matrices were scaled by columns. Conversion to entrez gene IDs and gene symbols was carried out in biomaRt v.2.50.3 (ref. 64) using Ensembl v.90. Where IDs were missing, newer Ensembl versions and manual curation were used (the complete list of gene information is available in Supplementary Table 2). For the KEGG meta-pathway analysis, pathways and pathway categories were downloaded from . Enrichment of KEGG pathways for each gene group was determined with enrichKEGG from ClusterProfiler v.4.2.2 (ref. 65), and pathways enriched at FDR < 0.1 were input into 'enricher' to determine pathway category enrichment (FDR < 0.1). The pathway categories 'Neurodegenerative disease' and 'Infectious disease: bacterial' were removed owing to their irrelevance to CRC cell biology.
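As an illustration, a minimal sketch of this purity filter in R (hypothetical helper and object names; maximum-likelihood fits, REML = FALSE, are assumed so that the chi-squared likelihood-ratio comparison is valid):

library(lme4)

# Compare the purity model against the patient-only null for one gene,
# returning the purity coefficient and the likelihood-ratio-test P value.
purity_test <- function(expr_vst, purity, patient) {
  d    <- data.frame(expr = expr_vst, purity = purity, patient = patient)
  full <- lmer(expr ~ purity + (1 | patient), data = d, REML = FALSE)
  null <- lmer(expr ~ (1 | patient), data = d, REML = FALSE)
  lrt  <- anova(null, full)  # chi-squared likelihood-ratio test
  c(coef_purity = unname(fixef(full)["purity"]), p = lrt[["Pr(>Chisq)"]][2])
}

# Across genes: FDR-adjust, then drop genes with a significant negative purity term.
# res$fdr <- p.adjust(res$p, method = "BH")
# keep    <- !(res$coef_purity < 0 & res$fdr < 0.05)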
Analysis of normal colon scRNA-seq A scRNA-seq dataset derived from healthy intestine was accessed from Elmentaite et al. 66. scRNA-seq data for colon gut epithelium were downloaded from and filtered for cells from the colon in 'Healthy adults'. This left seven donors with a mean of 5,516 cells per donor (range, 1,410–16,828). Expression data were normalized with Seurat v.4.1.0 (ref. 67), and the mean and s.d. of each gene's expression within each donor were calculated. Genes were then filtered and grouped according to the groups identified in Fig. 1a, and plots were produced analogously to Fig. 1b,c. Pathway enrichment clustering Hallmark pathways were downloaded from MSigDB (msigdbr R package v.7.2.1) 24 with unrelated pathways (SPERMATOGENESIS, MYOGENESIS and PANCREAS_BETA_CELLS) removed from the analysis, and the COMPLEMENT pathway was renamed COMPLEMENT_INNATE_IMMUNE_SYSTEM. The pathways INTESTINAL_STEM_CELL 68 and WNT_SIGNALING ( ) were added. For each multi-region tumour (n = 17), the TPM expression of protein-coding genes converted to entrez gene IDs (n = 18,950) was used as input for single-sample gene set enrichment analysis using the GSVA R package v.1.42.0 (ref. 69). The mean and s.d. of enrichment were then recorded for each tumour. Because KRAS_SIGNALING_DN had average enrichment below zero it was removed from downstream analysis, leading to a final list of 48 pathways. Analogously to the genic analysis, the mean and s.d. of pathway enrichment were jointly used to determine four groups of pathways, whereas tumours were clustered and matrices normalized by column as before. Fisher's exact tests were subsequently performed to determine whether pathway classes 25 were significantly enriched/depleted in particular pathway groups. CMS and CRIS classifications were determined using the CMScaller R package v.2.0.1 (ref. 70). As recommended, raw gene counts were used as input with 'RNA-seq=TRUE', meaning that these counts underwent log2 transformation and quantile normalization. CMS and CRIS were predicted using templates provided in the CMScaller package, and samples were assigned to the subtype with the shortest distance. High-accuracy classifications were determined by running 1,000 permutations, where a classification was considered significant if the FDR-adjusted P value was under 0.05. Construction of phylogenetic trees Reconstruction of maximum-parsimony trees From deep WGS (dWGS) samples, maximum-parsimony trees were reconstructed with the Parsimony Ratchet method 71 implemented in the phangorn R package v.2.8.1 (ref. 72). Mutations with an estimated cancer cell fraction above 0.25 were considered to be mutated (state 1) and others to be non-mutated (state 0) in a given sample. The ratchet was run for a minimum of 100 and a maximum of 10⁶ iterations, and terminated after 100 rounds without improvement. The acctran algorithm 72, 73, 74, 75 was used to estimate ancestral character states. From these, the set of mutations \(M_e\) that were uniquely mutated (that is, changed from state 0 to state 1) on each edge \(e\) of the phylogeny was obtained. Addition of shallow WGS samples to the tree For any mutation \(i\), the number of reads supporting the variant, \(y_i\), and the total number of reads covering the locus, \(n_i\), in a shallow WGS (sWGS) sample were obtained from the bam files.
The mutation data were assumed to follow a binomial (Bin) distribution: $$y_i \sim \mathrm{Bin}(n_i, p_i),$$ where the success probability \(p_i\) is a function of the sample's purity \(\rho\), the number of mutated alleles \(m_i\) in tumour cells, the total copy number \(c_i\) in tumour cells and the copy number in contaminating normal cells, \(c_n = 2\), given by $$p_i = \frac{\rho m_i}{\rho c_i + (1-\rho)c_n} = \frac{\rho m_i}{2 - 2\rho + \rho c_i}.$$ For a set of mutations \(M_e\) from a given edge \(e\) of a tree \(T\), all, none or a fraction \(\pi_m\) of the mutations might be present in a sample. The marginal likelihood of the observed data \(D_e\) for the set of mutations is $$p(D_e \mid \pi_m) = \prod_{i=1}^{|M_e|} \big(\pi_m\, p(y_i \mid n_i, p_i) + (1-\pi_m)\, p(y_i \mid n_i, p_0)\big),$$ where \(p_0\) is the background noise of the WGS at an unmutated site. Assuming that mutated sites are not lost at any point in time, for a mutation from the edge \(e = (s,t)\) to be mutated in a sample, all variants on the path from the germline node \(r\) to the node \(s\) of this edge (\(r \rightsquigarrow s\)) also have to be mutated (that is, \(\pi_m = 1\)). All remaining mutations—that is, those that occur in the descendants of \(t\) or in different lineages of the tree—must be absent (that is, \(\pi_m = 0\)). The likelihood of the data \(D\) for all mutations that are part of the tree is $$L(e, \pi_m, p_0, \rho) = p(D_e \mid \pi_m) \prod_{e' \in \mathrm{Anc}(s)} p(D_{e'} \mid \pi_m = 1) \prod_{e' \notin \mathrm{Anc}(t)} p(D_{e'} \mid \pi_m = 0),$$ where \(\mathrm{Anc}(s)\) is the set of all ancestral edges on the path from \(r\) to \(s\). Maximum-likelihood estimates of the sample parameters \(\hat{e} \in E\), \(\hat{\pi}_m \in [0,1]\), \(\hat{p}_0 \in [0,1]\) and \(\hat{\rho} \in [0,1]\) were obtained for each sWGS sample by minimizing \(-\log(L)\), and samples were added at location \(\hat{x} = (\hat{e}, \hat{\pi}_m)\) of the tree. Estimation of copy number multiplicities The above analysis was restricted to mutations in regions in which no subclonal SCNA occurred. The multiplicity \(m_i\) of mutation \(i\) was estimated across the set of all samples \(S\) as $$m_i = \mathop{\mathrm{argmin}}_{m_{s,i} \in \{1, \ldots, c_{s,i}\}} \sum_{s \in S} -\log\left(\binom{n_{s,i}}{y_{s,i}}\, p_{s,i}^{\,y_{s,i}} (1 - p_{s,i})^{\,n_{s,i}-y_{s,i}}\right) \mathbb{I}_{s,i},$$ with \(p_{s,i}\) as defined above and where \(\mathbb{I}_{s,i}\) indicates whether the mutation \(i\) was detected in sample \(s\). Owing to potential issues with the accuracy of estimates for large copy numbers, only sites with copy number 0 < c < 4 were used. The tool for assignment of sWGS samples to a dWGS tree is available as the R package MLLPT at . Intermixing scores To calculate intermixing within a tree \(T\), each tip \(v \in V^1\) was labelled with the region of the tumour from which the corresponding sample was obtained.
Intermixing within the tree was then measured as $$I(T) = \frac{1}{|V^1|} \sum_{v \in V^1} \left(\frac{1}{|D_v|} \sum_{s \in D_v} \mathbb{I}_{m_v \ne m_s}\right), \qquad D_v := \{t \in V^1 \mid t \in \mathrm{desc}(\mathrm{pa}(v))\},$$ where \(\mathbb{I}_{m_v \ne m_s}\) is an indicator function denoting whether \(v\) and \(s\) had different labels, \(\mathrm{pa}(v)\) is the parent of \(v\) and \(\mathrm{desc}(v)\) is the set of all descendants of \(v\). Phylogenetic signal analysis Tumours with fewer than six paired DNA–RNA samples were excluded from this analysis, leaving 114 samples from eight tumours (median 11 samples per tumour, range 6 to 31). Additional sWGS samples, however, had zero branch length because mutations unique to a sample could not be called with the sWGS methodology. To account for these 'missing' unique variants, we inferred the probable number of unique variants from the matched dWGS samples. For each sWGS sample from a particular tumour region, a new tip branch length ('leaf length') was drawn from a Poisson distribution based on the mean number of unique mutations observed in each dWGS sample from the same spatial tumour region. DNA samples that did not have matched RNA-seq samples were then removed from the trees (with drop.tip from the ape R package v.5.6-1, ref. 76). This process was repeated 100 times for each tumour, leading to a forest of 100 phylogenetic trees with slightly varying branch lengths for each sWGS sample. In the genic phylogenetic signal analysis, Pagel's λ was calculated for group 1–3 genes (n = 8,368) using 'phylosig' from the phytools R package v.1.0-1 (ref. 77). This returns the maximum-likelihood Pagel's λ estimate and a P value for the likelihood ratio test with the null hypothesis of λ = 0. This analysis was performed for all 100 trees and the median λ and P value determined for each tumour, with median P < 0.05 indicating evidence of phylogenetic signal for that gene. Genes with recurrent phylogenetic signal were defined as those with evidence of phylogenetic signal in at least three tumours. The STRINGdb R package v.2.6.1 (ref. 78) was used to determine pathway enrichment of these recurrent phylogenetic genes, and 'string-db.org' was used for plotting of PPAR signalling genes. To assess how phylogenetic signal is affected by purity, the analysis was rerun with purity-corrected expression. The coefficients describing how purity determines gene expression had already been calculated during gene filtering (that is, the coefficient of purity in the 'expression ~ purity' regression for all DNA-matched samples; Methods), and samples used for phylogenetic analysis had matched DNA samples, allowing the use of accurate purity values. The expression of each gene (first normalized by the DESeq2 variance-stabilizing transformation) was then corrected with the following equation: $$\mathrm{Exp}_{\mathrm{pur}} = \mathrm{Exp}_{\mathrm{vst}} + (\text{purity coefficient}/\text{sample purity})$$ Phylogenetic signal analysis was then undertaken with purity-corrected expression (Supplementary Fig. 6). In the pathway phylogenetic signal analysis, pathway enrichment values were used as input for 'phylosig' for the 48 pathways. Evidence of phylogenetic signal was then determined as above. Recurrent phylogenetic pathways were defined as those with evidence of phylogenetic signal in at least two tumours, and Fisher's exact tests were used to determine enrichment/depletion in pathway groups and classes.
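As an illustration of the per-gene λ test described above, a minimal sketch using phytools (assuming trees, a list holding the 100 branch-length-variant trees, and expr, a samples × genes VST matrix whose row names match the tip labels):

library(phytools)

# Median Pagel's lambda and LRT P value for one gene over the tree forest;
# a median P < 0.05 is taken as evidence of phylogenetic signal.
lambda_for_gene <- function(trees, expr, gene) {
  res <- sapply(trees, function(tr) {
    x  <- setNames(expr[tr$tip.label, gene], tr$tip.label)
    ps <- phylosig(tr, x, method = "lambda", test = TRUE)  # LRT against lambda = 0
    c(lambda = ps$lambda, p = ps$P)
  })
  apply(res, 1, median)
}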
To determine the power for each tumour used in the phylogenetic signal analysis, gene expression was simulated and λ P values estimated. Gene expression was Poisson distributed across nodes and was increased by 5–100% across every clade of the tree. This was performed over the forest of 100 trees of differing branch length, and this process was then repeated 1,000 times. The power to detect evidence of phylogenetic signal for a particular expression percentage change at a particular clade was therefore inferred from the percentage of simulations that had a median (that is, over the 100 branch-length-variant trees) P < 0.05. Assessment of phenotypic plasticity For expression-based sample clustering, we calculated Euclidean distance matrices on genes from groups 1–3 (n = 8,368) and performed complete hierarchical clustering for each tumour with at least five RNA-seq samples (n = 17). The resulting dendrograms are plotted in Supplementary Fig. 10. To quantify space–gene expression correlations we constructed a permutation test. For tumours with at least ten samples (n = 11), cophenetic distance matrices were extracted from the dendrograms plotted in Supplementary Fig. 10. The sum of all cophenetic distances between samples from the same tumour region was then calculated to acquire a metric of expression correlation with region for each tumour. To determine the significance of this metric, sample names for the cophenetic distance matrix were randomly relabelled and the mixing statistic recalculated 10,000 times, followed by evaluation of whether the observed data were more extremely clustered than the random permutations (Supplementary Fig. 11). The intermixing scores used in Supplementary Fig. 9 were calculated as described under 'Intermixing scores'. To assess the impact of the tumour microenvironment we used CIBERSORTx 34, specifically with the LM22 signature file comprising 22 immune cell types 79, via the online portal. First, Euclidean distances between the gene expression vectors of pairs of samples in the same tumour were calculated based on the expression of the 8,368 genes used in the phylogenetic signal analysis. Euclidean distances were also calculated based on absolute scores from CIBERSORTx (note that CIBERSORTx was run using all genes). These two metrics were then plotted together for sample pairs from the same tumour and the correlation assessed (Supplementary Fig. 12). Genetic determinants of gene expression heterogeneity Tumours with at least two tumour samples were included in this analysis (153 tumour samples from 19 tumours, median four samples per tumour) and only loci mutated in at least two samples and connected to an expressed gene (groups 1–3 from Fig. 1) were analysed (22,961 mutated loci connected to 5,927 expressed genes—29,949 unique gene–mutation combinations). The following data were used as input for the linear model: Exp: a gene × sample matrix of variance-stabilized normalized gene expression of group 1–3 genes, converted to a z-score by subtracting the mean expression of all samples and dividing by the s.d. of all samples. CNA: a gene × sample matrix of the total copy number of the gene locus. If multiple copy number states were detected for the same gene, the segment overlapping most with the gene's locus was selected. Mut: a binary mutation × sample matrix in which mutations (SNVs and indels) were either within the enhancer region of the gene or a non-synonymous mutation within the coding region of the gene itself.
Enhancer links to genes were defined using 'double-elite' annotations from GeneHancer tracks 80. Some enhancer regions overlapped with the gene coding region, and non-synonymous mutations in these regions were annotated as both enhancer and non-synonymous. Purity: the purity of each sample as determined from dWGS or sWGS. In addition, 14 matched normal samples were added; these were assigned WT for all mutations, 2 for total copy number and 0 for purity. For each gene–mutation combination, the following linear model was implemented: Exp ~ Mut + CNA + Purity + Tumour, where 'Tumour' indicates whether the sample was a normal or tumour sample. A gene–mutation combination was said to be explained if the FDR-adjusted P value of the F-statistic for overall significance was less than 0.01. Storey's π0, the estimate of the overall proportion of true null hypotheses, was calculated using the qvalue R package v.2.26.0 (ref. 81). A gene–mutation combination was significantly affected by a variable (that is, Mut/CNA/Purity/Tumour) if the FDR-adjusted P value for the coefficient of that variable was under 0.05. For the analysis of clonality (Fig. 2f), a mutation was considered 'subclonal' if at least one mutation associated with that gene was not found in all matched DNA–RNA samples for at least one tumour. For the combination of eQTLs with phylogenetic analysis and clonality (Fig. 2g), a gene–mutation combination was considered an 'eQTL' if it was significant for Mut, 'subclonal' if it was not found in all matched DNA/RNA samples for at least one tumour, and 'phylogenetic' if the associated gene had significant phylogenetic signal in the tumour in which the mutation was present. To look for recurrence of eQTL mutations in the Hartwig cohort, mutation loci were first converted to hg19 using liftOver from the rtracklayer R package v.1.54.0 (ref. 82) and 'hg38Tohg19.over.chain' from . Two out of 22,961 loci could not be converted and were therefore discarded for this analysis. Converted loci were searched for in the CRC Hartwig cohort using the 'purple.somatic.vcf.gz' files. For Hartwig gene expression, 'adjTPM' values were used and converted to a z-score, while tumour purity was extracted from the metadata. For each locus with at least one mutated DNA–RNA Hartwig sample, the linear models Exp ~ Mut + Purity and Exp ~ Purity were compared via a likelihood ratio test. An eQTL was said to validate in Hartwig if the P value of the test was under 0.05 and the coefficient of the Mut variable had the same sign as the coefficient in the original eQTL analysis (that is, the mutation increased expression in both EPICC and Hartwig, or decreased it in both). A post hoc power analysis was carried out using pwr.t2n.test from the pwr R package v.1.3-0 (ref. 83). For each eQTL, the absolute mutation effect size was used as the input effect size with 'power' set to 0.99 and 'n2' set to the number of DNA–RNA Hartwig CRC samples (n = 394) minus the number of Hartwig samples with the mutation. The tool then returned the number of samples needed to detect the effect, and this number was multiplied by 1.15 given the non-parametric nature of the data. If the absolute input effect size was greater than 3.04, it was set to 3.04 because higher values returned a 'not available' result.
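A minimal sketch of the per gene–mutation model described above (hypothetical helper name; inputs are per-sample vectors taken from the Exp, Mut, CNA and Purity matrices, plus the tumour/normal indicator, all coded numerically):

# Fit Exp ~ Mut + CNA + Purity + Tumour for one gene-mutation pair and return
# the overall F-test P value plus the mutation coefficient and its P value.
fit_pair <- function(exp_z, mut, cna, purity, tumour) {
  s <- summary(lm(exp_z ~ mut + cna + purity + tumour))
  f <- s$fstatistic
  c(p_overall = unname(pf(f[1], f[2], f[3], lower.tail = FALSE)),
    beta_mut  = coef(s)["mut", "Estimate"],
    p_mut     = coef(s)["mut", "Pr(>|t|)"])
}

# Across all 29,949 combinations: FDR-adjust; 'explained' if fdr_overall < 0.01,
# significant for a variable (for example, Mut) if its coefficient's FDR < 0.05.
# res$fdr_overall <- p.adjust(res$p_overall, method = "BH")
# res$fdr_mut     <- p.adjust(res$p_mut,     method = "BH")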
MSI investigations for eQTL analysis A PCA of germline SNPs, plotted with ggbiplot v.0.55 (ref. 84), found a lack of bias, with the top two principal components accounting for only 16.6% of explained variation (Supplementary Fig. 16). Labelling tumours by MSI status also showed that principal component 1 slightly separated MSS from MSI tumours. To directly assess the effect of MSI on the eQTL analysis, the analysis was rerun twice, once with only MSS tumour samples (n = 149 across 15 tumours) and again with only MSI tumour samples (n = 18 across three tumours). Given the large difference in sample size, and therefore power, only mutations with very large (over 1.5) effect sizes were considered, to make the two analyses comparable. The absolute mutation effect sizes of 73 eQTLs from the MSS analysis were therefore compared with those of 293 eQTLs from the MSI analysis. A QQ-plot comparing these two datasets showed a difference in the distribution of effect sizes of significant eQTLs between the MSS and MSI analyses (Supplementary Fig. 17). Specifically, there was a higher proportion of MSS eQTLs at very large effect sizes in comparison with the MSI analysis. This is interesting because it suggests a difference in the genetic control of gene expression between MSS and MSI tumours. The original eQTL analysis was also rerun with MSI as a cofactor (Supplementary Fig. 18), and this was found to have a minor impact on the results. Notably, there was a small decrease in the number of significant eQTL genes (Supplementary Fig. 18a,b), non-coding enhancers were no longer significantly associated with increases in expression (P = 0.08; Supplementary Fig. 18e) and subclonal mutations were no longer more likely to be eQTLs (P = 0.17; Supplementary Fig. 18f). However, it should be noted that the direction of these effects did not change. Finally, the distribution of R² values was compared between the original analysis (without MSI as a covariate) and the analysis with MSI as a covariate. Supplementary Fig. 19 shows that, for models that were significant in both analyses, R² values were highly correlated (P < 1 × 10⁻¹⁶, R² = 0.855). It is worth noting that R² values tended to be higher for the analysis with MSI, and this was found to be significant (paired Wilcoxon signed rank test, P = 1.071 × 10⁻²⁴¹). Therefore, inclusion of MSI as a covariate marginally increased the amount of variance explained by each model, but R² values were very highly correlated with the original analysis. dN/dS analysis Per-patient variant calls were obtained from the VCF files and lifted to the hg19 reference genome using the rtracklayer R package v.1.54.0 (ref. 82). Variants were split into clonal (that is, present in all samples) and subclonal (that is, present in a subset of samples) mutations in cancer, as well as a set of mutations present in any of the adenomas. Patients were further split into MSI and MSS tumours. The dndscv model (dndscv R package v.0.1.0) 45 was fit separately for each of the four mutation sets. Default parameters were used, except that the removal of tumours based on their number of variants was deactivated. In addition to global dN/dS estimates from the fitted models, dN/dS estimates for CRC-specific driver mutations from IntOGen 41, 85 were obtained with the 'genesetdnds' function of dndscv.
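A minimal sketch of this dN/dS step (assuming muts is a data frame with columns sampleID, chr, pos, ref and mut, already split into the clonal/subclonal sets, and crc_drivers is a hypothetical character vector of IntOGen CRC driver genes; the Inf settings are one way of deactivating dndscv's per-sample mutation-count filters):

library(dndscv)

# Global dN/dS with per-sample mutation-count filters deactivated,
# mirroring the settings described above.
out <- dndscv(muts,
              max_muts_per_gene_per_sample = Inf,
              max_coding_muts_per_sample   = Inf)
out$globaldnds                       # global dN/dS estimates

# Gene-set dN/dS restricted to CRC-specific drivers.
genesetdnds(out, gene_list = crc_drivers)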
Gene essentiality analysis Cancer dependency profiles were downloaded from (version used: CRISPRcleanR_FC.txt) and scaled as previously described 86, making the median essentiality scores of previously known essential and non-essential genes equal to −1 and 0, respectively. The mutational status of selected putative cancer driver genes, used to produce the box plots in Supplementary Fig. 20 and to test differential gene essentiality across mutant versus WT cell lines, was obtained from Cell Model Passports 87. In situ mutation detection BaseScope in situ mutation detection was performed as previously described 52, using mutation-specific probes designed and provided by the manufacturer. Data were assessed manually: a tumour gland was denoted 'mutant' if at least one cell in the gland had detectable expression of the mutant transcript; otherwise it was classified as 'wild type' for that mutation. Spatial computational inference Inference of evolutionary dynamics using spatially resolved genomic data was performed by Bayesian fitting of a spatial agent-based model of clonal evolution to the observed molecular data. The model described growth, death, physical dispersion and mutation of individual tumour glands, and was a substantial modification of the framework previously described in ref. 51. Full details are provided in the Supplementary mathematical note. Transcriptomic and epigenetic characterization of selected clones Differential expression analysis was run using DESeq2 (ref. 59), comparing RNA samples in inferred selected regions with all other samples from that tumour. The analysis was also rerun with random shuffling of sample labels to isolate the signal of the selected subclone, and genes found to be differentially expressed in more than 5% of shuffled analyses were excluded. Volcano plots of significant differentially expressed genes were plotted with EnhancedVolcano v.1.12.0 (ref. 88) (Supplementary Fig. 25a). To perform gene set enrichment analysis 89, all remaining genes were ordered by DESeq2's test statistic, and enrichment of Gene Ontology annotations, KEGG pathways and Hallmark pathways was tested (FDR < 0.05) using gseGO, gseKEGG and GSEA, respectively, from ClusterProfiler 65. Significant results are shown in Supplementary Fig. 26. We also performed differential ATAC-seq peak analysis between selected subclones and background clones. To assess the subclonality of ATAC-seq peaks while controlling for purity, a likelihood-ratio test from DESeq2 was used to compare a 'full model' of '~ purity + clone' to a 'reduced model' of '~ purity'. ATAC-seq peaks were considered to be significantly altered in selected clones when the adjusted P value was below 0.05 (Supplementary Fig. 25b). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Gene expression data, somatic mutation calls (VCFs from Mutect2+Platypus), copy number calls (Sequenza and QDNAseq), fraction of mutated microsatellites (MSIsensor), ATAC-seq insertion sites and allele counts of somatic SNVs in all sample types are available at Mendeley ( ). Sequence data (processed BAM files) have been deposited at the European Genome-phenome Archive (EGA), which is hosted by the EBI and CRG, under study no. EGAS00001005230. Access to these data is restricted and subject to application. Source data are provided with this paper.
Code availability Complete scripts to replicate all bioinformatic analyses and perform simulations and inference are available at , and .
Two teams of researchers, working independently of one another, have conducted studies to learn more about the role that epigenetics plays in the behavior of cancerous tumors. The first group, a team at the Institute of Cancer Research in the U.K., working with colleagues from several other institutions in the U.K., analyzed thousands of samples of bowel cancers from different patients, looking for instances of epigenetic changes. The second team, with members from around the globe, focused on multiple samples taken from the same tumor—they also looked for changes due to epigenetics. Both teams have published papers outlining their work in the journal Nature. For many years, medical scientists have believed that most, if not all, cancers develop due to mutations in DNA, leading to abnormal tissue growth in the form of tumors. In more recent years, researchers have found evidence showing that not all cancers have a simple genetic origin. Instead, evidence has shown that some have an epigenetic factor. Through epigenetics, age or environment exert an influence on the way that the DNA code in cancer cells is expressed. In this new effort, both teams looked to better understand the role of epigenetics in the development and progression of cancerous tumors. In the first study, the researchers collected and studied tissue from various types of bowel cancer from 30 different patients. In all, they looked at 1,370 samples. Each was subjected to both whole-transcript RNA-seq and whole-genome sequencing. They were able to track down which tumors were purely DNA-based and which were not. They found that only 166 of them could be traced to underlying genetics. In the second study, the researchers used spatially resolved, paired whole-genome and transcriptome sequencing on multiple tissue samples taken from the same tumor. They found evidence showing that the majority of variations they identified could not be blamed on underlying genetics. Both teams acknowledge that their work did not prove that epigenetics leads directly to changes in the behavior of cancers, but both found evidence suggesting that is the case. They also both note that much more work is required to better understand the role that epigenetics plays in cancer development and progression.
10.1038/s41586-022-05311-x
Biology
Team maps genome of black blow fly; may benefit human health, advance pest management
Anne A. Andere et al. Genome sequence of Phormia regina Meigen (Diptera: Calliphoridae): implications for medical, veterinary and forensic research, BMC Genomics (2016). DOI: 10.1186/s12864-016-3187-z Journal information: BMC Genomics
http://dx.doi.org/10.1186/s12864-016-3187-z
https://phys.org/news/2016-11-team-genome-black-benefit-human.html
Abstract Background Blow flies (Diptera: Calliphoridae) are important medical, veterinary and forensic insects encompassing 8 % of the species diversity observed in the calyptrate insects. Few genomic resources exist to understand the diversity and evolution of this group. Results We present the hybrid (short and long reads) draft assemblies of the male and female genomes of the common North American blow fly, Phormia regina (Diptera: Calliphoridae). The 550 and 534 Mb draft assemblies contained 8312 and 9490 predicted genes in the female and male genomes, respectively, including >93 % of conserved eukaryotic genes. Putative X and Y chromosomes (21 and 14 Mb, respectively) were assembled and annotated. The P. regina genomes appear to contain few mobile genetic elements and an almost complete absence of SINEs; most of the repetitive landscape consists of simple repetitive sequences. Candidate gene approaches were undertaken to annotate insecticide resistance, sex-determining, chemoreceptor and antimicrobial peptide genes. Conclusions This work yielded a robust, reliable reference calliphorid genome from a species located in the middle of a calliphorid phylogeny. By adding an additional blow fly genome, the ability now exists to tease apart what might be generally true of calliphorids versus what is specific to two distinct lineages. This resource will provide a strong foundation for future studies into the evolution, population structure, behavior, and physiology of all blow flies. Background One in ten species on earth are flies (Diptera). They are the most derived group within arthropods, and have experienced an explosive radiation in the last 50 million years [1, 2]. Over the past decade, dipteran draft genomes, including those of the fruit fly (Drosophila melanogaster, [3]), the house fly (Musca domestica, [4]) and the malaria mosquito (Anopheles gambiae, [5]), have been published. Within Diptera, the family Calliphoridae, commonly known as blow flies, comprises ~1500 species [6, 7] and contributes 8 % of the species diversity in calyptrate flies [8]. Calliphorids are ubiquitous, distributed world-wide, and are important in the medical [9–12], veterinary/agricultural [13–16] and forensic fields [17, 18]. For example, blow flies are responsible for the transmission of human disease [19–22]. Mihalyi's danger-index was calculated for seven blow fly species in South America, with consideration for the synanthropic index. Six of the seven species posed a greater sanitary risk than the house fly [23], a known disease vector [24, 25]. Interestingly, other closely related blow fly species have been shown to be medically advantageous as a means of wound debridement, otherwise known as maggot therapy [9, 12, 26, 27]. In these cases, maggots physically debride the wound of decaying tissue while simultaneously excreting antimicrobial compounds [26, 28–36] effective against antibiotic-resistant bacteria such as methicillin-resistant Staphylococcus aureus (MRSA) [29, 37]. Recently, the first calliphorid genome, from the sheep blow fly (Lucilia cuprina, [38]), was released. The publication of the L. cuprina draft genome brings with it the potential for studying a group of flies that have evolved recently [1, 2, 39] and have adopted many different life histories [6].
For example, the sheep blow fly specimen(s) that were sequenced came from a location in which the species is predominantly, if not exclusively, an obligate ectoparasite (infestation of living vertebrate tissue by fly larvae) [38], and has presumably adapted under selective pressures from subsisting on carrion to infesting live animals. Calliphorid genomes will provide the resources needed to understand the basic biological processes of lineage-specific traits in myiasis-causing flies. The black blow fly, Phormia regina (Meigen), is a Palearctic fly with records throughout North America and Northern Europe, and is the dominant carrion fly for most of Canada and the United States [40]. We chose to examine the genome of this species because 1) it plays an important role in ecosystems via carrion decomposition and nutrient recycling [41], 2) it is abundant in North America, and 3) it exhibits no specialized parasitic adaptations or unusual sex determination strategies (i.e. it is not monogenic), though the sex determination strategy of Phormia regina is largely unknown. Like other calliphorid flies, P. regina contains 2n = 12 chromosomes, including heteromorphic sex chromosomes [42, 43]. These characteristics make it a good candidate for comparisons with species that have more specialized life histories. Furthermore, providing an additional reference genome from Calliphoridae will allow for a more complete understanding of the clade and the adaptive processes that take place within it. We sequenced the male and female genomes (~40X), allowing us to characterize sex chromosomes and sex determining pathways, as well as the evolutionary relationships of chemoreceptors, antimicrobial peptides and insecticide resistance pathways in relation to other calliphorids and dipterans. Methods Genome sequencing – short reads Genomic DNA was extracted from whole flies using the DNeasy Blood and Tissue DNA Extraction kit (Qiagen Inc., Valencia, CA) and pooled from five female and five male Phormia regina flies housed in a laboratory colony (approximately 4–6 months old). The founders originated from Indianapolis, IN (39.7684° N, 86.1581° W) and were collected during the summer of 2012. The extracted DNA from each individual was quantified using a Qubit fluorometer (ThermoFisher Scientific, Grand Island, NY) and mixed in equal proportions to yield the two pooled extracts, one for each sex. DNA libraries were prepared using TruSeq DNA sample preparation (Illumina, San Diego, CA) and sequenced (2 × 100 bp) using one half lane of an Illumina HiSeq2000 platform at the Purdue University Genomics Core Facility (West Lafayette, IN). Additional 454 reads were obtained as described in [44]. Genome sequencing – long reads Genomic DNA was extracted from a whole single male P. regina specimen that had been in colony for >15 generations (a different colony from above, but the same originating location, Indianapolis). DNA library preparation and sequencing were performed according to the manufacturer's instructions, using the P6-C4 sequencing enzyme and chemistry, at the Icahn School of Medicine at Mount Sinai Genomics Core Facility. 14 % of the input library eluted from the agarose cassette and was available for sequencing. In all cases, this yield was sufficient to proceed to primer annealing and DNA sequencing on the PacBio RSII instrument (Pacific Biosciences, Menlo Park, CA).
SMRTcell libraries were placed onto the RSII machine at a sequencing concentration of 150 pM and configured for a 240-min continuous sequencing run. Sequencing achieved a 7401 bp subread N50 across a total of 1.5 Gb of data comprising 268,000 reads from 2 SMRTcells. Due to the high error rate of reads generated from PacBio sequencing [45], error correction was performed using the Correct PacBio Reads tool of CLC's Genome Finishing Module plug-in v1.5.1 (Qiagen Inc.) on a local workstation. Additional error correction was also performed using the SMRT Analysis PacBioToCA correction module, with the assistance of the high-quality Illumina reads, using default settings. Genome processing and assembly Genome processing and assemblies were accomplished using CLC Genomics Workbench v6.0.5 (CLC-GWB; Qiagen Inc.). However, in order to evaluate the effectiveness of the CLC genome assembler, preliminary assemblies were also generated using the de novo genome assemblers Velvet [46] and SOAPdenovo [47]. The CLC assemblies were accomplished using a desktop computer with enhanced memory (32 GB RAM), whereas the Velvet and SOAPdenovo assemblies were performed using a large-memory supercomputing cluster (Indiana University, Bloomington, IN). The male and female Illumina short reads were initially assembled into two draft genomes with the three assemblers. Each assembly was carried out using a range of kmer values until the 'best' assembly was captured, as determined by contig number and contig N50 size (the kmer values varied with each assembly). The CLC-GWB de novo assembly using only Illumina reads resulted in the smallest number of contigs and the longest N50 (Table 1). Given these results, we decided to rely solely on CLC-GWB for the remainder of our analyses and for any additional assemblies produced using additional reads (454 and PacBio). Table 1 Comparative assembly statistics of preliminary assemblies generated using CLC-GWB, Velvet and SOAPdenovo Full size table The pipeline for assembly using the Illumina and 454 reads on CLC-GWB was as follows: low-quality reads (phred scores < 20) and adaptor sequences were removed, duplicate reads were removed and overlapping pairs were merged (mismatch cost was set to 2 and gap cost to 3). To remove extraneous or contaminating DNA, we used a filtering pipeline that included the mapping, and subsequent removal, of the above processed reads to 1405 phage (downloaded 03/2014) and 595 bacterial genomes (NCBI, downloaded 03/2014). Further filtering removed mitochondrial reads, which were assembled separately using the methods described below. Approximately 0.38 and 0.49 % of reads were identified as bacteriophage and bacterial contaminants, respectively. Following the removal of these contaminants, 5 % of overlapping paired reads were merged and 1 % of duplicates and low-quality reads were removed, leaving a total of 253,233,928 male and 254,306,608 female reads for downstream analyses. Preliminary assembly iterations involved further optimization of kmer values (ranging from 24 to 60 nucleotides) and bubble sizes (ranging 100–1000 bp). Assemblies were evaluated based on the total number of assembled contigs, estimated genome sizes, contig N50 values, and completeness as per CEGMA v2.4.010312 (see below, [48]).
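As a point of reference for one of these evaluation criteria, a minimal sketch of the contig N50 statistic (not part of the original pipeline, which used the assemblers' built-in statistics): the N50 is the contig length at which contigs of that size or larger contain at least half of the total assembly.

# Contig N50: length of the contig at which the cumulative length of
# contigs, sorted from longest to shortest, first reaches half the assembly size.
n50 <- function(contig_lengths) {
  s <- sort(contig_lengths, decreasing = TRUE)
  s[which(cumsum(s) >= sum(s) / 2)[1]]
}

n50(c(100, 200, 300, 400))  # returns 300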
Once optimal kmer sizes were determined (45 bp for each individual genome), reads were mapped back to the assemblies using CLC-GWB (mismatch cost of 2, insertion cost of 3, deletion cost of 3, length fraction of 0.5, similarity fraction of 0.8, bubble size 300 bp). PacBio (error-corrected) and 454 reads were combined with the Illumina reads to create a hybrid de novo assembly by using them to scaffold the contigs using CLC's Join Contigs Tool from the Genome Finishing Module plug-in. Mitochondrial genome assembly The mitochondrial genomes of eight calliphorid flies (Cochliomyia hominivorax, NC_002660; Protophormia terraenovae, NC_019636.1; Chrysomya albiceps, NC_019631.1; Chrysomya bezziana, NC_019632.1; Chrysomya rufifacies, NC_019634.1; Chrysomya megacephala, NC_019633.1; Lucilia sericata, NC_009733; Lucilia cuprina, NC_019573.1) were obtained from NCBI ( ) and used as local reference genomes for the pooled male and female reads. Reads that mapped to these reference genomes were then extracted and assembled using CLC-GWB v7.0.3 to generate a mitochondrial genome assembly. The mitochondrial genome sequence was 99 % similar to a previously published genome from P. regina (KC005712, 15,513 bp), 93 % to Protophormia terraenovae (JX913743.1, 15,170 bp), 92 % to Chrysomya megacephala (AJ426041, 15,831 bp), 91 % to Lucilia sericata (AJ422212.1, 15,945 bp) and 90 % to Cochliomyia hominivorax (AF260826, 16,022 bp), among others. For all calliphorid genomes, sequence similarity was 90 % or greater. All mitochondrial amino acid sequences were 100 % identical to other calliphorid mitochondrial protein sequences. Genome assembly assessment Assembly statistics were calculated using CLC-GWB and the genome assessment tool QUAST v3.1 [49]. To evaluate the completeness of the genome assemblies, CEGMA v2.4.010312 [48] was used to detect the number of complete and partial core eukaryotic genes present in the assembled genomes. This analysis was completed using GeneWise v2.4.1, NCBI-BLAST+ v2.2.28 and geneid v1.4.4 to return DNA sequences of each prediction and their associated statistical reports. Gene prediction & gene ontology AUGUSTUS [50] was used for ab initio prediction of gene sequences based on reference Drosophila melanogaster sequences, as it is an extensively studied and annotated genetic model organism. Predicted protein sequences were annotated by BLASTp v2.2.28+ using a non-redundant protein BLAST database and an E-value cutoff ≤ 1e-5. Comparative analysis of the male and female gene sets was performed by using CD-HIT-2D v4.5.6 [51] to compare the two protein datasets (90 % identity, word size (-n) of 5, and length difference cutoff (-s2) of 90 %). The unique protein sequences of the male were blast searched (BLASTp, E-value cutoff ≤ 1e-10) against the complete predicted amino acid sequences of the female to confirm their uniqueness. The same was done for the female protein set against the male protein set. Proteins without hits were assumed to be unique to each sex. Functional characterization of the predicted gene sequences was performed using default settings in Blast2GO v3.1.3 [52]. Gene ontology (GO) terms were assigned to the annotated sequences. The GO slim functionality in Blast2GO was used to simplify the annotation into functional categories, and the proteins were categorized at level 2 into the three main GO classifications of biological process, cellular component and molecular function.
InterProScan [53] statistics and KEGG [54] map pathways were also extracted from Blast2GO v3.1.3 using default values. GO terms from biological processes for each of the unique gene sets were summarized, after the removal of redundant GO terms, with the web server REViGO (reduce and visualize gene ontology), which creates a visual representative subset of the terms using a clustering algorithm based on similarity measures [55]. The allowed similarity was set to 0.5 (small) and the database of GO terms selected was from D. melanogaster. Sex chromosome identification In order to characterize the sex chromosomes, we used the chromosome quotient (CQ) approach [56], which discovers sex chromosome sequences by stringently aligning the male and female reads onto each other's genomes. The stringent alignment required a whole read to map onto the reference contigs with zero mismatches, in order to reduce the number of false positives [56]. The female-to-male ratio of the alignments was then used to distinguish contigs from the X or Y chromosome. Male contigs with a CQ below an arbitrary threshold of 0.05 were grouped as putative Y chromosome contigs, and female contigs with a CQ between 1.9 and 2.5 were grouped as putative X chromosome contigs. The sex chromosome contigs were then blast searched against an arthropod database (BLASTn, E-value ≤ 1e-5) to determine any homology with other insects in the database. The contigs were also compared against the well-characterized sex chromosomes of D. melanogaster (tBLASTx, E-value ≤ 1e-5). Putative gene approaches All candidate genes were discovered using one or all of three approaches. First, genes of interest (genes in particular pathways or associated with potential adaptive traits; see below) were acquired from Flybase [57] using queries for specific genes, or for genes associated with specific GO terms. Contigs with hits were identified via local blast (BLASTn) with an E-value cutoff of ≤ 1e-5. Gene sequences were individually annotated for gene structure using the web server version of the AUGUSTUS prediction tool [50], aligned using MUSCLE [58] and viewed with MVIEW [59]. For comparison purposes with other calliphorid or dipteran species, if the gene sequences were available in Genbank, they were acquired and included in our nucleotide (and subsequent predicted amino acid) sequence alignments. If BLASTn searches with Drosophila sequences failed to produce hits, a second approach to discovering candidate genes was to use keyword searches in our annotated gene dataset. A third approach was to use tBLASTx and homologous sequences from other taxa (such as Lucilia or Musca) against the P. regina genomes. Sex-determining genes Putative sex determining genes were isolated and characterized by querying a set of known genes (transformer (tra) – CG16724, transformer2 (tra2) – CG10128, sex lethal (sxl) – CG43770, doublesex (dsx) – CG11094, fruitless (fru) – CG14307, daughterless (da) – CG5102, and maleless (mle) – CG11680) from D. melanogaster against our male and female assemblies.
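A rough sketch of such a candidate-gene query, wrapped in R (hypothetical file and database names; assumes BLAST+ is installed and a nucleotide database has been built from the assembly with makeblastdb):

# Local BLASTn of D. melanogaster query sequences against a P. regina assembly,
# keeping tabular (outfmt 6) hits at the E-value cutoff used throughout (1e-5).
run_candidate_blast <- function(query_fasta, db = "p_regina_female") {
  out <- tempfile(fileext = ".tsv")
  system2("blastn", c("-query", query_fasta, "-db", db,
                      "-evalue", "1e-5", "-outfmt", "6", "-out", out))
  read.delim(out, header = FALSE,
             col.names = c("query", "contig", "pident", "length", "mismatch",
                           "gapopen", "qstart", "qend", "sstart", "send",
                           "evalue", "bitscore"))
}

# hits <- run_candidate_blast("dmel_sex_genes.fasta")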
Queries for the tra and tra2 gene sequences did not return contig hits using BLASTn; we therefore used the coding sequences from closely related blow fly species (Lucilia cuprina – FJ461621.1 and FJ461620.1, Cochliomyia macellaria – JX315619.1, Cochliomyia hominivorax – JX315618.1 and Lucilia sericata – JX315620.1), as well as sequences from other calliphorid genomes (unpublished) for which we have transcriptomic data, using discontinuous megablast. Chemoreceptors D. melanogaster gene sequences of odorant binding proteins (OBPs), odorant receptors (ORs), gustatory receptors (GRs) and ionotropic receptors (IRs) were used as query sequences for local blast searches against the P. regina genome (E-value cutoff of ≤ 1e-5) using tBLASTx (Additional file 1: Table S1). Protein sequences of the ionotropic receptor IR25a were acquired from NCBI for the following species: Lucilia cuprina – KNC28739, Calliphora stygia – AID61273, Stomoxys calcitrans – XP013104244, Musca domestica – NP001273813, Bactrocera oleae – XP014086336, Ceratitis capitata – XP004530416, and Drosophila melanogaster – NP001260049. Antimicrobial peptides A set of immune-related genes obtained from flybase.org was used to query the Phormia regina genome for antimicrobial peptides (attacins, cecropins, defensins, diptericins; Additional file 2: Table S2). A second approach simply queried our BLASTp results (from the predicted genes) to identify putative immune-related genes with homologs to other immune-related genes in insects. Insecticide resistance genes Genes associated with the metabolism of foreign material (xenobiotics) are primarily cytochrome P450s (Additional file 3: Table S3), glutathione S-transferases (GSTs; Additional file 4: Table S4) and esterases/hydrolases (Additional file 5: Table S5). These genes were identified by manually searching the BLASTp results from the annotation step of the predicted genes, and the KEGG pathways, for terms that included cytochrome P450s, GSTs and esterases. Repetitive elements Repetitive elements in the male and female Phormia regina genomes, as well as in the putatively identified X and Y chromosomes, were identified using RepeatMasker [60] and a library of all known dipteran transposable elements (TEs; RepBase; accessed 14 March 2015). In addition to known transposable elements, RepeatMasker searches were used to identify low-complexity regions, including mini- and microsatellite sequences. Output from RepeatMasker was used to quantify overall repeat content and an accumulation profile. For the accumulation profile, which reflects the relative rates and timing of TE activity in a genome, the Kimura 2-parameter [61] distance between each transposable element insertion and the assumed ancestral sequence was calculated using the calcDivergenceFromAlign.pl script packaged with RepeatMasker. Results and discussion Genome assembly Raw reads and genome assemblies have been submitted to GenBank (BioProject ID PRJNA338752, accession numbers MINK00000000.1 and MINJ00000000.1 for the male and female genomes, respectively). Mitochondrial reads (8,378,416) were removed from the main genomic dataset and assembled into an mtDNA genome (15,801 bp; Additional file 6: Figure S1, GenBank accession KX853042). In order to refine and scaffold our assemblies, we added longer 454 (average read length 344 bp) and error-corrected PacBio reads (average read length 5698 bp). We repeated the de novo assembly process in CLC-GWB using a range of kmer values.
Our 'best' hybrid assembled genomes combined smaller numbers of contigs with longer N50s (Table 2). The male (534 Mbp) and female (550 Mbp) genomes had average coverages of 44X, with >97 % of the reads mapping back to the genomes. Table 2 Final draft genome assembly statistics of the male and female genomes following the addition of 454 and PacBio reads, including a measure of the robustness of the assembly in the number of core eukaryotic genes assembled (CEGMA) Full size table These values are larger than the experimentally estimated sizes of 529 Mbp and 517 Mbp for the female and male P. regina, respectively [43]. This is likely due to the presence of repetitive sequences that do not assemble well [62], as well as the presence of allelic variation due to pooled sequencing of five male and five female individuals [63]. The robustness of the protein-coding portion of the assembly was assessed using CEGMA [48], where 93.95 and 96.77 % of complete core eukaryotic genes were identified in the female and male genomes (see Additional file 7: Table S6 for the completeness report). Gene prediction and ontology A total of 9490 and 8312 full-length genes were predicted by AUGUSTUS [50] (Table 3) in the male and female genomes, respectively. The predicted genes in both sexes comprised a total of ~30,000 exons and ~20,000 introns (Table 3). The total number of predicted protein-encoding genes in our assembled genomes was small compared to other recently sequenced dipterans such as Lucilia cuprina (14,544 genes) [38] and Musca domestica (15,345 genes) [4]. This may be due to the high stringency we used for gene prediction, where only complete genes (gene sequences with start and stop codons present within individual contigs) were allowed. With a more contiguous version of the genome, and inclusion of transcriptome data in the prediction process, the predicted gene count will probably increase. For comparison, we predicted complete genes in the scaffolded version of the L. cuprina genome (ASM118794v1) and found 10,681 genes (compared to the expected 14,544 genes, data not shown); these results demonstrate that our pipeline for predicting genes is more conservative. Table 3 Male and female P. regina gene predictions including the total number of complete genes, the genic structure characteristics (number and length distribution of the intronic and exonic regions), the proportion of genes that produced an NCBI result, and the proportion with characterized identifiable protein domains (InterProScan) Full size table In order to compare the genic structures of the predicted genes with other flies, the total counts and average lengths of the exons and introns were compared to recently published genomes of the common housefly M. domestica [4], the blow fly L. cuprina [64] and the fruit fly Drosophila melanogaster (Table 4). The average lengths of gene and intron sequences of L. cuprina and M. domestica are approximately double those of P. regina. However, the average lengths of the exons are similar among the three species. The length disparity is likely due to the scaffolded nature of the Musca and Lucilia genomes, which contain a large number of N's as placeholders – thus giving rise to seemingly larger introns. Table 4 An overview comparing the genic structure and statistics in P. regina (P.reg), L. cuprina (L.cup) and M.
domestica (M.dom) genome assemblies Full size table A total of 7792 (94 %) and 8789 (93 %) of the predicted genes in the female and male, respectively, had homology to sequences in GenBank with E-values less than 1e-5. Therefore, approximately 6 % of the predicted genes (701 male, 520 female) have no apparent homologs in the arthropod database and could be unique to P. regina. Additionally, annotation by InterProScan identified characterized protein domains in 83.12 and 83.91 % of the female and male predicted genes, respectively. The species most represented in the BLASTp results for both sexes were calyptrate flies, specifically the sheep blow fly (Lucilia cuprina), followed distantly by the stable fly (Stomoxys calcitrans) and the common house fly (Musca domestica), reflecting the phylogenetic relatedness among these species [65]. A total of 5681 (68 %) gene sequences from the female and 5806 (61 %) gene sequences from the male had hits to L. cuprina gene sequences, while ~7 % of gene sequences from both sexes had top hits from the stable fly and M. domestica (Additional file 8: Figure S2). The two most abundant Biological Process GO categories for both sexes were cellular processes (female 72.4 %, male 73.7 %) and metabolic processes (female 60.4 %, male 67.0 %) (Fig. 1). The GO terms associated with Molecular Function were mainly assigned to binding (female 41.8 %, male 42.5 %) and catalytic activity (female 32.3 %, male 35.4 %), while the top GO terms associated with Cellular Component were assigned to cell (female 71.3 %, male 70.5 %) and organelle (female 48.9 %, male 46.4 %) (Additional file 9: Table S7). The overall distribution of genes within GO classifications in P. regina was very similar to that of M. domestica and D. melanogaster [3] (Additional file 10: Table S8). Fig. 1 GO term classification of the 3 functional categories (biological processes, molecular function and cellular component) of the predicted genes in the male and female genome assemblies Full size image Categorization of molecular-level interactions extracted from the annotated GO terms was performed by the KEGG component in Blast2GO. A total of 111 and 107 KEGG pathways from the GO-slim Blast2GO analysis were identified for the male and female gene sets, respectively. A visual representation of the pathways with greater than 10 sequences is shown in Fig. 2. The top pathways for both sexes were purine metabolism, thiamine metabolism and biosynthesis of antibiotics. A full list of the KEGG pathways can be found in Additional file 11: Table S9. Fig. 2 The top 35 KEGG biological pathways of the male and female gene sets extracted from the Blast2GO analysis Full size image Sex chromosomes Typically, sex determination is carried out through the heteromorphic XX/XY system, in which Y-linked male determining genes, or the presence of a Y-linked male determining factor, are proposed to repress female development, causing male sexual differentiation and thus promoting the male phenotype [66, 67]. This has been observed in the Mediterranean fruit fly (Ceratitis capitata), the olive fruit fly (Bactrocera oleae) and the common house fly (M. domestica) [68]. Putative sex chromosomes for both the male and female genomes were isolated using the chromosome quotient (CQ) approach [49, 56]; 9134 and 10,721 contigs were identified as putatively belonging to the X and Y chromosomes, respectively.
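A minimal sketch of this CQ thresholding (hypothetical per-contig counts of stringently, zero-mismatch, aligned reads from each sex; in the published approach the quotient is computed separately against each sex's assembly):

# CQ = female/male alignment ratio per contig; CQ < 0.05 suggests Y,
# 1.9 <= CQ <= 2.5 suggests X. Contigs with zero male hits give Inf
# and fall through to the unassigned class.
classify_cq <- function(female_hits, male_hits) {
  cq <- female_hits / male_hits
  ifelse(cq < 0.05, "putative_Y",
         ifelse(cq >= 1.9 & cq <= 2.5, "putative_X", "unassigned"))
}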
Summing the contig sizes gives a putative X chromosome of ~21.2 Mbp and a putative Y chromosome of ~14.5 Mbp, a difference that approximates the measured ~9 Mbp size difference between male and female flies [43]. A direct comparison with the Drosophila X and Y chromosomes yielded 608 (~7 %) female contigs with homology to the X chromosome, and 233 male contigs (~2 %) with homology to the Y chromosome. BLASTn searches against the arthropod database (Additional file 12: Table S10) identified homologous sequences for 47 % (4321 contigs) of the X chromosome contigs and 26 % (2789 contigs) of the Y chromosome contigs. Most putative Y chromosome contigs did not yield BLAST hits. A reasonable explanation for the limited number of BLAST hits is that very few model species have characterized and annotated Y chromosomes in the database, owing to their repeat-rich heterochromatic sequences [69, 70]. The majority of the BLASTn hits for the X and Y chromosomes corresponded to repetitive sequences. For example, among the X chromosome contigs that produced hits, 58 % hit multiple BAC sequences in the Calliphora vicina achaete-scute complex, AS-C (accession numbers LN877230-LN877235). Even though these sequences contain the AS-C complex genes, the BLAST hits likely correspond to the repetitive regions that make up 20–25 % of these BACs [71]. Furthermore, in Drosophilidae, the AS-C complex is found on the X chromosome, where scute plays an additional role in sex determination, acting as an X chromosome signal element [64, 72, 73]. The presence of sequences homologous to the AS-C complex in both the male and female putative sex contigs is an indicator that this complex may also be involved in the sex determination pathway in P. regina. Unique sex genes A comparative analysis between the male and female predicted genes showed that 1480 genes were unique to the male and 727 predicted genes were unique to the female. These unique gene sets are likely a combination of sex-biased genes that drive phenotypic differences leading to sex-specific developmental trajectories [74] and genes that appear in only one assembly because a complete gene model was predicted in one sex but not the other. Approximately 73 % of the male and female unique genes had homology to sequences in NCBI's Arthropoda NR database. Gene ontology analysis of the unique gene set for each sex in the biological processes category produced a total of 1589 and 1841 GO terms for the male and female, respectively. Comparing the two sets of GO terms indicated that 517 of the GO terms were unique to the male while 769 were unique to the female (see the short sketch below for the underlying set comparison). The long lists of unique GO terms for each sex were summarized by clustering GO terms that belong to the same family and removing redundant terms using the web server REViGO [55]. An example of a clustered category in the male is flocculation (Additional file 13: Figure S3A), which includes the GO terms sperm motility, energy taxis and positive chemotaxis. The genes annotated with these GO terms may be specific to males and involved in sperm chemotaxis, in which sperm from the male fly is guided by a chemoattractant excreted by the female to fertilize an oocyte [75]. One of the categories clustered in the female (Additional file 13: Figure S3B) is response to xenobiotic stimulus.
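As referenced above, the unique-term counts reduce to a simple set comparison; the sketch below assumes one GO identifier per line in two hypothetical input files (the clustering and redundancy reduction themselves were done with REViGO).

```python
# Minimal sketch of the male/female GO-term comparison; file names are
# hypothetical stand-ins, not the files distributed with the paper.

def load_terms(path):
    with open(path) as handle:
        return {line.strip() for line in handle if line.strip()}

male_go = load_terms("male_bp_go_terms.txt")      # e.g., 1589 terms
female_go = load_terms("female_bp_go_terms.txt")  # e.g., 1841 terms

male_unique = male_go - female_go    # 517 terms reported in the text
female_unique = female_go - male_go  # 769 terms reported in the text
shared = male_go & female_go

print(len(male_unique), len(female_unique), len(shared))
```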
Genes in this category are expressed during an immune response or during exposure to toxic foreign compounds (xenobiotics), producing enzymes such as cytochrome P450s and acetyl-CoA synthetases. These female-specific genes may be involved in protecting female flies against the diverse array of pathogens they encounter while laying eggs, or against components of male ejaculates after mating [76]. Similar GO categories connected to immune response were found to be enriched in female D. melanogaster flies compared to males [77]. Sex determining genes In most dipterans investigated thus far, sex determination is regulated by a cascade of genes that interact hierarchically during development, with the product of one gene controlling the sex-specific splicing of the gene immediately downstream [66, 78]. Some of the key players in the dipteran sex determination pathway include the genes sex lethal (sxl), doublesex (dsx), transformer (tra), transformer 2 (tra2), fruitless (fru), daughterless (da) and maleless (mle) (Additional file 14: Table S11). The gene tra is one of the main sex-determining genes in various insects and a major source of variation in sex-determining mechanisms [79]. We queried both the male and female P. regina genomes for homologs of the D. melanogaster sex-determining genes but found none. We then searched for P. regina tra and tra2 using homologs from closely related species - L. cuprina, Cochliomyia hominivorax, C. macellaria and L. sericata. This search also failed to produce any homologs to tra or tra2; instead, it produced homologs to hypothetical L. cuprina proteins (FF38_00928 and FF38_09888, respectively). The only homology shared between our putative hits and those in the reference databases is due to the presence of zinc finger domains. For tra, even Cochliomyia macellaria and C. hominivorax share only 60.8 % sequence identity. Our putative tra gene in Phormia regina shares 48.7 % and 52.3 % sequence identity with Cochliomyia and Chrysomya rufifacies, respectively (data not shown). For tra2, a query of our genomes yielded hundreds of hits with E-values less than 1e-52, suggesting not an assembly error but rather the detection of a common domain. Additional approaches are necessary to more fully annotate these genes in P. regina. The gene doublesex (dsx) is another transcription factor that controls the activity of genes involved in sexual differentiation [66, 80]. Doublesex is differentially spliced, encoding male and female sex-specific dsx proteins [66]. Homologous sequences of dsx were detected in both sexes with E-values less than 1e-42. The sex-determining gene daughterless (da) encodes a transcription factor of the basic helix-loop-helix (bHLH) family of DNA-binding proteins [81]. Da is essential for neurogenesis, oogenesis and sex determination [82]. We annotated da in P. regina for both sexes. The predicted da gene was 4494 bp and 8479 bp long in the male and female, respectively, compared to 5124 bp for the D. melanogaster da gene (FBgn0267821) (Fig. 3). The difference between the two appears to lie in the 5′ UTR region, in which some noncoding sequences are predicted in the female, whereas the male's 5′ UTR was not completely assembled (the data are missing, Fig. 3). Fig. 3 Predicted gene structure of the sex determining gene daughterless for the female (F) and male (M) P. regina.
The red boxes represent exons, the grey boxes (inclusive of the red) represent the mRNA, and the black lines represent introns. The image is not drawn to scale. The coding sequences of da for both sexes were nearly identical (99.5 %) and predicted to be 2112 bp long, comparable to 2133 bp in D. melanogaster (J03148.1) and 2278 bp in L. cuprina (JRES01000453.1 – scaffold966, locus tag FF38_09934). A multiple sequence alignment of the coding sequences demonstrates a high degree of variation, with only 87.83 % similarity to L. cuprina and 57.22 % to D. melanogaster (Additional file 15: Figure S4). Protein sequences of da were also compared among the three species. The da protein was 726 amino acids long for both sexes in P. regina, compared to 758 amino acids in L. cuprina (KNC31067.1) and 710 amino acids in D. melanogaster (P11420). Amino acid sequence alignment shows L. cuprina to be 95 % identical to P. regina, while D. melanogaster is 59 % identical (Additional file 16: Figure S5). A conserved region shared among the three species is the helix-loop-helix domain (Additional file 17: Figure S6). The maleless (mle) gene is one of the regulatory genes required for dosage compensation of X-linked genes on the X chromosome of male flies [83]. Annotation of mle resulted in predicted gene sequences of 6375 bp and 6374 bp for the male and female, respectively, approximating the size of mle in D. melanogaster (6016 bp, JQ663522.1). The mle protein in P. regina was 1253 amino acids long in both sexes, comparable to D. melanogaster (1293 aa, AFI26242) with 72.27 % similarity to P. regina (Additional file 18: Figure S7). Chemoreceptor genes A fly's ability to detect and respond to chemical signals is integral to its survival. In particular, adult blow flies must be able to detect odors associated with decay immediately following death in order to be among the first insects to arrive and lay eggs. Following detection of carrion, an insect must be able to determine the quality of the resource and decide if it is suitable using a variety of gustatory receptors (GRs), ionotropic receptors (IRs), odorant receptors (ORs), and odorant binding proteins (OBPs). These four gene families are the main chemoreceptors functioning in the olfactory and gustatory systems of insects [84]. To determine the presence of chemoreceptors in the P. regina genomes, we queried D. melanogaster's chemoreceptor genes using tBLASTx. D. melanogaster has a predicted total of 68 GRs (through alternative splicing), 62 ORs [85], 65 genes encoding IRs [86] and 52 genes encoding OBPs [87]. The tBLASTx searches with the predicted D. melanogaster chemoreceptors (including alternatively spliced sequences) identified homologous sequences for a total of 61 GRs, 40 OBPs, 64 ORs and 63 IRs in the female assembled genome. These homologous sequences were detected in 28 contigs (61 GRs), 25 contigs (40 OBPs), 37 contigs (64 ORs) and 41 contigs (63 IRs) (Additional file 1: Table S1). In addition to the reception of chemical stimuli in olfaction, IRs have recently been found to be involved in thermosensation and the circadian cycle [88]. In comparison to ORs, the IR family in insects is relatively conserved, suggesting that it is an ancient chemosensory receptor family [89]. IR25a is one of the most highly conserved IRs across many species [90].
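The identity percentages quoted for da and mle above, and for IR25a below, come from pairwise comparisons over aligned sequences. The sketch here shows one common convention for that calculation (gap-only columns skipped); exact values depend on the alignment tool and the convention used.

```python
# Minimal sketch of pairwise percent identity over pre-aligned sequences.
# This is one of several common identity conventions, so reported
# percentages will vary slightly between tools.

def percent_identity(aligned_a, aligned_b):
    assert len(aligned_a) == len(aligned_b)
    matches = compared = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == "-" and b == "-":
            continue  # skip columns gapped in both sequences
        compared += 1
        if a == b and a != "-":
            matches += 1
    return 100.0 * matches / compared

print(percent_identity("MKT-LLIV", "MKTALL-V"))  # toy example: 75.0
```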
IR25a acts as a co-receptor with other odor-sensing IRs. In D. melanogaster, it is expressed in sensory neurons in the antennae and also in the proboscis [91]. We compared IR25a protein sequences from the female P. regina to L. cuprina, M. domestica, Calliphora stygia, Stomoxys calcitrans, D. melanogaster, C. capitata and B. oleae via multiple sequence alignment, followed by PhyML for tree building and TreeDyn for tree drawing [92–97]. Overall there was high similarity, with sequence identities ranging from 86 to 97 %. The predicted P. regina IR25a sequence was most similar to that of L. cuprina (97.19 % identity). The amino acid sequences in regions annotated as putative peptide binding sites were mostly conserved (Additional file 19: Figure S8). The remarkable sequence conservation of IR25a implies a unique, essential and evolutionarily conserved role for IR25a receptors across insect species (Additional file 20: Figure S9). Antimicrobial genes Blow flies develop in an environment of decomposing vertebrate tissue that is overrun with bacteria; as such, they not only compete with bacteria for resources but also need protection from infection. Insects in general have a diverse innate immune system that provides protection from various microbes that can inhibit their survival [4, 98, 99]. As a result, they possess different mechanisms that signal the expression of genes activating an antimicrobial defense system to fight bacterial and fungal infections [100]. Some blow fly species have also proven useful in wound debridement therapy, as they excrete many potent antimicrobial compounds [31, 33]. Two approaches were used to discover antimicrobial peptide genes: BLASTn using known D. melanogaster genes, and keyword searching for known antimicrobial peptide sequences against the annotation information assigned to the predicted protein sequences (Additional file 21: Table S12); the second approach is sketched below. Thirty-six Drosophila genes were queried and BLASTn hits were identified for 25. Using the BLASTp approach, 115 predicted protein sequences produced hits with homology to genes in four major insect signaling pathways involved in protection from bacterial and fungal infections: the Toll, immune deficiency (Imd), Janus kinase signal transducer and activator of transcription (JAK/STAT), and JNK pathways [98, 101]. These pathways recognize different types of microbes and induce the transcription of immune-related genes that degrade pathogens or act as signaling molecules [99]. Males and females had near-identical BLASTp results, so we selected the female genome for the annotation of the contigs and genes of interest. The Toll signaling pathway is activated by the presence of gram-positive bacteria or fungi, while the Imd pathway is mainly activated by gram-negative bacteria. Both pathways lead to the production of antimicrobial peptides to fight pathogens that cause infection [99, 102]. A number of genes involved in the Toll signaling pathway were predicted in the P. regina female genome, including Spaetzle, tube, toll, cactus and G protein-coupled receptor kinase 2 [103, 104]. This is consistent with the suggestion that the Toll signaling pathway may be involved in the antimicrobial defense system of blow flies, as it is in other insects [101, 103, 104]. Furthermore, pattern recognition receptors involved in pathogen recognition were identified in P. regina.
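The keyword screen mentioned above can be as simple as the following sketch; the annotation file name and keyword list are hypothetical stand-ins, not the exact terms used in the study.

```python
# Minimal sketch of a keyword-based screen for antimicrobial peptide genes,
# assuming a tab-separated table of predicted proteins with at least two
# columns: gene_id <TAB> description. Inputs here are illustrative.

import csv

KEYWORDS = ("attacin", "diptericin", "cecropin", "defensin",
            "relish", "pgrp", "gnbp", "toll", "spaetzle")

def keyword_hits(annotation_tsv):
    """Return (gene_id, description) pairs whose annotation matches a keyword."""
    hits = []
    with open(annotation_tsv) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            gene_id, description = row[0], row[1]
            if any(k in description.lower() for k in KEYWORDS):
                hits.append((gene_id, description))
    return hits

for gene_id, desc in keyword_hits("female_predicted_annotations.tsv"):
    print(gene_id, desc)
```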
These recognition receptors include the peptidoglycan-recognition proteins PGRP-LE, PGRP-LC and PGRP-SC2 and the gram-negative binding proteins GNBP1 and GNBP3. The NF-κB-like gene relish, which is involved in inducing the humoral immune response in Drosophila as well as antibacterial and antifungal factors [105], was also identified. Both relish and PGRP-LC participate in the Imd pathway. Similar to M. domestica and D. melanogaster, P. regina harbors the four antimicrobial peptide families: attacins, diptericins, cecropins and defensins. We found homology to genes related to antimicrobial humoral responses (par-1, cecA1 and tlk [106, 107]), as well as genes responsible for responses to bacteria (Gprk2 and relish [108–110]), all possessing kinase or peptidase activity capable of breaking down bacterial cell walls (Additional file 21: Table S12). These are likely produced and excreted in the salivary glands of the developing larvae as they feed on decaying tissue. A summary of the top BLASTn hits obtained by querying our female genome with the homologous D. melanogaster sequences is listed in Additional file 21: Table S12. The four antimicrobial families were represented by cecropin A1 (cecropin), iconoclast (defensin), Hephaestus (diptericin) and relish (attacin). These results, based on homology to immune-related proteins in other insects, imply that the immune signaling pathways of P. regina are similar to those of model insects and may have evolved to function under the harsh, unsanitary conditions to which P. regina is normally exposed. Xenobiotic resistance Enhanced metabolism of xenobiotics (foreign compounds) in insects, driven by the extensive use of insecticides, has led to the evolution of xenobiotic resistance and tolerance, creating a challenge for pest management [111]. The three major groups of genes producing metabolic enzymes that protect insects against plant defense compounds (allelochemicals) and insecticides are the cytochrome P450 monooxygenases, the esterases/hydrolases, and the glutathione S-transferases [112, 113]. The presence of these detoxifying enzymes likely helps P. regina withstand the high pathogen loads of decaying carrion. Cytochrome P450 genes encode the family of enzymes primarily associated with the metabolism of xenobiotics and resistance to most insecticides [114, 115]. They are also involved in catalyzing a range of chemical reactions important for developmental processes. A total of 41 and 44 predicted genes (Additional file 3: Table S3) were annotated via the D. melanogaster database as cytochrome P450 genes in the female and male P. regina gene sets, respectively. This is fewer than in any of the three species we compared: M. domestica has 146 P450 genes, D. melanogaster 90, and Glossina morsitans 72 [116]. However, when we applied our methods to the published L. cuprina predicted gene set, we recovered 57 P450 genes (data not shown). CYP4 and CYP6 were the predominant P450 families in both the male and female draft genomes (Additional file 3: Table S3), occupying approximately 36 and 30 % of the total P450 genes, respectively (in L. cuprina, they occupy 25 and 23 %, respectively). The predominance of the CYP4 and CYP6 families was also observed in the genomes of D. melanogaster [117] and M. domestica, where they occupy 50 and >60 % of the total cytochrome P450 genes, respectively.
The reduced number of P450 genes (typically assumed to be closer to 100 in insect genomes) is likely due to the stringent gene prediction criteria applied to our fragmented genomes (only complete genes were predicted). An increase in the expression or activity of metabolic enzymes belonging to the esterase and hydrolase families has also been linked to insecticide resistance [118] and correlated with resistance to two major insecticide classes, pyrethroids and organophosphates [113]. This is mainly due to the presence of ester bonds in most insecticides, which are hydrolyzed by esterases [119]. A total of 103 and 131 genes with hydrolase/esterase activities were predicted in the female and male draft genomes, respectively. Those common to both sexes included phosphodiesterases, thioesterases, carboxylesterases and phosphatases (Additional file 5: Table S5). Glutathione S-transferases (GSTs) are multifunctional enzymes involved not only in the detoxification of xenobiotic compounds but also in other physiological processes in insects, including intracellular transport and hormone biosynthesis [120]. In the detoxification process, they metabolize insecticides into water-soluble metabolites that are easily excreted [121, 122]. A total of 9 and 11 GST genes were predicted in the female and male P. regina genomes, respectively (Additional file 4: Table S4). Insecticide resistance is a growing problem [123] in many insects, including mosquitoes [124–126] and blow flies [127–129], and has in some cases been attributed to glutathione S-transferase activity [120, 123, 130–132]. Repetitive elements Repeat identification in the male and female genomes via homology-based searches identified close to 38 and 46 Mbp of repetitive DNA, accounting for 7.3 and 8.7 % of the male and female assemblies, respectively (Additional file 22: Table S13). Approximately 10.5 Mbp (male) and 18 Mbp (female) of the repetitive sequence consisted of transposable elements; the remainder comprised low-complexity sequences, including mini- and microsatellites. Several major transposable element superfamilies were identified in the P. regina genome, but the vast majority (>70 %) of elements belonged to five families or superfamilies: Jockeys, LOAs, Gypsys, Tc-Mariners, and Helitrons. DNA transposons and retrotransposons were present in roughly equal proportions. Low-complexity sequences were the dominant type of repeat, making up ~66 % of the repetitive content of the male and female assemblies. Dipteran species show significant variability in SINE content, but SINE elements appear to be almost entirely missing from the P. regina genome. For example, SINEs are absent in Drosophila [133] but present in high copy numbers in other dipterans [5]. Only 39 SINE insertions were identified in P. regina, most of which are distantly related or highly mutated versions of SINE-3_QC and SINE-4_QC from Culex quinquefasciatus, the southern house mosquito. In C. quinquefasciatus, SINE-3_QC and SINE-4_QC are present more than ten thousand times [134]. Based on the female (Fig. 4a) and male (Fig. 4b) accumulation profiles, transposable elements in P. regina appear to be old, with little accumulation in the recent past. In general, transposable element insertions of each family tend to be more numerous in the female genome assembly. Transposable elements present in the female assembly but absent from the male assembly follow a similar accumulation profile to the genomes as a whole (Fig. 4c), ruling out temporally biased accumulation in either sex.
Class II transposons, LINEs, and LTRs accumulated at similar times, given that the majority of elements in each group are between 37 and 45 % diverged from their putative ancestral partner. DNA transposons have been accumulating for a slightly longer period than LINEs and LTRs, with most element divergences ranging from 23 to 41 %. In all, >97 % of elements are >10 % divergent from their respective consensus elements. This implies either that minimal transposable element accumulation has occurred in the recent past, or that newly inserted transposable elements are being actively removed from the genome [135]. However, given the high divergence between potentially novel transposable elements in P. regina and the transposable elements in RepBase, it is possible that lineage-specific SINEs are present but unidentifiable using homology-based searches [136], and a full TE curation of the genome is necessary. Fig. 4 Transposable element accumulation in the female (a) and male (b) Phormia regina genome assemblies. Kimura 2-parameter distances (see the short sketch below) were calculated between transposable element insertions in the genome and the homologous elements in the dipteran Repbase library. Larger divergences indicate elements with larger mutation loads that were, by extension, deposited in the genome in the more distant past. Fewer than 40 SINE elements are present in either assembly and are not shown here. Transposable elements are slightly more abundant in the female genome assembly. The accumulation of female-specific repeats (c) follows that of the whole genome in general. Close to 1.7 Mb of the X chromosome (8.1 % of the total X chromosome) was derived from repeats, compared to ~390 kb of the Y chromosome (2.7 % of the total Y chromosome). For both chromosomes, more than half of the repetitive sequence consists of simple repeats (60 and 55 % for the X and Y chromosomes, respectively). The larger amount of repeat sequence on the X chromosome is likely due to its larger size; although the X chromosome is proportionally more repetitive than the Y, the difference is not large enough to be significant. Conclusions Although all impacts are impossible to foresee, we anticipate that four fields will benefit from these data: (1) insecticide resistance and/or sensitivity; (2) adaptive evolution in a rapidly evolving clade (Calliphoridae); (3) sex chromosome evolution; and (4) genotype-phenotype correlations of development rate variation. Insecticide resistance Four main calliphorids are serious agricultural pests causing myiasis (infestation by fly larvae): Chrysomya bezziana (Old World screwworm), Cochliomyia hominivorax (New World screwworm), Lucilia sericata (green bottle fly) and L. cuprina (sheep blow fly) [14, 137, 138], although other species have been implicated in secondary myiasis, including P. regina [139–143]. Two of these, Ch. bezziana and Co. hominivorax, are obligate parasites, while the two Lucilia species are either obligate or facultative ectoparasites, depending on where they reside [144]. The genome of the obligate ectoparasite L. cuprina (in Australia/New Zealand, this species does not appear to ever develop on carrion) was recently sequenced in an effort to determine how insecticide resistance has persisted and to develop novel pest management targets that would not harm beneficial insects.
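For reference, the Kimura two-parameter distance used in the transposable-element analysis above can be computed from the proportions of transitions (P) and transversions (Q) in a pairwise alignment; the sketch below assumes pre-aligned nucleotide sequences and ignores the saturation edge cases a production tool would handle.

```python
# Minimal sketch of the Kimura two-parameter (K2P) distance between a
# transposable element insertion and its Repbase consensus (pre-aligned).

from math import log, sqrt

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(aligned_a, aligned_b):
    sites = transitions = transversions = 0
    for a, b in zip(aligned_a.upper(), aligned_b.upper()):
        if a == "-" or b == "-":
            continue  # ignore gapped columns
        sites += 1
        if a == b:
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        transitions += same_class        # A<->G or C<->T
        transversions += not same_class  # purine <-> pyrimidine
    p, q = transitions / sites, transversions / sites
    # d = -(1/2) ln[(1 - 2P - Q) * sqrt(1 - 2Q)]
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

print(k2p_distance("ACGTACGTAC", "ACGTACGTGC"))  # one transition in 10 sites
```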
To better understand insecticide resistance and assess target efficacy, it is important to understand how the relevant genes are structured in closely related flies. The P. regina reference genome presented here will provide the opportunity to extract candidate genes, determine structure-function relationships, and produce new ligands with the potential to slow or eradicate these pest species. Adaptive evolution Calliphoridae includes approximately 1500 species and accounts for ~8 % of calyptrate species diversity [8]. Many lineages within Calliphoridae have evolved specialized adaptations. As the above example demonstrates, ectoparasitism has evolved at least twice within Calliphoridae, perhaps in response to the selective pressures of the usually ephemeral carrion resource: if an insect can arrive at a resource before it becomes available to its competitors, its genes have a greater probability of being maintained. P. regina does not have these features or life strategies, suggesting retention of the ancestral life history, and can thus serve as a robust reference genome against which more derived features can be understood. Sex chromosome evolution Many calyptrate flies utilize XY sex determination. In many other species sharing this system, the X chromosome is relatively stable, while the Y chromosome has experienced loss of function due to the long-term absence of recombination and the presence of repetitive DNA. The structure of the Y chromosome therefore reflects its approximate evolutionary age. With the individual sex genomes and putative sex chromosomes sequenced and assembled here, sex chromosome structure can be compared within the Calliphoridae, where some species have vastly different sex chromosome sizes (e.g., Lucilia cuprina male and female flies differ by nearly 100 Mbp [43]), and others, such as the monogenic fly Chrysomya rufifacies, show no difference in male and female genome sizes [43]. Additionally, this reference genome will be useful for studying sex-determining pathways in Calliphoridae, which could provide mechanisms to target for pest control [145]. Development rate variation As many of these flies are primary colonizers of vertebrate carrion [17], they have forensic uses in decomposition cases where the time of death cannot be determined using traditional physiological approaches. Collected larvae serve as a clock for the minimum time since death, as they will only colonize a body after death (excluding species known to cause myiasis). The age of the larvae is then estimated to give a minimum postmortem interval [146, 147]. The age, however, is estimated from reference data sets generated under controlled laboratory temperatures and conditions [148–150]. These models of development assume little to no population-level variation, even though such variation has been clearly demonstrated [151–153]. Thus, if blow flies are to be used for postmortem interval estimates, it is important to understand their fundamental developmental processes and to be capable of predicting the possible variation based on the genotypes of the individual larvae collected. Once again, the availability of a reference genome that differentiates male and female genomes [154] will be invaluable for understanding the molecular basis of development and its associated variation. This first draft of the P. regina genome represents a critical step in calliphorid genomics.
The accessibility of the P. regina genome will quicken the pace of exploration, evolutionary comparison and developmental analysis among blow fly species and other dipterans. In time, these findings could have significant impacts on the agricultural, medical and forensic fields. Abbreviations
aa: Amino acid
AS-C: Achaete-scute complex
BAC: Bacterial artificial chromosome
bHLH: Basic helix-loop-helix
bp: Base pair
CEGMA: Core eukaryotic genes mapping approach
CoA: Coenzyme A
CQ: Chromosome quotient
D. melanogaster: Drosophila melanogaster
da: daughterless
dsx: doublesex
fru: fruitless
Gb: Gigabase
GNBP: Gram negative binding proteins
GO: Gene ontology
GRs: Gustatory receptors
GSTs: Glutathione S-transferases
Imd: Immune deficiency
IRs: Ionotropic receptors
JAK/STAT: Janus kinase signal transducer and activator of transcription
KEGG: Kyoto Encyclopedia of Genes and Genomes
L. cuprina: Lucilia cuprina
LINEs: Long interspersed nuclear elements
LTRs: Long terminal repeats
Mb: Megabases
Mbp: Million base pairs
mle: maleless
MRSA: Methicillin-resistant Staphylococcus aureus
OBPs: Odorant binding proteins
ORs: Odorant receptors
P. regina: Phormia regina
PGRP: Peptidoglycan-recognition proteins
REViGO: Reduce and visualize gene ontology
SINEs: Short interspersed nuclear elements
sxl: sex lethal
tra: transformer
tra2: transformer 2
UTR: Untranslated region
Researchers at the School of Science at Indiana University-Purdue University Indianapolis have sequenced the genome of the black blow fly, an insect commonly found throughout the United States, southern Canada and parts of northern Europe. Black blow flies have environmental, medical and forensic uses, functioning as nature's recyclers, as wound cleansers and as forensic timekeepers. They have a blue or green sheen and are similar in size to common houseflies. The female genome was found to contain 8,312 genes; the male genome had 9,490 genes. "There is nothing special about black blow flies (scientific name Phormia regina), but that lack of uniqueness is why scientists are interested in studying them," said Christine Picard, assistant professor of biology and forensic scientist, who led the team that sequenced the genome. Picard offers the following analogy to explain her research interest in black blow flies: "If you are interested in studying a particular human disease, for instance, you don't start by studying people with the disease. You start by studying healthy individuals, and then you look for differences between the healthy and the sick to make sure any differences that you observe are actually due to the disease and not due to other factors. "The first step is to figure out what the normal is. That's why I have been studying this fly for a decade and we have been working on sequencing its genome for the past five years: because this is an essentially unremarkable insect. It doesn't do anything that is abnormal. Having sequenced the black blow fly genome, we are providing a major resource for all of the researchers who are studying other insects that have unusual or dangerous characteristics, such as a species of fly that fatally attacks livestock." Black blow flies are active insects that perform three tasks that benefit humans: recycling carrion, debriding human wounds and laying eggs on freshly dead bodies. They have no harmful or parasitic behaviors. Black blow flies feed on decaying flesh and help consume dead vertebrates throughout the environment. Black blow fly larvae, or maggots, are used medically to debride human wounds, as the insects physically remove dead tissue while simultaneously excreting antimicrobial compounds into the wound. With an excellent sense for smelling recently dead tissue, black blow flies are usually the first insects to colonize a human body, frequently within minutes after death. Females lay eggs on recently deceased corpses, setting a "clock" that enables forensic investigators to estimate the postmortem interval, or minimum time since death. Other gene-mapping projects have been conducted by large, often international consortiums, with one group working on one aspect and others on different aspects, as a collaborative project. IUPUI's black blow fly genome sequencing was primarily conducted over four years by biology doctoral candidate Anne Andere under Picard's mentorship, with input from Texas Tech University biologists R.N. Platt II and David A. Ray. "Genome sequence of Phormia regina Meigen (Diptera: Calliphoridae): Implications for medical, veterinary and forensic research" is published online in BMC Genomics.
Graduate student Andere is the first author. Picard, the corresponding author, is assistant professor in the Department of Biology and the Forensic and Investigative Sciences Program. She works at the interface of forensic entomology and molecular biology. Insects constitute slightly more than half of all living species on Earth. In 2011, i5K, an initiative to sequence the genomes of 5,000 insects and other arthropods within five years, was launched. Thus far, however, only 239 arthropod genomes have been sequenced; the black blow fly genome now joins that list. "Now that we have described the genome," said Picard, "I plan to continue working toward a better understanding of black blow fly population variation from location to location, and show how the variations influence postmortem interval estimates, with the goal of making these important determinations more accurate." Picard said the mapping of the black blow fly genome will also help researchers gain better insight into insecticide sensitivity and resistance. Knowledge of the genome will advance understanding of the antimicrobial compounds secreted by these specific insects as well. The black blow fly gene-mapping results completed at IUPUI have been deposited with the National Center for Biotechnology Information's Sequence Read Archive database and are accessible to researchers around the world.
10.1186/s12864-016-3187-z
Chemistry
Automated synthesis allows for discovery of unexpected charge transport behavior in organic molecules
Songsong Li et al. Using automated synthesis to understand the role of side chains on molecular charge transport, Nature Communications (2022). DOI: 10.1038/s41467-022-29796-2 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-29796-2
https://phys.org/news/2022-05-automated-synthesis-discovery-unexpected-behavior.html
Abstract The development of next-generation organic electronic materials critically relies on understanding structure-function relationships in conjugated polymers. However, unlocking the full potential of organic materials requires access to their vast chemical space while efficiently managing the large synthetic workload to survey new materials. In this work, we use automated synthesis to prepare a library of conjugated oligomers with systematically varied side chain composition, followed by single-molecule characterization of charge transport. Our results show that molecular junctions with long alkyl side chains exhibit a concentration-dependent bimodal conductance with an unexpectedly high conductance state that arises due to surface adsorption and backbone planarization, which is supported by a series of control experiments using asymmetric, planarized, and sterically hindered molecules. Density functional theory simulations and experiments using different anchors and alkoxy side chains highlight the role of side chain chemistry in charge transport. Overall, this work opens new avenues for using automated synthesis for the development and understanding of organic electronic materials. Introduction The development of high-performance organic electronic devices critically relies on a fundamental understanding of intra- and intermolecular charge transport 1, 2. Organic electronic materials are typically designed for high charge-carrier mobilities and generally have highly delocalized conjugated backbones, with common building blocks including acenes, fluorenes, phenylene vinylene derivatives, thiophene derivatives, diketopyrrolopyrrole, isoindigo, and their donor-acceptor copolymers 1, 2. Strong pi–pi interactions in these systems significantly decrease their solubility in organic solvents, which hinders solution processing. To overcome this issue, flexible side chains such as alkyl, fluoroalkyl, and oligo(ethylene glycol) chains are appended to pi-conjugated backbones to enhance solubility, facilitate solution processing, and tune intermolecular packing 3, 4. In recent years, side chain engineering has been pursued in organic electronic materials, highlighting the importance of side chain length, bulkiness, branching, substitution, and chain-end functionalization for charge transport in conjugated polymers 3, 4. However, the majority of prior work has focused on understanding the role of backbone composition and side chain chemistry in the macroscopic or device-level properties of these materials. Despite recent progress, the role of side chain chemistry in the molecular charge transport properties of conjugated organics is not fully understood. Unlocking the full potential of organic materials relies on accessing their vast chemical space, but the synthetic workload required to make large numbers of new compounds presents a practical barrier to properly surveying conjugated organic derivatives. In recent years, automated iterative coupling has emerged as a promising avenue to achieve precise sequence control and enable high-throughput synthesis of oligomeric small molecules 5. Recent efforts have leveraged general Suzuki cross-coupling reactions using stable and accessible chemical building blocks in the development of an automated small-molecule synthesizer 6. In this way, automated synthesis platforms have emerged as powerful tools to advance our understanding of functional materials properties, but these methods have not yet been widely applied to the field of organic electronics.
In general, we lack a full systematic understanding of structure-property relationships for organic electronic materials due to the tedious and challenging nature of chemical synthesis. In this work, we systematically investigate the role of side chain chemistry in the charge transport properties of conjugated oligomers using a combination of automated synthesis and single-molecule charge transport experiments. A library of terphenyl derivatives with different side chain compositions and anchors was synthesized using automated iterative Suzuki coupling. Following synthesis and chemical characterization, a scanning tunneling microscope-break junction (STM-BJ) technique was used to directly characterize the charge transport properties of these molecules. STM-BJ methods provide an ideal approach for understanding charge transport at the molecular level, thereby enabling quantitative structure-property relationships for organic materials 7, 8, 9. Our results show that molecular junctions with long alkyl side chains exhibit a bimodal conductance distribution with an unexpectedly high conductance state upon increasing concentration. Systematic control experiments using asymmetric, planarized, and sterically encumbered molecules indicate that the high conductance state results from surface adsorption and molecular planarization facilitated by long alkyl side chains. Our results further show that the choice of chemical anchors and side chains directly affects molecular adsorption, and density functional theory (DFT) simulations are in reasonable agreement with experiments. Taken together, these results highlight the use of automated chemical synthesis to understand molecular charge transport for the development of new organic electronic materials. Results and discussion Automated synthesis of terphenyl derivatives with varying alkyl side chains We used automated iterative Suzuki-Miyaura cross-coupling to prepare a library of terphenyl derivatives using stable and easily accessible molecular building blocks containing methyliminodiacetic acid (MIDA) boronates and dihalide groups (Supplementary Information, Sections S.1 and S.2 and Supplementary Figs. S16–S81). Building blocks contained phenyl rings with pre-installed alkyl side chains and terminal anchor moieties to facilitate STM-BJ experiments. Building on the first-generation automated small-molecule synthesizer designed by Burke and coworkers 6, we developed a second-generation small-molecule synthesizer capable of parallel runs of deprotections, couplings, and purifications (Fig. 1a). Briefly, the synthesis strategy relies on the MIDA protecting group rendering boron unreactive until deprotected with mild aqueous base, analogous to the Fmoc group in iterative peptide synthesis (Fig. 1b). The new automated synthesis instrument leverages advances in hardware and software to enable up to 12 fully parallelized, simultaneous preparative-scale iterative Suzuki reactions. Importantly, the increased throughput allows for the screening of large regions of chemical space in a single automated synthetic run. Using this approach, a library of symmetric terphenyl derivatives with different alkyl side chains (Rn) and an asymmetric target molecule (R6-H) were prepared using two automated procedures in conjunction with the standard building block set (Fig. 1c). Fig. 1: Automated chemical synthesis of terphenyl derivatives with different side chain compositions. a Picture of the automated synthesis instrument in our lab.
b Iterative coupling strategy for small-molecule synthesis using MIDA boronates. c Synthesis schemes for C2-symmetric and non-C2-symmetric terphenyl derivatives via iterative Suzuki coupling. Role of alkyl side chain length on molecular charge transport We began by studying the molecular conductance properties of Rn (Fig. 2a) using a custom-built scanning tunneling microscope-break junction (STM-BJ) instrument, as previously described 10, 11. STM-BJ has been used in prior work to study single-molecule charge transport as a function of backbone length 10, 12, chemical substituents 13, 14, molecular conformation 15, 16, and anchor-electrode contacts 17. Prior work has shown that alkyl side chains alter backbone conformation 16, thereby affecting molecular conductance. Here, we used STM-BJ to systematically understand the role of side chain length and composition in the molecular library generated by automated synthesis. Using this approach, we determined the molecular conductance of Rn in a nonpolar solvent (1 mM solution in 1,2,4-trichlorobenzene) at 0.25 V applied bias. One-dimensional (1D) (Fig. 2b, c) and two-dimensional (2D) (Fig. 2d, e and Supplementary Fig. 1) molecular conductance histograms were determined for Rn, such that each histogram is generated from a large ensemble of >4000 individual traces. Interestingly, molecular conductance in these terphenyl derivatives shows an unexpected dependence on alkyl chain length. In particular, R0-R2 show only a single prominent conductance peak, and molecular conductance decreases upon increasing the alkyl chain length (Fig. 2b). The average molecular displacement for R0-R2 is constant (≈0.8 nm) because the alkyl side chain does not significantly affect the molecular end-to-end distance for terphenyl derivatives with short side chains (Fig. 2d and Supplementary Fig. 1). In contrast, R3-R12 exhibit two dominant and well-spaced conductance states (high G and low G). Interestingly, the conductance of the high G state (10^-3 G0) is more than one order of magnitude larger than that of the low G state (≈10^-4–10^-5 G0). The molecular displacement corresponding to the high G state is ~0.4 nm, and the molecular conductance of the high G state is independent of the alkyl side chain length for R3-R12 (Fig. 2b and Supplementary Fig. 1). Fig. 2: Single-molecule characterization of charge transport in terphenyl derivatives with different alkyl side chains. a Schematic of Au–Rn–Au junction. b Conductance peak values of 1 mM Rn at 0.25 V applied bias. c 1D conductance histograms for 1 mM Rn at 0.25 V applied bias, each constructed from >4000 traces. d, e Representative 2D conductance histograms for R2 and R5. Alkyl side chains are known to change the twist angle between planar conjugated units in molecular backbones, which affects electronic properties 15, 16. To understand the role of backbone conformation in molecular conductance, we performed molecular modeling of conductance in terphenyl junctions with varying twist angles. Terphenyl junctions with different conformations (Fig. 3a) were modeled using DFT calculations performed with the Spartan'16 Parallel Suite using the B3LYP functional and a 6-31G(d,p) basis set. Following the determination of geometry-optimized structures, transmission functions were calculated using nonequilibrium Green's function-density functional theory (NEGF-DFT) via the Atomistix Toolkit package (Fig. 3b and Supplementary Figs. 2, 3).
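As a rough illustration of how such transmission calculations relate to conductance (and not a substitute for the NEGF-DFT workflow), the sketch below combines the Landauer relation G = G0·T(EF) with a simple cos²θ interpolation between the limiting conductances reported in the next paragraph; the interpolation form is our assumption.

```python
import math

# Reported limiting values (in units of G0): ~10^-2.9 for planar junctions
# and ~10^-4.5 near a 90-degree dihedral (see the following paragraph).
G_PLANAR = 10 ** -2.9
G_PERP = 10 ** -4.5

def conductance_vs_dihedral(theta_deg):
    """Interpolate junction conductance (in G0) with a cos^2(theta) law.

    In the Landauer picture G = G0 * T(EF); here the transmission at the
    Fermi energy is assumed to scale with cos^2 of the inter-ring dihedral,
    with a small residual channel near 90 degrees.
    """
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return G_PERP + (G_PLANAR - G_PERP) * c2

for theta in (0, 30, 60, 90):
    print(f"{theta:3d} deg  ->  {conductance_vs_dihedral(theta):.2e} G0")
```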
Our results show that transmission values close to the Fermi energy decrease with cos²θ (between θ = 0° and 90°), where θ is the dihedral angle between the first two phenyl rings. The simulated conductance is ≈10^-2.9 G0 for terphenyl junctions with fully planar conformations and ≈10^-4.5 G0 when the dihedral angle is close to 90°, in good agreement with the experimentally observed high G and low G states, respectively (Fig. 2b). To further validate these results, we synthesized two control molecules with planar (R1-planar) and twisted (R1-twisted) conformations (Fig. 3a). The experimentally measured conductance values for these molecules are in reasonable agreement with results from NEGF-DFT simulations (Fig. 3b and Supplementary Fig. 4). These results suggest that the low G state derives from twisted conformations and depends on backbone dihedral angles, whereas the high G state is likely associated with a planarized backbone but is only observed for long alkyl side chains (n ≥ 3). However, the molecular displacement corresponding to the high G state is ≈0.4 nm, which is significantly smaller than the expected displacement of an extended molecular backbone for a terphenyl derivative. Fig. 3: Molecular origin of the high conductance state. a Terphenyl derivatives with different conformations. b Peak molecular conductance values for terphenyl derivatives with different dihedral angles from DFT simulations and experiments. Dihedral angles are determined from the lowest-energy conformers using DFT simulations. c 1D molecular conductance histograms showing the concentration-dependent behavior of R5. d Representative 2D conductance histogram of R5 (0.01 mM) showing only the low G state. e 2D correlation of molecular conductance for R5 (0.1 mM). f Schematic showing the mechanism for the concentration-dependent conformation and conductance behavior of single-molecule junctions. The high G and low G states both arise from transport through the long axis of the molecule, facilitated by linkage of the molecule to the tip and substrate through the two terminal anchors. To fully understand the molecular origin of the high G state, we performed a series of single-molecule conductance experiments using R3-R12 at different concentrations (100 nM–1 mM) (Fig. 3c and Supplementary Fig. 5). Surprisingly, our results show that the emergence of the high conductance state depends on the solution concentration of R3-R12. R3 exhibits a single peak (low G state) at relatively low concentrations (<1 mM) and dual peaks (high G and low G states) at high concentrations (1 mM). Interestingly, the onset concentration for the emergence of the high G state decreases from 1 to 0.1 mM upon increasing the length of the alkyl side chain (Supplementary Fig. 5). Moreover, R3-R12 do not exhibit dual peaks at very low concentrations (0.01 mM) (Fig. 3d and Supplementary Fig. 6), whereas R2 and R3-iPr (containing the isomeric isopropyl side chain) do not exhibit a high G state even at very high concentrations (10 mM) (Supplementary Fig. 7), suggesting that a minimum linear alkyl side chain length is required for the emergence of the high conductance state. In addition, we analyzed correlations between the high G and low G states for R3-R12 by determining 2D covariance histograms 18, 19, 20 (Fig. 3e and Supplementary Fig. 8).
In a 2D covariance histogram, correlations are used to determine whether two conductance states occur independently (negative correlation, shown in blue) or sequentially (positive correlation, shown in red) in a single-molecule conductance trace. These results show a clear negative correlation between the high G and low G states (two blue regions in the green boxes), indicating that these two conductance states occur independently and do not occur sequentially in a single-molecule trace. Recent work has reported that additional conductance states may arise from in-backbone molecule-electrode linkages 9, 21, 22, 23, 24, in situ dimerization 17, 25, 26, 27, or intermolecular interactions 28, 29, 30. To investigate the origin of the high G state, we synthesized the non-C2-symmetric terphenyl derivative R6-H containing only one terminal anchor using automated iterative cross-coupling (Fig. 1c). Single-molecule characterization using STM-BJ shows that R6-H cannot form stable molecular junctions and therefore does not exhibit a high G state (10^-3 G0) at 0.1 mM (Supplementary Fig. 9). These results suggest that the high G state does not arise from intermolecular interactions or in-backbone linkages between the molecule and gold electrodes. To investigate the potential for solution-based aggregation, we performed 1H NMR and UV-vis dilution experiments at experimentally relevant concentrations for R6 (Supplementary Information, Section S.4 and Supplementary Figs. 13, 14). Overall, these experiments showed no significant spectral changes for concentrations between 0.01 and 10 mM, which further suggests that solution-based molecular aggregation is not responsible for the high G state. We further performed flicker noise analysis experiments for R6 at a solution concentration of 1 mM. Flicker noise analysis has been used in prior work to understand the nature of electronic coupling at metal-molecule interfaces 29, 31, 32. In this way, through-bond transport (where flicker noise scales as G^1.0) is distinguished from through-space transport (where flicker noise scales as G^2.0) based on the power-law exponent of noise power versus average conductance, where noise power is determined from power spectral density analysis of conductance fluctuations 13. Intermolecular charge transport typically results in larger conductance fluctuations, which correspond to through-space scaling of the flicker noise 29. Our results show that the noise power scales as G^1.01 for the high G state and G^1.36 for the low G state, corresponding to through-bond coupling in molecular junctions (Supplementary Fig. 10). These results further exclude the possibility of intermolecular interactions such as molecular aggregation, consistent with the 1H NMR dilution experiments. Taken together, the combination of flicker noise analysis and control experiments indicates that the high G state arises from molecule-electrode linkages between the SMe anchors at both termini. Based on these results, we hypothesized that long alkyl side chains (n ≥ 3) promote adsorption of the terphenyl backbone onto gold surfaces through van der Waals interactions, thereby inducing conformational changes that lead to increasingly planar backbone geometries (Fig. 3f). To test this hypothesis, we developed an analytical model based on Langmuir adsorption with a weakly bound conformation (low G state) that converts to a strongly bound state (high G state) (Supplementary Information, Section S.5).
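A minimal numerical sketch of such a two-state model is given below; the constants are arbitrary illustrative values chosen to show the threshold-like onset, not the fitted parameters of Section S.5.

```python
# Illustrative constants (1/mM); NOT the fitted parameters of Section S.5.
K_ADS = 5.0    # Langmuir constant for initial anchoring of a molecule
K_CONV = 0.8   # concentration-assisted conversion to the lying-down state

def state_fractions(conc_mM):
    """Return (fraction weakly bound 'standing up', fraction 'lying down')."""
    theta = K_ADS * conc_mM / (1 + K_ADS * conc_mM)  # total surface coverage
    # Conversion is assisted by free molecules in solution, which gives the
    # high-G (lying down) population a threshold-like onset in concentration.
    f_strong = theta * (K_CONV * conc_mM) / (1 + K_CONV * conc_mM)
    return theta - f_strong, f_strong

for c in (0.01, 0.1, 1.0, 10.0):
    weak, strong = state_fractions(c)
    print(f"{c:6.2f} mM   low-G ~ {weak:.3f}   high-G ~ {strong:.3f}")
```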
In this model, the low G state corresponds to an upright molecular conformation in which the junction is anchored at both termini to the metal electrodes, whereas the high G state corresponds to a conformation in which the molecule is "lying down" on the electrode surface via side chain-mediated van der Waals interactions. The concentration-dependent conductance behavior of the high G state suggests that molecular junctions transition from a "standing up" to a "lying down" conformation, facilitated by intermolecular interactions between free molecules in solution and molecules linked to the surface electrode via a terminal anchor 33, thereby planarizing the terphenyl backbone and reducing the steric hindrance for the adsorbed molecule to lie down on the surface. This sequential two-state adsorption results in a threshold-like dependence of the "lying down" conformation on solution concentration (Supplementary Fig. 15). As the solution concentration is increased, the "lying down" conformation becomes increasingly dominant, and the molecular sub-population exhibiting the high G state grows. Overall, the results from the analytical model qualitatively agree with the concentration-dependent conductance behavior observed in experiments. Molecular adsorption behavior in amine-terminated terphenyl derivatives To further understand the role of terminal anchor groups in the conductance behavior of terphenyl derivatives, we synthesized two amine-terminated terphenyls with different alkyl side chain lengths (R4-N and R6-N) using our automated synthesis method (Fig. 4a). It is known that SMe and NH2 serve as dative coordination anchors for Au electrodes, yet exhibit different contact resistances, binding behaviors, and adsorption free energies in single-molecule experiments 34, 35, 36. Interestingly, our results show that amine-terminated terphenyls exhibit different concentration-dependent conductance behaviors compared to their SMe-terminated counterparts. R4-N does not show a high G state even at relatively high concentrations (10 mM), suggesting a lack of side chain-mediated molecular adsorption of R4-N on electrode surfaces (Fig. 4b), despite its alkyl chains being identical to those of R4. We posit that this arises from the lower adsorption free energy and different binding behavior of NH2 compared to SMe anchors, which in turn inhibits backbone adsorption of R4-N at concentrations up to 10 mM. On the other hand, R6-N shows the emergence of an ultra-high conductance state (10^-2 G0) at 1 mM concentration (Fig. 4b, c), which contrasts with the behavior of R6 and R4-N. The ultra-high conductance state of R6-N occurs over small molecular displacements and likely arises from perpendicular transport via Au–pi orbital interactions (Fig. 4d), as reported in prior work 37, 38. If both terminal amine anchors are involved in surface binding, the probability of junction formation is likely reduced because the tip cannot disrupt the molecule-surface binding interactions. Terminal amine anchors contain only a single lone electron pair, rather than the two lone pairs of SMe anchors, which we speculate may decrease the availability of a lone pair for dative interaction with the free orbital on the gold tip. This mechanism of transport is consistent with the conductance pattern of R6-H (containing a single terminal SMe anchor) at 1 mM concentration (Supplementary Fig. 9), which exhibits a similar ultra-high conductance state (10^-2 G0) to R6-N.
Fig. 4: Effect of alkyl side chains in amine-terminated terphenyl derivatives. a Chemical structures of R4-N, R6-N, R4, and R6. b Concentration-dependent study of R4-N and R6-N. c 2D conductance histograms of 0.01 mM (low G state) and 1 mM (ultra-high G state) R6-N. d Hexyl side chains in R6-N facilitate backbone adsorption and planarization; however, the gold tip cannot pick the molecule up from the amine end, which induces Au–pi interactions and gives rise to ultra-high conductance. Role of side chain chemistry on molecular charge transport Oligo(ethylene glycol) (OEG) and alkoxy chains contain oxygen atoms that increase hydrophilicity, polarity, and chain flexibility compared to hydrophobic alkyl side chains 39. Conjugated materials bearing OEG or alkoxy side chains exhibit different morphologies and electronic properties compared to their alkyl side chain counterparts 39, 40, 41, 42. To investigate the role of side chain chemistry and composition in molecular charge transport, we synthesized a series of terphenyl derivatives with OEG and alkoxy side chains using automated synthesis (Fig. 5a). In general, molecules with OEG/alkoxy chains show higher conductance and reduced molecular adsorption compared to their alkyl side chain counterparts (Fig. 5b and Supplementary Figs. 11, 12). The enhanced intramolecular conductance for molecules with OEG/alkoxy chains likely derives from two factors. First, oxygen atoms in the side chain generally raise HOMO energy levels, resulting in slightly better alignment with the Fermi level of the Au electrodes 14 (Supplementary Table 1). Second, molecules with oxygen-containing side chains generally show more planar backbone conformations 43, which enhances charge transport (Supplementary Table 1). We further hypothesize that the reduced adsorption behavior of molecules with OEG/alkoxy chains is related to side chain flexibility. Prior work reported that introducing oxygen atoms into linear alkyl chains decreases the barriers to rotation around the corresponding single bonds, thereby increasing the degrees of freedom of the linear chains 39. More flexible oxygen-containing side chains may incur a greater entropic cost upon surface immobilization; the increased protein binding affinity of macrocyclic small molecules relative to their linear counterparts is attributed to a similar phenomenon 44, 45. To test this hypothesis, we synthesized a series of molecules with the same side chain length but varying levels of side chain oxygenation (O3, RO7, and R8). In the case of RO7 (side chains with a single oxygen atom), the high conductance peak appears prominently only at 10 mM, compared to 0.1 mM for R8 (Fig. 5b). On the other hand, O3 (containing three oxygen atoms) did not show a prominent high conductance peak at concentrations up to 10 mM (Fig. 5b and Supplementary Fig. 12). These results are consistent with increased side chain oxygenation increasing chain flexibility and suppressing side chain-mediated adsorption (Fig. 5c). Moreover, it is possible that aliphatic chains form stronger van der Waals interactions with the electrode surface than their oxygenated counterparts 46. Fig. 5: Single-molecule characterization of terphenyl derivatives with OEG and alkoxy chains. a Chemical structures of terphenyl derivatives with OEG and alkoxy chains. b Concentration-dependent study of O3, RO7, and R8. c Differences in flexibility and adsorption free energy between O3, RO7, and R8.
Full size image In this work, we use automated synthesis and single-molecule experiments to investigate the effect of side chain chemistry on the charge transport properties of terphenyl derivatives. Broadly, these results deepen our understanding of structure–function relationships in organic electronic materials and highlight the need to fully understand the effects of substitution position, side chain chemistry, backbone identity, and side chain functionalization on charge transport. Overall, our results show that molecular adsorption and molecular conformation can be controlled using different side chain chemistries and anchor groups, which is useful for informing complementary studies on side chain engineering involving organic electronic devices and thin films. However, our work focuses primarily on intrachain transport, which complicates direct comparison to studies involving organic thin films due to the combined roles of intra- and intermolecular transport in those systems. Nevertheless, our results suggest new strategies for interface engineering, which has been widely used to tune the device performance of organic electronics 47 . Adsorbed molecules at organic electronic interfaces affect charge injection barriers and change thin film morphologies 47 , 48 , 49 . From this view, our work provides molecular engineering strategies to control molecular adsorption and conformation at semiconductor-electrode interfaces using different side chain chemistries and anchors. For example, our work shows that the amine-terminated R6-N molecule exhibits “standing up” and “lying down” conformations at different concentrations; the interfacial surface energy is expected to change with concentration accordingly. Broadly, our work demonstrates the utility of automated chemical synthesis to enable efficient and systematic exploration of chemical space for organic electronic materials. Overall, this work opens new avenues in combining automated synthesis with single-molecule characterization to aid in the design of new materials for organic electronics. Methods Small-molecule synthesis Automated small-molecule synthesis was performed in parallel on a Burke-type small-molecule synthesizer 6 using two general automated procedures (iterative or non-iterative Suzuki coupling). Building blocks (MIDA boronates, halides), palladium catalyst, and inorganic base were loaded onto the synthesizer along with cartridges for drying, precipitation, and purification, followed by execution of the automated synthesis procedures. After completion of the automated procedures, crude reaction mixtures were purified using medium pressure liquid chromatography (MPLC) or preparative high-performance liquid chromatography (HPLC). These procedures are generally amenable to scale-up and can be performed by hand to achieve similar results at a slower pace. For full experimental details and chemical characterization, see the Supplementary Information. Single-molecule conductance measurements Single-molecule conductance measurements were performed using a home-built scanning tunneling microscope setup 9 , 15 . Gold STM tips were prepared using 0.25 mm Au wire (99.998%, Alfa Aesar). Gold substrates were prepared by evaporating 120 nm of gold onto polished AFM metal specimen disks (Ted Pella). Conductance measurements were carried out in 0.001–1 mM molecule solutions in 1,2,4-trichlorobenzene.
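As noted in the analysis description that follows, one- and two-dimensional conductance histograms are compiled from thousands of breaking traces without data selection. A minimal sketch of that compilation step is given below; the trace format (paired displacement and conductance arrays) and the bin ranges are assumptions for illustration, not the authors' exact analysis code.

    import numpy as np

    def compile_histograms(traces, g_bins=None, z_bins=None):
        """Accumulate 1D and 2D conductance histograms from breaking traces.

        traces: iterable of (displacement_nm, conductance_G0) array pairs.
        Conductance is binned logarithmically, as is conventional for
        STM-BJ data spanning many decades of G/G0.
        """
        if g_bins is None:
            g_bins = np.logspace(-6, 0, 121)      # 10^-6 G0 up to 1 G0
        if z_bins is None:
            z_bins = np.linspace(-0.5, 2.5, 151)  # tip displacement in nm
        hist_1d = np.zeros(len(g_bins) - 1)
        hist_2d = np.zeros((len(z_bins) - 1, len(g_bins) - 1))
        for z, g in traces:  # every trace counts: no data selection
            hist_1d += np.histogram(g, bins=g_bins)[0]
            hist_2d += np.histogram2d(z, g, bins=(z_bins, g_bins))[0]
        return hist_1d, hist_2d

    # Toy example: two synthetic traces with an exponential tunneling decay.
    rng = np.random.default_rng(0)
    traces = []
    for _ in range(2):
        z = np.linspace(0.0, 2.0, 2000)
        g = 10.0 ** (-3.0 * z + rng.normal(0.0, 0.1, z.size))
        traces.append((z, g))
    h1, h2 = compile_histograms(traces)
    print(h1.sum(), h2.sum())  # total counts accumulated in each histogram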
Break junction experiments were performed at a constant bias as described. One- and two-dimensional conductance histograms (>4000 traces for each molecule) were constructed without data selection. Flicker noise analysis Flicker noise analysis was performed to distinguish between intramolecular and intermolecular modes of charge transport for a subset of molecules 31 . Conductance fluctuations were measured by holding individual molecular junctions at a fixed position for 150 ms with a sampling rate of 40 kHz after the junctions were formed between the tip and substrate electrode. After each measurement, the junctions were further elongated until ruptured before repeating the measurement. This process was repeated for >15,000 iterations for each sample. During analysis, only traces where junctions survived throughout the entire 150-ms holding period were considered. The discrete Fourier transform of the conductance data in the holding phase of the experiment was computed and squared to obtain the noise power spectral density (PSD). Flicker noise was quantified by numerically integrating the PSD between frequencies of 100 Hz and 1 kHz, followed by normalization by the average conductance of the corresponding trace. Two-dimensional histograms of normalized noise power (PSD/G) versus the average conductance (G) were constructed from “effective” traces with junction conductance within one standard deviation of the ensemble peak average conductance. The relationship between the normalized noise power (PSD/G^n) and the average conductance (G) was determined by 2D Gaussian fitting to the 2D histograms 29 , where n is the scaling exponent. The scaling exponent n was varied between n = 1–2 with a step size of 0.05 to minimize the correlation parameter from the 2D Gaussian fit. Molecular modeling and density functional theory (DFT) simulations Electron transport calculations were performed using the nonequilibrium Green’s function-density functional theory (NEGF-DFT) method via the Atomistix Toolkit package. Molecular geometries are first optimized using Spartan with a 6-31G** basis set. Geometry-optimized molecules are then placed in constructed junctions, and all atoms are relaxed until residual forces fall below 0.05 eV/Å using DFT with the local spin density approximation, a double-ζ polarized basis set for the molecules (except for gold atoms, which use a single-ζ basis set), and k-point sampling of 3 × 3 × 50, where the axis with 50 k-points lies along the direction of transport. Transmission spectra are then calculated for the junction. Data availability All other data are available from the corresponding author upon request. Code availability The STM-BJ data that support the findings were acquired using a custom instrument controlled by custom software (Igor Pro, Wavemetrics). The automated small-molecule synthesizer was controlled by custom software (LabVIEW). The data analysis software is available from the corresponding author upon request.
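The flicker noise procedure described above reduces to a short numerical recipe: Fourier-transform the held-junction conductance signal, square it to obtain the PSD, integrate between 100 Hz and 1 kHz, and normalize by G^n for trial exponents n. A minimal sketch follows; the sampling rate and holding time are taken from the Methods, while the conductance trace itself is synthetic.

    import numpy as np

    FS = 40_000     # sampling rate in Hz, as stated in the Methods
    HOLD_S = 0.150  # 150 ms holding period

    def flicker_noise_power(g_trace, fs=FS, f_lo=100.0, f_hi=1000.0):
        """Integrated noise power of a held-junction conductance trace.

        The DC-subtracted trace is Fourier transformed and squared to give
        a periodogram estimate of the PSD, which is then integrated over
        the f_lo to f_hi band. Returns (noise_power, mean_conductance).
        """
        g = np.asarray(g_trace, dtype=float)
        g_mean = g.mean()
        spectrum = np.fft.rfft(g - g_mean)  # discrete Fourier transform
        freqs = np.fft.rfftfreq(g.size, d=1.0 / fs)
        psd = (np.abs(spectrum) ** 2) / (fs * g.size)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        df = freqs[1] - freqs[0]
        noise_power = psd[band].sum() * df  # numerical integration
        return noise_power, g_mean

    # Synthetic held trace: constant conductance plus slow fluctuations.
    rng = np.random.default_rng(1)
    n_pts = int(FS * HOLD_S)
    g_trace = 1e-4 + 1e-6 * np.cumsum(rng.normal(size=n_pts)) / np.sqrt(n_pts)

    power, g_avg = flicker_noise_power(g_trace)
    # In the full analysis, n is scanned from 1 to 2 in steps of 0.05 over
    # many junctions to minimize the correlation between PSD/G^n and G.
    for n in (1.0, 1.5, 2.0):
        print(f"n = {n:.2f}: PSD/G^n = {power / g_avg ** n:.3e}")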
A cross-disciplinary University of Illinois at Urbana-Champaign (UIUC) team has demonstrated a major breakthrough in using automated synthesis to discover new molecules for organic electronics applications. The technology that enabled the discovery relies on an automated platform for rapid molecular synthesis at scale—which is a game-changer in the field of organic electronics and beyond. Using automated synthesis, the team was able to rapidly scan through a library of molecules with precisely defined structures, thereby uncovering, via single-molecule characterization experiments, a new mechanism for high conductance. The work was just reported in Nature Communications and is the first major result to emerge from the Molecule Maker Lab, which is located in the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The unexpectedly high conductance was uncovered in experiments led by Charles M. Schroeder, who is the James Economy Professor in materials science & engineering and a professor in chemical & biomolecular engineering. The project's goal was to seek out new molecules with strong conductivity that might be suitable for use in molecular electronics or organic electronics applications. The team's approach was to systematically append many different side chains to molecular backbones to understand how the side chains affected conductance. The first stage of the project consisted of synthesizing a large library of molecules to be characterized using single-molecule electronics experiments. If the synthesis had been done with conventional methods, it would have been a long, cumbersome process. That effort was avoided through use of the Molecule Maker Lab's automated synthesis platform, which was designed to facilitate molecular discovery research that requires testing of large numbers of candidate molecules. Edward R. Jira, a Ph.D. student in chemical & biomolecular engineering who had a leading role in the project, explained the synthesis platform's concept. "What's really powerful... is that it leverages a building-block-based strategy where all of the chemical functionality that we're interested in is pre-encoded in building blocks that are bench-stable, and you can have a large library of them sitting on a shelf," he said. A single type of reaction is used repeatedly to couple the building blocks together as needed, and "because we have this diverse building block library that encodes a lot of different functionality, we can access a huge array of different structures for different applications." As Schroeder put it, "Imagine snapping Legos together." Co-author Martin D. Burke extended the Lego-brick analogy to explain why the synthesizer was so valuable to the experiments—and it wasn't only because of the rapid production of the initial molecular library. "Because of the Lego-like approach for making these molecules, the team was able to understand why they are super-fast," he explained. Once the surprisingly fast state was discovered, "using the 'Legos,' we could take the molecules apart piece by piece, and swap in different 'Lego' bricks—and thereby systematically understand the structure/function relationships that led to this ultrafast conductivity." Ph.D. student Jialing (Caroline) Li, an expert in single-molecule electronics characterization who studied the molecules generated by the synthesizer, explained the essence of the conductivity discovery. 
"We observed that the side chains have a huge impact on how the molecule behaves and how this affects charge transport efficiency across the entire molecule," she said. Specifically, the team discovered that molecular junctions with long alkyl side chains have unexpectedly high conductance, which is dependent on concentration. They also figured out the reason for the high conductivity: The long alkyl side chains promote surface adsorption (the molecule's ability to adhere to a surface), which results in planarization (in effect, flattening out) of the molecules such that electrons can flow through them more efficiently. Burke, who is the May and Ving Lee Professor for Chemical Innovation and a professor of chemistry, called the building-block approach a "one-two punch": it makes the platform "a powerful engine for both discovering function, and then understanding the function." The conductance discovery represents a significant advance for the field of organic electronics. "Semiconductor-metal interfaces are ubiquitous in electronic devices. The surprising find of a high conductance state induced by metallic interfaces can pave the way to new molecular design for highly efficient charge injection and collection across a wide range of electronic applications," said co-author Ying Diao, an I. C. Gunsalus Scholar, Dow Chemical Company Faculty Scholar, and associate professor of chemical & biomolecular engineering. Schroeder explained that organic electronic materials have multiple benefits. To begin with, their use avoids the need for metals or other inorganic electronics. But organic electronics also offer much more: deformation and elastic properties that can be vital to some applications, such as implantable medical devices that could bend and flex along with, for example, a beating heart. Such organic devices could even be designed to degrade within the body, so that they break down and disappear after their job is done. Some organic electronics are already available in commercial products. For example, organic light-emitting diodes (OLED) can be found in the screens of smart phones, smart watches, and OLED TVs. It's anticipated that organic solar cells are on their way to becoming a commercial success as well. But the research community has only scratched the surface of organic electronics' potential; progress has been slowed by the lack of key materials discoveries like the one just made by the UIUC team. Schroeder said that it's significant to have proven that "we can design and synthesize large libraries for various applications." The paper "showcases the fact that we successfully did it for a class of molecules for molecular electronics." He admitted, "I didn't expect to see something as interesting on this first study." Co-author Jeffrey S. Moore, who is a Stanley O. Ikenberry Endowed Chair, professor of chemistry, and Howard Hughes Medical Institute Professor, reflected on the work: "Advancing basic science and technology by combining new facilities with a collaborative team is what makes the Beckman Institute so special. This discovery is the first of many that will come from the Molecule Maker Lab." Schroeder believes that the Molecule Maker Lab facilities—which also offer artificial intelligence capabilities for predicting what molecules are likely to be worth making—will open up a new approach to research in that "you can start thinking about designing based on a function instead of a structure." 
Whereas researchers today might start by saying, "I need to make this particular structure because I think it's going to do something," it will be possible to tell the system, "I want to get this ultimate function," and then let it help you figure out what structures you should make to get that function. The intent is eventually to make the Molecule Maker Lab facilities available to researchers outside UIUC. Burke said he'd like to see the Lab "become a global epicenter of democratized molecular innovation," empowering people who are not molecular synthesis specialists to solve important research problems. "I think this is the beginning of something really special," Burke said. "The journey has begun."
10.1038/s41467-022-29796-2
Biology
Aircraft microbiome much like that of homes and offices, study finds
The Airplane Cabin Microbiome, Microbial Ecology (2018). DOI: 10.1007/s00248-018-1191-3 Journal information: Microbial Ecology
http://dx.doi.org/10.1007/s00248-018-1191-3
https://phys.org/news/2018-06-aircraft-microbiome-homes-offices.html
Abstract Serving over three billion passengers annually, air travel is a conduit for infectious disease spread, including emerging infections and pandemics. Over two dozen cases of in-flight transmission have been documented. To understand these risks, a characterization of the airplane cabin microbiome is necessary. Our study team collected 229 environmental samples on ten transcontinental US flights with subsequent 16S rRNA sequencing. We found that bacterial communities were largely derived from human skin and oral commensals, as well as environmental generalist bacteria. We identified clear signatures for air versus touch surface microbiomes, but not for individual types of touch surfaces. We also found large flight-to-flight beta diversity variations with no distinguishing signatures of individual flights, rather a high between-flight diversity for all touch surfaces and particularly for air samples. There was no systematic pattern of microbial community change from pre- to post-flight. Our findings are similar to those of other recent studies of the microbiome of built environments. In summary, the airplane cabin microbiome has immense airplane-to-airplane variability. The vast majority of airplane-associated microbes are human commensals or non-pathogenic, and the results provide a baseline for non-crisis-level airplane microbiome conditions. Introduction With over three billion airline passengers annually, the risk of in-flight transmission of infectious disease is a vital global health concern [ 1 , 2 ]. Over two dozen cases of in-flight transmission have been documented, including influenza [ 3 , 4 , 5 , 6 , 7 ], measles [ 8 , 9 ], meningococcal infections [ 10 ], norovirus [ 11 ], SARS [ 12 , 13 ], shigellosis [ 14 ], cholera [ 15 ], and multi-drug resistant tuberculosis [ 1 , 16 , 17 , 18 ]. Studies of SARS [ 12 , 13 ] and pandemic influenza (H1N1p) [ 19 ] transmission on airplanes indicate that air travel can serve as a conduit for the rapid spread of newly emerging infections and pandemics. Further, some of these studies suggest that the movements of passengers and crew (and their close contacts) may be an important factor in disease transmission. In 2014, a passenger infected with Ebola flew on Frontier Airlines the night before being admitted to a hospital [ 20 ]. Fortunately, she did not infect anybody during that trip. Despite many sensational media stories and anecdotes, e.g., “Flying The Filthy Skies” [ 21 ] or “The Gross Truth About Germs and Airplanes” [ 22 ], the true risks of in-flight transmission are unknown. An essential component of risk assessment and public health guidance is characterizing the background microbial communities present, in particular those in the air and on common touch surfaces. Next-generation sequencing has the potential to identify all bacteria present via their genomes; the community identified in this way is commonly called the microbiome. There have been a few previous studies of the bacterial community in cabin air [ 23 , 24 , 25 , 26 ], but none, to our knowledge, on airplane touch surfaces. These studies estimated the total bacterial burden of culturable cells present, and applied early forms of 16S rRNA sequencing and bioinformatics, claiming species-level resolution. At the time of these studies, there were far fewer reference genomes with which to align. Although these studies were at the vanguard of research on the microbiome of built environments, current methods and protocols, ten years later, are significantly more rigorous.
The microbiome of the built environment is an active research area. Using a wide range of methods, authors have studied the microbiomes of classrooms [ 27 , 28 , 29 ], homes [ 30 , 31 , 32 ], offices [ 33 , 34 ], hospitals [ 35 ], museums [ 36 ], nursing homes [ 37 ], stores [ 38 ], and subways [ 39 , 40 , 41 ]. Several of these studies, particularly those of classrooms and offices, identified significant quantities of Lactobacillus on seats. With the exception of the hospital microbiome, all of these studies indicate that the main microbiome constituents, at the family level, are human commensal and environmental bacteria. Airplane environments differ from the examples listed above. Special features include very dry air, periodic high occupant densities, exposure to the microbiota of the high atmosphere, and long periods during which occupants have extremely limited mobility. Thus, one might expect the airplane cabin microbiome to differ considerably from those of other built environments. Another key difference is that in an airplane cabin, it is difficult to avoid a mobile sick person, or one sitting in close proximity. In another publication [ 42 ], we describe behaviors and close contacts of all passengers and flight attendants in the economy cabin on ten flights of duration 4 hours or more (the FlyHealthy™ Study). FlyHealthy™ has provided the first detailed understanding of infectious disease transmission opportunities in an airplane cabin. In addition to quantifying the opportunities, we wanted to understand the infectious agents present in an airplane cabin that might be transmitted during these opportunities. To this end, we identified the microbiota present on these flights, allowing characterization of the airplane cabin microbiome. We hypothesized that the airplane cabin microbiome differs from that of other built environments for the reasons stated above. Since the majority of flights took place during the seasonal flu epidemic in either the originating city or the destination city, we were interested to determine if we could detect influenza virus in our samples. Since the transmission opportunities we characterized in the first part of the FlyHealthy™ study were those that would allow transmission by large droplets, we were interested in sampling air as well as touch surfaces (fomites). Key questions related to differences between types of samples (air versus touch surfaces), pre- to post-flight changes, and changes from flight to flight in the “core” airplane cabin microbiome. Results Airplane Cabin Bacterial Communities in the Air and on Touch Surfaces Skin commensals in the family Propionibacteriaceae dominate both air (~ 20% post-filtered reads) and touch surfaces (~ 27% post-filtered reads). There is substantial overlap of the top 20 families in air and touch surface samples (Fig. 1 ). The top ten families in both air and fomites additionally contain Enterobacteriaceae , Staphylococcaceae , Streptococcaceae , Corynebacteriaceae , and Burkholderiaceae . The environmental family Sphingomonadaceae is quite prevalent in the air, but much less so on touch surfaces. Note that “unclassified family” aggregates different families from different higher level taxa. The top OTUs are shown in SM Fig. 1 .
Fig. 1 Most prevalent families in air (left) and touch surface samples (right) by relative abundance (proportion of families) Full size image OTUs within the genera Propionibacterium and Burkholderia were present in every sample, and two OTUs, annotated as genus Staphylococcus and Streptococcus ( oralis ), were present in all but one sample. These four OTUs are contained in three phyla: Actinobacteria , Proteobacteria , and Firmicutes , and comprise the “core” airplane cabin microbiome. Air and Touch Surface Communities Have Discernible Signatures, but There Are No Discernible Signatures of Touch Surface Types Figure 2 shows the results of the principal component analysis (PCA) on a log-scale of families of all samples over all ten flights. The associated scree plot (SM Fig. 3 ) indicates that the vast majority (73%) of the variability is captured by the first principal component, about an order of magnitude more than that captured by PC2. We observe that the air samples are primarily positive on PC1 and, in fact, greater than 50, while the touch surface samples are largely negative. When combined with the variance explained by PC1 (Fig. 2 b), this indicates a clear signature of the air community. The complement is the signature of the touch surface community. There is a potpourri of touch surface types in the figure, again indicating the lack of a clear signature for individual touch surface types. There are no statistically significant differences in alpha diversity between air and fomites as measured by any of six indices (SM Fig. 2 ). Fig. 2 Scatterplot of the logs of the first two principal components, colored by sample source. a Families. b OTUs Full size image Use of an infinite Dirichlet–multinomial mixture (iDMM) model [ 43 ] identified four clusters (or ecostates), with ecostate 4 containing the vast majority of air samples, though it also includes many fomite samples (Fig. 3 a). Figure 3 b shows the diagnostic OTUs present in this air cluster and their weights. Note that the weights are an essential component of this characterization. Fig. 3 Results of iDMM analysis indicating two distinct ecostates. a Composition of the four ecostates identified in the iDMM analysis. b Most prevalent OTUs identified in the two ecostates associated with cabin air Full size image Another important question is whether bacterial communities change discernibly during flight. Again, Fig. 4 shows the admixture of pre- and post-flight communities in the touch surface samples. Note the linearity of these scatterplots of the logged average number of reads for OTUs from pre- to post-flight for each touch surface type. There is no discernible pattern of change from pre-flight to post-flight communities. Fig. 4 Logged average number of reads for OTUs from pre- to post-flight for each touch surface (fomite) type Full size image A final key question is whether bacterial communities in the cabin air change discernibly from flight to flight. For example, is there a difference between east-bound and west-bound flights? A principal component analysis at both the family and OTU levels shows a wide variation with no clustering by flight (Fig. 5 ). Furthermore, without exception, between-flight (B) beta diversity is statistically higher than within-flight (W) beta diversity; that is, each flight already starts with a microbiome that likely differs from those of other flights. Fig. 5 Beta diversity of samples.
Scatterplot of the first two principal components of the beta diversity analysis, for a OTU-level and b family-level abundance, based on a Bray-Curtis distance. c Distributions of Bray-Curtis distances for different touch surface types, within and between flights Full size image Discussion Toward the goal of characterizing the airplane cabin microbiome, our study team flew on ten transcontinental US flights on which we collected 229 air and touch surface samples. We employed highly stringent quality control criteria during sampling, sample extraction, 16S rRNA gene sequencing, and the bioinformatics pipeline. The observed microbial communities, when merged across samples, are comprised of human commensals and common environmental (water and soil) genera. We identified a “core” airplane cabin microbiome containing OTUs within the genera Propionibacterium, Burkholderia ( glumae ), Staphylococcus , and Streptococcus ( oralis ). We identified clear OTU signatures for the air microbiome, but not for individual touch surface types. We found no meaningful differences between air and touch surfaces with respect to alpha diversity measures. Finally, we found no systematic pattern of change from pre- to post-flight. We also found large flight-to-flight variations with no distinguishing signatures of individual flights. This suggests that each flight starts with a microbiome that differs from those of other flights, which would greatly hinder pre- and post-flight microbiome comparisons (e.g., Fig. 4 ) that aggregate samples between flights. A methodological implication is that aggregating communities between flights for statistical analyses is problematic. Instead, sample replication must be derived from within a flight in order to determine how passengers alter the airplane cabin microbiome. That every plane differs in its microbiome suggests that each retains aspects of its historical living microbiome, that is, its passengers. The development of a cleaning routine that erases much of this inherited microbiome could be a powerful preventative measure against the spread of disease. Propionibacterium is a genus of the phylum Actinobacteria , comprised of commensal bacteria that live on human skin and are commonly implicated in acne. Burkholderia glumae is a species of the phylum Proteobacteria and is a soil bacterium. Staphylococcus is a genus of the phylum Firmicutes that is found on the skin and mucous membranes of humans. Most species of Staphylococcus are harmless. Streptococcus oralis , a species of the phylum Firmicutes , is normally found in the oral cavities of humans. These constituents of the core airplane cabin microbiome are usually harmless to humans unless an unusual opportunity for infection is present, such as a weakened immune system, an altered gut microbiome, or a breach in the integumentary system. While airplane cabins are certainly examples of built environments, they have unique features. These include very dry air, periodic high occupant densities, exposure to the microbiota of the high atmosphere, long periods during which occupants have extremely limited mobility, and the difficulty of avoiding a mobile sick person or one sitting in close proximity. Half of the cabin air is recycled after passing through a bank of HEPA filters, and the other half is taken from the outside. Furthermore, the airline’s cabin cleaning policy is to disinfect all hard surfaces whenever the plane “overnights,” and all touch surface samples were taken from hard surfaces.
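For readers reproducing the within- versus between-flight beta diversity comparison above (Fig. 5c), the computation reduces to grouping pairwise Bray-Curtis distances by whether the two samples share a flight. A minimal sketch follows; the abundance matrix and flight labels are illustrative, and the Mann-Whitney U test is one reasonable choice of comparison rather than a restatement of the study's exact statistical procedure.

    import numpy as np
    from itertools import combinations
    from scipy.spatial.distance import braycurtis
    from scipy.stats import mannwhitneyu

    # Hypothetical inputs: rows are samples, columns are OTU abundances
    # (already normalized), with one flight label per sample.
    abundances = np.array([
        [10, 0, 5, 1], [8, 1, 6, 0],   # flight A samples
        [0, 12, 2, 3], [1, 10, 1, 4],  # flight B samples
    ], dtype=float)
    flights = ["A", "A", "B", "B"]

    within, between = [], []
    for i, j in combinations(range(len(flights)), 2):
        d = braycurtis(abundances[i], abundances[j])
        (within if flights[i] == flights[j] else between).append(d)

    # One-sided test: are between-flight distances larger than within-flight?
    stat, p = mannwhitneyu(between, within, alternative="greater")
    print(f"median within = {np.median(within):.2f}, "
          f"median between = {np.median(between):.2f}, p = {p:.3f}")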
Different airlines have different cabin disinfection protocols and supervise their cabin cleaning staff in different ways. Despite the uniqueness of the airplane cabin as a built environment, our findings are surprisingly consistent with other recent studies of the microbiome of built environments. This consistency is reassuring in light of frequent sensationalistic media stories about dangerous germs found on airplanes. By this measure, there is no more risk from 4 to 5 hours spent in an airplane cabin than from 4 to 5 hours spent in an office, all other exposures being the same. Our microbiome characterization also provides a baseline for non-crisis-level airplane microbiome conditions. It is not possible to make quantitative comparisons to other studies, which used different primers and different sequencing methods and technologies. For example, the genus Propionibacterium is a core component of the airplane cabin microbiome, but by choice of primers, the most common species, Propionibacterium acnes , a common skin commensal, was excluded from discovery in the New York City subway microbiome study. Although different primers and sequencing techniques were used, the core microbiome identified in the Boston subway system study overlaps significantly with that of airplane cabins [ 41 ]. Corynebacteriaceae , a skin commensal, appeared in nearly every subway sample, and while we do not include it in the airplane cabin core list, it was present in all but ten of our samples. A study of the microbiome of the International Space Station, the only other airborne built environment that has been studied, led to the same conclusion [ 44 ], as did two studies of office spaces [ 33 , 34 ]. A number of previous studies identified large amounts of Lactobacillus , but Lactobacillaceae did not appear in our list of the 20 most prevalent families in our touch surface samples. Lactobacillus is commonly found in vaginal microbiota, suggesting that it should be found on surfaces where women sit. Many other studies of the built environment have sampled seats, and thus, it is not surprising to find Lactobacilli present in those environments. We did not sample from the seat fabric where passengers sat; thus, the absence of Lactobacilli among the 20 most prevalent families is to be expected. Airplanes fly through clouds. The narrow-body twin-engine models on which we flew use about 50% bleed (outside) air to refresh the cabin air throughout the flight. A study of the microbiome of clouds finds members of the genera Propionibacterium and Burkholderia in its core, as well as Streptococcus in some samples [ 45 ]. A more recent study of cloud water found Burkholderia, Staphylococcus , and Streptococcus in samples [ 46 ]. Interesting future research would be to ascertain the influence of the cloud microbiome on the airplane cabin microbiome. In conclusion, our study found that although the microbiome of airplane cabins has large flight-to-flight variations, it resembles the microbiome of many other built environments. This work adds to the growing body of evidence characterizing the built environment. These investigations form critical linkages between the categories of environmental and human-associated microbial ecology, and thus must meet the challenges of both areas.
Improvements in future studies should include incorporation of rich metadata, such as architectural and other design features, human-surface contacts, and environmental exposures, as well as determination of microbe viability and the mechanisms used to persist in the airplane cabin environment. Identification of microbes that can be transferred between passengers and specific fomites will be especially important in informing public health and transportation policy. We hope to undertake an analogous study on significantly longer, international flights, as well as at key locations at departing and arriving airports. An improved understanding of the airplane cabin microbiome and how it is affected by passengers and crew may ultimately lead to the construction of airplane cabins that maintain human health. Materials and Methods Selection of Flights Each of five round-trips, on non-stop flights, targeted a different west coast destination to provide data representative of transcontinental flights. We flew to San Diego, Los Angeles, San Francisco, and Portland, OR, between November 2012 and March 2013. We flew to Seattle, WA, in May 2013. We flew on narrow-body twin-engine aircraft, with all but one flight on a specific model. Our movement data are representative of passenger and crew movements in a single aisle “3 + 3” economy cabin configuration. Air Sampling Methods The two air sampling pumps used were model SKC AirChek XR5000. These were located in a seat at the back of the economy class cabin. Both pumps sampled at 3.5 liters per minute, the NIOSH protocol for stationary sampling and approximately the normal breathing rate of adults. Just prior to each sampling, each pump was calibrated using a MesaLab Defender Calibrator. Air samples of 30-min duration were collected onboard the aircraft during five distinct sampling intervals. Once the pilot announced the flight time, we calculated the quarter-way, halfway, and three-quarter-way points. Thus, the five sampling periods were pre-boarding and boarding, Q1 ± 15 min, Q2 ± 15 min, Q3 ± 15 min, and touchdown to end of deplaning. In addition, one sample was collected throughout the whole flight from 10,000 ft on ascent to 10,000 ft on descent. Flight 2 only has data for four time points. Following each sampling period, the sampling cartridges were wrapped with Teflon tape, labeled, logged, and placed in a cooler with chemical ice packs. Fomite Sampling Methods Prior to each flight, we prepared an ordered list of seven randomly selected seats, of which the first two occupied seats, as confirmed by the gate agent prior to boarding, were sampled. We also randomly chose a rear lavatory door (port or starboard) for sampling. We swabbed the lavatory door handles using Bode SecurSwab DNA Collector dual swabs, placing three drops of DNA- and RNA-free water on one of the two swabs, then swabbing in one direction within a 9 cm × 9 cm template, and finally swabbing in the perpendicular direction within the same template. Afterwards, we placed each swab into its secure tube, labeled it, logged it, and placed it into a cooler on a chemical ice pack. We sampled three touch surfaces at each passenger seat—the inside tray table, the outside tray table, and the seat belt buckle. Using the templates and the dual swabs, we sampled the bottom corners of each side of the tray table as described above. We did not use the template to swab the seat belt buckle; rather, we swabbed the entire upper surface in one direction and then in the perpendicular direction.
We placed each swab into its secure tube, labeled it, logged it, and placed it into a cooler on a chemical ice pack. Material from the two swabs was combined in Tris buffer and homogenized per kit instructions. The air filters were similarly prepared. DNA isolations were performed using the Power Soil kit (MoBio Laboratories, Carlsbad, CA) according to the manufacturer’s directions with an elution volume of 50 μl. The 16S rRNA gene was amplified for sequencing using the 515F primer (5′ GTGCCAGCMGCCGCGGTAA 3′) and 806R primer (5′ GGACTACHVGGGTWTCTAAT 3′) [ 47 ]. The 16S rRNA gene-specific primers were tailed with Illumina adaptor sequences to allow a secondary PCR to add indexing barcodes and full Illumina adaptor sequences to support paired-end sequencing. Libraries were pooled for sequencing in batches of 48 samples and sequenced on the Illumina MiSeq at HudsonAlpha Biosciences. Paired-end sequencing with a read length of 150 bases per read was used, providing a small overlap at the end of each read to facilitate assembly of the paired-end sequencing reads into a single fragment of ~ 290 bp representing the V4 region of the 16S rRNA gene. In practice, the reverse read was of very low quality, preventing assembly of the forward and reverse reads. Therefore, only quality-trimmed forward reads were used for all downstream analyses. The 16S sequence data have been deposited in the National Center for Biotechnology Information (NCBI) database under BioProject accession number PRJNA420089 and at the Sequence Read Archive (SRA) under accession IDs SRR6330835–SRR6330871. Reads were de-multiplexed according to the barcodes and trimmed of barcodes and adapters. Following the initial processing of the sequence data, sequences were combined, dereplicated, and aligned in mothur (version 1.36.1) [ 48 ] using the SILVA template (SSURef_NR99_123) [ 49 ]; subsequently, sequences were clustered into operational taxonomic units (OTUs) based on representative sequences using the UPARSE pipeline [ 50 ]. Initial filtering discarded OTUs containing fewer than five sequences. Libraries were normalized using metagenomeSeq’s cumulative sum scaling method [ 51 ] to prevent library size from acting as a confounding factor in the beta diversity analysis. Moreover, in addition to discarding singletons, OTUs that were observed fewer than seven times in the count data were also filtered out to avoid the inflation of any contaminants that might skew the diversity estimates.
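The cumulative sum scaling (CSS) normalization above is implemented in the R package metagenomeSeq; the sketch below re-implements the core idea in Python for illustration, under the simplifying assumption of a fixed 50th-percentile threshold (metagenomeSeq chooses the percentile adaptively), together with the low-count OTU filter described above. The toy OTU table is hypothetical.

    import numpy as np
    import pandas as pd

    def css_normalize(counts: pd.DataFrame, percentile: float = 0.5,
                      scale: float = 1000.0) -> pd.DataFrame:
        """Simplified cumulative sum scaling (CSS) normalization.

        counts: OTU table with OTUs as rows and samples as columns. For
        each sample, counts are divided by the sum of counts up to the
        chosen percentile of that sample's nonzero count distribution,
        then multiplied by a common scale factor.
        """
        normalized = counts.astype(float).copy()
        for sample in counts.columns:
            col = counts[sample]
            threshold = col[col > 0].quantile(percentile)
            s = col[col <= threshold].sum()  # cumulative sum up to threshold
            normalized[sample] = col / s * scale
        return normalized

    # Toy OTU table; drop OTUs observed fewer than 7 times overall,
    # mirroring the low-abundance filter described above.
    otus = pd.DataFrame(
        {"air_1": [120, 3, 40, 0], "tray_1": [80, 0, 55, 2]},
        index=["otu_Propionibacterium", "otu_Sphingomonas",
               "otu_Staphylococcus", "otu_rare"],
    )
    otus = otus[otus.sum(axis=1) >= 7]
    print(css_normalize(otus).round(1))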
What does flying in a commercial airliner have in common with working at the office or relaxing at home? According to a new study, the answer is the microbiome—the community of bacteria found in homes, offices and aircraft cabins. Believed to be the first to comprehensively assess the microbiome of aircraft, the study found that the bacterial communities accompanying airline passengers at 30,000 feet have much in common with the bacterial communities surrounding people in their homes and offices. Using advanced sequencing technology, researchers from the Georgia Institute of Technology and Emory University studied the bacteria found on three components of an airliner cabin that are commonly touched by passengers: tray tables, seat belt buckles and the handles of lavatory doors. They swabbed those items before and after 10 transcontinental flights and also sampled air in the rear of the cabin during flight. What they found was surprisingly unexciting. "Airline passengers should not be frightened by sensational stories about germs on a plane," said Vicki Stover Hertzberg, a professor in Emory University's Nell Hodgson Woodruff School of Nursing and a co-author of the study. "They should recognize that microbes are everywhere and that an airplane is no better and no worse than an office building, a subway car, home or a classroom. These environments all have microbiomes that look like places occupied by people." The results of the FlyHealthy study are reported June 6, 2018, in the journal Microbial Ecology. In March, the researchers reported on a separate part of the study that examined potential routes for transmitting certain respiratory viruses—such as the flu—on commercial flights. Given the unusual nature of an aircraft cabin, the researchers hadn't known what to expect from their microbiome study. On transcontinental flights, passengers spend four or five hours in close proximity breathing a very dry mix of outdoor air and recycled cabin air that has been passed through special filters, similar to those found in operating rooms. "There were reasons to believe that the communities of bacteria in an aircraft cabin might be different from those in other parts of the built environment, so it surprised me that what we found was very similar to what other researchers have found in homes and offices," said Howard Weiss, a professor in Georgia Tech's School of Mathematics and the study's corresponding author. "What we found was bacterial communities that were mostly derived from human skin, the human mouth—and some environmental bacteria." The sampling found significant variations from flight to flight, which is consistent with the differences other researchers have found among the cars of passenger trains, Weiss noted. Each aircraft seemed to have its own microbiome, but the researchers did not detect statistically significant differences between preflight and post-flight conditions on the flights studied. "We identified a core airplane microbiome—the genera that were present in every sample we studied," Weiss added. The core microbiome included the genera Propionibacterium, Burkholderia, Staphylococcus, and Streptococcus (oralis). Though the study revealed bacteria common to other parts of the built environment, Weiss still suggests travelers exercise reasonable caution. "I carry a bottle of hand sanitizer in my computer bag whenever I travel," said Weiss. "It's a good practice to wash or sanitize your hands, avoid touching your face, and get a flu shot every year."
This new information on the aircraft microbiome provides a baseline for further study, and could lead to improved techniques for maintaining healthy aircraft. "The finding that airplanes have their own unique microbiome should not be totally surprising since we have been exploring the unique microbiome of everything from humans to spacecraft to salt ponds in Australia. The study does have important implications for industrial cleaning and sterilization standards for airplanes," said Christopher Dupont, another co-author and an associate professor in the Microbial and Environmental Genomics Department at the J. Craig Venter Institute, which provided bioinformatics analysis of the study's data. The 229 samples obtained from the aircraft cabin testing were subjected to 16S rRNA sequencing, which was done at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The small amount of genetic material captured by the swabs and air sampling limited the level of detail the testing could provide to identifying genera of bacteria, Weiss said. The extensive bioinformatics, or sequence analysis, was carried out at the J. Craig Venter Institute in La Jolla, Calif. In the March 19 issue of the journal Proceedings of the National Academy of Sciences, the researchers reported on the results of another component of the FlyHealthy study that looked at potential transmission of respiratory viruses on aircraft. They found that an infectious passenger with influenza or other droplet-transmitted respiratory infection will most likely not transmit infection to passengers seated farther away than two seats laterally and one row in front or back on an aircraft. That portion of the study was designed to assess rates and routes of possible infectious disease transmission during flights, using a model that combines estimated infectivity and patterns of contact among aircraft passengers and crew members to determine likelihood of infection. FlyHealthy team members were assigned to monitor specific areas of the passenger cabin, developing information about contacts between passengers as they moved around. Among next steps, the researchers would like to study the microbiome of airport areas, especially the departure lounges where passengers congregate before boarding. They would also like to study long-haul international flights in which passengers spend more time together—and are more likely to move about the cabin.
10.1007/s00248-018-1191-3
Medicine
Researchers develop liquid biopsy test for pediatric solid tumors
Eirini Christodoulou et al, Combined low-pass whole genome and targeted sequencing in liquid biopsies for pediatric solid tumors, npj Precision Oncology (2023). DOI: 10.1038/s41698-023-00357-0 Journal information: npj Precision Oncology
https://dx.doi.org/10.1038/s41698-023-00357-0
https://medicalxpress.com/news/2023-03-liquid-biopsy-pediatric-solid-tumors.html
Abstract We designed a liquid biopsy (LB) platform employing low-pass whole genome sequencing (LP-WGS) and targeted sequencing of cell-free (cf) DNA from plasma to detect genome-wide copy number alterations (CNAs) and gene fusions in pediatric solid tumors. A total of 143 plasma samples were analyzed from 19 controls and 73 patients, including 44 bone or soft-tissue sarcomas and 12 renal, 10 germ cell, five hepatic, and two thyroid tumors. cfDNA was isolated from plasma collected at diagnosis, during and after therapy, and/or at relapse. Twenty-six of 37 (70%) patients enrolled at diagnosis without prior therapy (radiation, surgery, or chemotherapy) had circulating tumor DNA (ctDNA), based on the detection of CNAs from LP-WGS, including 18 of 27 (67%) patients with localized disease and eight of 10 (80%) patients with metastatic disease. None of the controls had detectable somatic CNAs. There was high concordance between CNAs identified by LP-WGS and CNAs detected by chromosomal microarray analysis in the matching tumors. Mutations identified in tumor samples with our next-generation sequencing (NGS) panel, OncoKids®, were also detected by LP-WGS of ctDNA in 14 of 26 plasma samples. Finally, we developed a hybridization-based capture panel to target EWSR1 and FOXO1 fusions from patients with Ewing sarcoma or alveolar rhabdomyosarcoma (ARMS), respectively. Fusions were detected in the plasma from 10 of 12 patients with Ewing sarcoma and in two of two patients with ARMS. Combined, these data demonstrate the clinical applicability of our LB platform to evaluate pediatric patients with a variety of solid tumors. Introduction Pediatric solid tumors encompass a heterogeneous group of rare malignancies that constitute approximately 40% of all childhood cancers 1 . Among them, soft-tissue tumors including embryonal tumors, germ cell tumors, and renal tumors account for approximately 10% of cases, whereas bone and soft-tissue sarcomas comprise ~20% of all cases 2 , 3 , 4 . The cytogenetic and molecular genetic characterization of pediatric solid tumors is used clinically to aid in diagnosis and determine prognosis, and in certain cases, guide treatment 5 . Identification of germline copy number alterations (CNAs) and sequence variants in cancer predisposition genes is also important for patient management, specifically to address the risk for second malignancies in the patient or cancer in a family member. Liquid biopsy (LB) assays for pediatric patients with solid tumors have the potential to transform patient care by providing a less invasive alternative to diagnostic biopsies for identifying genomic aberrations that can inform diagnosis, risk stratification, and therapeutic options, as well as enable earlier detection of disease progression compared with conventional radiographic imaging 6 , 7 , 8 , 9 , 10 , 11 . The clinical effectiveness of plasma-based LB approaches, however, depends on optimizing the detection of circulating tumor DNA (ctDNA) fragments and distinguishing them from cfDNA derived predominantly from hematopoietic cells 12 , 13 . Recent advances in LB test development have focused primarily on adult-type malignant epithelial tumors defined by a spectrum of recurrent dominant activating mutations 14 , 15 , 16 . Pediatric tumors, however, are more often characterized by CNAs, loss of tumor suppressor genes, epigenetic modifications, and large-scale structural rearrangements with breakpoints that are variable and located in the intronic regions of the genome 5 , 17 , 18 .
Furthermore, the incidence of pediatric solid tumors is low, and the number of histologic, genomic, and clinical subtypes is large, suggesting that the combined use of a pan-cancer assay and a target-specific approach may be better suited for clinical applications. Most importantly, the clinical development of LB assays for pediatric solid tumor patients has been limited by the requirement for relatively large volumes of blood, urine, or cerebrospinal fluid (CSF) to isolate sufficient amounts of cancer-derived nucleic acids for analysis. The feasibility of employing NGS and droplet digital PCR-based assays utilizing cfDNA derived from CSF, plasma, or the aqueous humor of the eye for pediatric central nervous system (CNS) tumors, solid tumors, or retinoblastoma, respectively, has recently been described 10 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 . In the diagnostic setting, ultra-low-pass WGS (ULP-WGS) analysis of plasma ctDNA was effectively used to distinguish malignant peripheral nerve sheath tumors (MPNST) from the benign lesion, plexiform neurofibroma (PN), in patients with the neurofibromatosis type 1 (NF1) cancer predisposition syndrome 28 . By using in silico enrichment of short cfDNA fragments and copy number analysis, MPNST samples were found to be enriched with ctDNA when compared to PN, and treatment response was correlated with the ctDNA-derived estimate of tumor burden 28 . To assess response to treatment, Liu et al. evaluated measurable residual disease (MRD) in 123 children with medulloblastoma using LP-WGS of cfDNA derived from CSF 10 . The presence of MRD in the CSF was found to be associated with a higher risk of relapse. The presence of ctDNA in plasma has been proposed as a prognostic biomarker in pediatric solid tumors (reviewed in ref. 7 ), and targeted detection of EWSR1 and FOXO1 fusions in Ewing sarcoma and alveolar rhabdomyosarcoma (ARMS) has been demonstrated in small cohorts of patients 29 , 30 . The Precision Medicine Program in Pediatric and Adolescent Patients with Recurrent Malignancies (MAPPYACTS) employed whole exome sequencing (WES) of tumor tissue and cfDNA from plasma to identify targeted therapies in patients with relapsed/recurrent non-CNS solid tumors 31 . Notably, 57% of the somatic SNVs were detected in both the tumor and the cfDNA, whereas 31% were specific to the tumor and 11% were specific to the cfDNA, reflecting tumor heterogeneity or possibly technical limitations of the assays 31 . Significantly more mutations were detected in patients with metastatic (66%) versus localized (47%) disease. The studies published to date have demonstrated the feasibility of using cfDNA from plasma to detect CNAs, sequence variants, and/or gene fusions, but have focused on disease-specific cohorts or patients with recurrent/refractory disease. The aim of the present study was to evaluate the potential clinical applicability of a plasma-based LB assay for newly diagnosed and relapsed pediatric patients with a variety of solid tumors. In this approach, LP-WGS was performed using cfDNA to detect CNAs and mutations. Targeted panel-based sequencing of cfDNA was also performed to identify specific gene fusions. Clinical validation of these assays will allow for their implementation in a prospective pan-cancer setting as an aid in diagnosis and to monitor response to therapy.
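As described in the Results below, sensitivity for detecting ctDNA is improved by in silico selection of short (90–150 bp) cfDNA fragments prior to copy number analysis. A minimal sketch of such a filter using pysam is shown here; the file names are hypothetical, and the study's actual pipeline is not specified at this level of detail.

    import pysam

    MIN_LEN, MAX_LEN = 90, 150  # ctDNA-enriched fragment size window (bp)

    with pysam.AlignmentFile("plasma_cfdna.bam", "rb") as bam, \
         pysam.AlignmentFile("plasma_cfdna.sizesel.bam", "wb",
                             template=bam) as out:
        kept = total = 0
        for read in bam:
            total += 1
            # For paired reads, the template length is the fragment size.
            if (read.is_proper_pair
                    and MIN_LEN <= abs(read.template_length) <= MAX_LEN):
                out.write(read)
                kept += 1
    print(f"kept {kept}/{total} reads in the {MIN_LEN}-{MAX_LEN} bp window")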
Results Patient clinical characteristics Patients were deemed eligible for this study if they had a newly diagnosed or recurrent malignant bone or soft-tissue sarcoma or a germ cell, hepatic, thyroid, or renal tumor. Written informed consent (and assent when appropriate) was obtained from all patients, their parents, or a legal guardian to participate in this study under a Children’s Hospital Los Angeles Institutional Review Board-approved protocol (CHLA-19-00146). Written informed consent was also obtained from non-oncologic controls, their parents, or a legal guardian to participate in this study under a Children’s Hospital Los Angeles Institutional Review Board-approved protocol (CHLA-19-00230). One hundred and forty-three samples from 73 eligible patients and 19 non-oncologic controls were analyzed. The patients were diagnosed as having sarcoma ( n = 44), renal tumor ( n = 12), hepatic tumor ( n = 5), malignant germ cell tumor ( n = 10), or thyroid carcinoma ( n = 2). The age of the patients ranged from six months to 28 years (median 12 years), and there were 37 males and 36 females (Table 1 ). The median age of the controls was 11 years (Supplemental Table 1 ). A median of two samples were analyzed per patient (range, 1–6). Forty-eight of the patients were enrolled at diagnosis, although 11 had received chemotherapy and/or undergone surgery prior to obtaining a blood sample for the study. Thirty-five of the patients presented with localized disease. Twenty-five patients were enrolled at the time of a recurrence (two with local and 23 with distant disease). Six patients had received chemotherapy and/or had undergone surgery for relapse prior to enrollment. None of the patients received prior radiation therapy. Two patients had surgery and chemotherapy prior to study enrollment (Table 1 and Supplemental Table 1 ). Samples were collected at the time of initial diagnosis or recurrence, during therapy, at the time of recurrence/progression, where applicable, and during follow-up (i.e., after treatment). Table 1 Patient characteristics. Full size table Detection of copy number alterations in plasma from pediatric solid tumor patients The presence of ctDNA was determined by LP-WGS, as evidenced by CNAs detected in plasma. To improve the sensitivity of detecting CNAs and to take advantage of the higher depth of coverage achieved, we used an in silico size selection approach to enrich for fragments between 90 and 150 bp in length (Supplemental Fig. 1 ) 32 . Positivity rates were determined by analyzing true diagnostic samples (defined as samples obtained prior to chemotherapy or definitive surgery) either at initial diagnosis or relapse at the time of enrollment (Table 2 ). There was detectable ctDNA by LP-WGS in 26 of 37 (70%) diagnostic samples and 10 of 19 (53%) relapse samples (Fig. 1 and Table 2 ). Eighteen of 27 (67%) patients with localized disease and eight of 10 (80%) patients with metastatic disease had CNAs detected by LP-WGS at diagnosis. CNAs were detectable in 10 of 19 (53%) patients enrolled at relapse with distant disease (Fig. 1 and Table 2 ). Neither of the two patients with local recurrences had CNAs detected in the LB. For the non-oncologic controls, 16 samples were negative and three demonstrated previously identified germline deletions and were therefore excluded from further analysis (Supplemental Table 1 ). Table 2 Detection of copy number alterations (CNA) in the plasma by histologic subtype. Full size table
Fig. 1: Copy number alteration detection in liquid biopsies of pediatric solid tumor patients. Summary of copy number alteration (CNA) detection and disease status at initial diagnosis and relapse in treatment-naïve patients (i.e., no definitive surgery, radiation, or chemotherapy). CNAs were detected in the plasma from 18 of 27 (67%) and eight of 10 (80%) newly diagnosed patients with localized disease and metastatic disease, respectively. CNAs were detected in the plasma of 10 of 19 (53%) patients with distant recurrence of the disease. Dark blue, CNA positive; light blue, CNA negative. Full size image Next, we tested whether characteristic copy number changes for each tumor type could be detected using cfDNA (Table 2 ). Marked genomic instability, characteristic of osteosarcoma, was readily detectable by LP-WGS in nine of 17 (53%) patients analyzed either at diagnosis or relapse (Supplemental Fig. 2a ). Nine of 13 (69%) Ewing sarcoma patients (Supplemental Fig. 2b ) had detectable CNAs, primarily gain of chromosome 8 (seven of 13 samples), a frequent abnormality observed in this disease 5 . CNAs were also detectable in three of four (75%) newly diagnosed renal tumor patients, and five of five (100%) patients at relapse with a renal tumor (Supplemental Fig. 2c ). Moreover, all three patients with newly diagnosed germ cell tumors and one of two (50%) patients with germ cell tumor at relapse (Supplemental Fig. 2d ) had detectable CNAs (Table 2 ). The most common genetic aberrations in germ cell tumors were chromosome 1 and 12p gains, characteristic of this disease 5 . Three of four (75%) of the patients with hepatic tumors, including two with hepatoblastoma and one with an embryonal sarcoma of the liver, had detectable CNAs at diagnosis (Supplemental Fig. 2e ). One patient with a malignant peripheral nerve sheath tumor (MPNST), one patient with ARMS, and one patient with synovial cell sarcoma had CNAs detected by LP-WGS at diagnosis (Supplemental Fig. 2f ). Neither of the two patients with thyroid tumors had detectable CNAs (Supplemental Table 1 ). CNAs were observed in two of 11 patients who had chemotherapy and/or surgery prior to enrollment at diagnosis, one with a Wilms tumor (patient 55) and one with a germ cell tumor (patient 71). Interestingly, patient 55 was newly diagnosed with a localized Wilms tumor and had undergone nephrectomy prior to enrollment. Two of six patients who received chemotherapy prior to enrollment at relapse, one with osteosarcoma (patient 10) and the other with Ewing sarcoma (patient 35), had detectable CNAs in the LB (Supplemental Table 1 ). The CNA profiles of the primary, relapse, or metastatic tumors generated from clinical CMA assays (CytoScanHD or OncoScan) were available for 59 patients, including patients who received therapy prior to enrollment (Supplemental Table 1 ). Forty-nine of the 59 (83%) had abnormal CNA profiles in the tumor tissue. Thirty-four of the 49 (69%) informative cases had abnormal CNA profiles in the plasma by LP-WGS. The CNAs identified by LP-WGS for 21 patients with high tumor fractions in ctDNA (>10%) were highly correlated with the CMA profiles of the matched tumor samples (Pearson’s correlation coefficient, r 2 > 0.7) (Supplemental Fig. 3a, b ).
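The tumor-plasma concordance described above is quantified as a Pearson correlation between matched copy number profiles. A minimal sketch follows, assuming the CMA and LP-WGS profiles have already been reduced to log2 ratios over a common set of genomic bins; the example arrays are illustrative, not patient data.

    import numpy as np
    from scipy.stats import pearsonr

    # Illustrative log2-ratio profiles over the same genomic bins:
    # one from the tumor CMA and one from plasma LP-WGS.
    tumor_log2 = np.array([0.58, 0.55, 0.00, -0.42, 0.01, 0.60, -0.38, 0.02])
    plasma_log2 = np.array([0.49, 0.52, 0.03, -0.35, -0.02, 0.55, -0.30, 0.05])

    r, p = pearsonr(tumor_log2, plasma_log2)
    r2 = r ** 2
    print(f"r^2 = {r2:.2f} (p = {p:.1e})")
    # Following the criterion used above, r^2 > 0.7 is taken to indicate
    # high concordance between the plasma and tumor CNA profiles.
    print("concordant" if r2 > 0.7 else "discordant")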
For example, LP-WGS from the plasma and CMA analysis of the primary tumor in a patient with metastatic Wilms tumor (patient 58) revealed nearly identical CN changes (r² value of 0.96) affecting multiple chromosomes (1q gain, 7p loss, 7q gain, 10 gain, 11q loss, 12 and 20 gain) (Supplemental Fig. 3c, d ). There was also a group of six patients with matching LP-WGS and CMA data but an r² value lower than 0.7. For example, case 2 was an osteosarcoma with complex chromosomal rearrangements affecting almost all chromosomes. The CNA profiles from CMA and LP-WGS of ctDNA had an r² of 0.5 (Supplemental Fig. 3e, f ). The majority of CNAs were the same in both profiles, suggesting that they originated from the same clone(s); thus the low r² value did not reflect the true biologic concordance. In a patient with metastatic anaplastic Wilms tumor (patient 56), the CNA profile from plasma-derived cfDNA was distinct from that of the primary tumor (r² < 0.7) (Fig. 2 ). The primary tumor demonstrated loss of heterozygosity (LOH) of the distal region of the short arm of chromosome 11, as well as gain of 12q, loss of 16q, and gain of 19q. The cfDNA profile at diagnosis showed a partial gain of chromosome 4p, loss of 4q, loss of 7p, gain of 9q, and loss of 11q, 17p, and 22, which was distinctly different from the primary tumor but consistent with the CNA profiles that were later seen in the two non-responding residual lung nodules resected following two cycles of neoadjuvant chemotherapy (Fig. 2a–c and Supplemental Table 1 ). Homozygosity of 11p was supported by the LP-WGS data as well as by CMA analysis of the lung metastases, confirming the common genetic origin of all three tumors. Mutation analysis of the LP-WGS data from cfDNA demonstrated the identical TP53 mutation that was detected by OncoKids® NGS analysis of the two metastases in the lungs, which was not present in the primary tumor. The plasma samples obtained after the two cycles of chemotherapy, prior to resection of the residual lung nodules, showed no detectable CNAs and no TP53 mutation (Fig. 2e ). Taken together, these findings suggest that the cfDNA profile more accurately reflected the molecular profile of the metastatic clone than that of the primary tumor. Fig. 2: Plasma CNA profile of a patient with Wilms tumor mimics resected residual lung nodules following neoadjuvant chemotherapy. a Chromosomal microarray analysis (CMA) plot of the primary renal tumor resected at the time of diagnosis from a male patient with Stage IV anaplastic Wilms tumor (patient 56) demonstrates LOH for a region of the short arm of chromosome 11, a gain of 12q, loss of 16q, and gain of 19q. b CNA detection at baseline in the liquid biopsy sample reveals distinct copy number (CN) changes when compared to the primary tumor: specifically, chromosome 1p loss, 4p gain, 4q loss, 5p gain, 7p loss, 9q gain, 11q loss, 12q gain, 14q loss, 17p loss, and loss of chromosomes 21 and 22. The calculated tumor fraction (TFx) was 53%. c, d CMA profiles of CN changes from the residual lung nodules (left and right) resected after two cycles of chemotherapy, with chromosome 4p gain, 4q loss, 7q gain (left lung mass), 11q loss, 17p loss, and chromosome 22 loss. e CNA profile in the plasma after two cycles of chemotherapy showing a flat profile. The calculated TFx was 0%. For each serial LB sample, the log2 ratio is plotted on the y-axis against chromosomes 1–22, X, and Y. Red indicates CN gain, blue indicates CN neutral, and green indicates CN loss.
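As an illustration of the r² concordance check used above (and detailed in Methods), the sketch below rasterizes segmented log2 ratios from LP-WGS and CMA into 1 Mb bins and computes the squared Pearson correlation. The segment tuples and chromosome sizes here are toy data for demonstration only, not values from the study.

```python
# Sketch: concordance of two segmented CN profiles via 1 Mb bins.
import numpy as np
from scipy.stats import pearsonr

BIN = 1_000_000  # 1 Mb bins, as in Methods

def rasterize(segments, chrom_sizes):
    """Paint segmented log2 ratios (chrom, start, end, log2) into 1 Mb bins."""
    tracks = []
    for chrom, size in sorted(chrom_sizes.items()):
        track = np.zeros(size // BIN + 1)
        for c, start, end, log2 in segments:
            if c == chrom:
                track[start // BIN : end // BIN + 1] = log2
        tracks.append(track)
    return np.concatenate(tracks)

def concordance_r2(seg_a, seg_b, chrom_sizes):
    r, _ = pearsonr(rasterize(seg_a, chrom_sizes), rasterize(seg_b, chrom_sizes))
    return r * r  # compare against the 0.7 threshold used in the study

# Toy example with one 10 Mb "chromosome" sharing a 1q-like gain:
sizes = {"1": 10_000_000}
lpwgs = [("1", 0, 4_000_000, 0.40), ("1", 4_000_000, 10_000_000, 0.0)]
cma = [("1", 0, 4_500_000, 0.35), ("1", 4_500_000, 10_000_000, 0.0)]
print(f"r2 = {concordance_r2(lpwgs, cma, sizes):.2f}")
```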
LP-WGS reveals the presence of ctDNA prior to clinical recurrence We then examined whether recurrence was detectable in the cfDNA of pediatric solid tumor patients. One patient with metastatic osteosarcoma (case 1) demonstrated gains in chromosomes 1q, 5p, 6p, 10p, 19p, and 21 in the cfDNA sample at diagnosis (Fig. 3a ), which were no longer present one month after completion of therapy (Fig. 3b ). The same pattern of ctDNA abnormalities, with an additional gain of chromosomes 7 and 8, was detected in all three follow-up samples collected between three and 12 months after therapy (Fig. 3c–e ). At 12 months, the patient had a symptomatic recurrence at a distant bony site which had not been evident on prior routine imaging surveillance. Six weeks after initiation of salvage chemotherapy, the abnormal clone was no longer detectable in the patient’s plasma (Fig. 3f ). Thus, plasma-derived cfDNA analysis revealed the presence of recurrent disease nine months prior to clinical detection by conventional imaging techniques. Fig. 3: Detection of recurrence in the liquid biopsy of a patient with osteosarcoma. a CNA profile in the plasma at the time of diagnosis in a patient with metastatic osteosarcoma (patient 1) showing copy number changes in chromosomes 1q, 5p, 6p, 19p, and 21. The calculated tumor fraction (TFx) was 6%. b CNA profile with no detectable alterations one month after completion of planned therapy, including surgical resection of the primary tumor, bilateral metastasectomy of residual lung lesions, and chemotherapy. The calculated TFx was 0%. c CNA detection at three months and d six months after completion of therapy matches the copy number changes observed at diagnosis, without any clinical/imaging evidence of disease recurrence. The calculated TFx was 6.5% and 20%, respectively. e CNA detection at 12 months after completion of therapy, at the time the patient presented with symptoms of recurrent disease confirmed by imaging at a distant bony site. The calculated TFx was 9%. f CNA profile with no detectable alterations at the end of therapy. The calculated TFx was 0%. For each serial sample, the estimated ichorCNA log2 ratio is plotted on the y-axis, and chromosomes 1–22, X, and Y are shown on the x-axis. Red indicates CN gain, blue indicates CN neutral, and green indicates CN loss. LP-WGS of cfDNA in plasma may aid diagnosis and monitoring response to treatment of pediatric patients with solid tumors Patient 69 presented with a large mediastinal mass and a large pleural effusion not amenable to biopsy due to the patient’s poor condition. A diagnosis of malignant germ cell tumor was made based on radiographic appearance and high alpha-fetoprotein (AFP) levels. The patient initially showed clinical improvement after initiating chemotherapy. However, after two cycles of standard chemotherapy, and in the setting of decreasing AFP tumor markers, he had disease progression and, despite a change in systemic therapy, died of disease. A germ cell tumor with somatic malignant transformation was suspected and confirmed at autopsy. LP-WGS of the plasma sample at diagnosis (Fig. 4a ), prior to initiation of chemotherapy, showed a highly complex pattern of whole-chromosome and segmental gains and losses, including a gain of 12p and amplification of a region of 12p that included KRAS , alterations that are characteristic of germ cell tumors. After two cycles of chemotherapy, CMA analysis of a pleural fluid sample (Fig. 4b ) and LP-WGS of plasma cfDNA (Fig. 4c ) showed the same complex genetic abnormalities detected at diagnosis.
The cfDNA sample at three months on therapy and four days prior to the patient’s death (Fig. 4d ), as well as the CMA analysis of the tumor at autopsy (Fig. 4e ), showed novel CNAs in addition to those detected in cfDNA at diagnosis. Fig. 4: Treatment response monitoring using liquid biopsy in a germ cell tumor case. a CNA profile of ctDNA from plasma at diagnosis (patient 69) revealing complex abnormalities affecting almost all chromosomes, with a notable 12p gain. The calculated TFx was 16%. b Chromosomal microarray analysis of a pleural fluid sample with similar complex abnormalities and high-level gain of 12p. c CNA profile of plasma LB one month later, after cycle two of chemotherapy. CNAs were detected across all chromosomes. The calculated TFx was 20%. d CNA profile of an LB sample three months after diagnosis and one month before death. All CNAs were still detectable in the plasma of this patient, with an additional high-level gain of 6p. The calculated TFx was 47%. e CMA plot at autopsy depicting highly complex CNAs affecting all chromosomes, but without the gain in 6p. For each serial sample, the estimated ichorCNA log2 ratio is plotted on the y-axis, and chromosomes 1–22, X, and Y are shown on the x-axis. Red indicates CN gain, blue indicates CN neutral, and green indicates CN loss. Comparative LP-WGS of cfDNA (Supplemental Table 1 ) was performed on diagnostic and follow-up blood samples for 21 patients. In over half of these patients ( n = 12), there were no detectable CNAs in follow-up specimens, suggesting a response to treatment. Detection of ctDNA through fragmentomics analysis The ability to detect somatic aberrations depends on the concentration of tumor-derived cfDNA fragments in the collected plasma, which may vary by cancer type, stage of the disease, and disease burden. Recent literature has shown that the proportion of cfDNA fragments in the size range of 20–150 bp is correlated with the ctDNA concentration in plasma as determined by mutation detection assays, and may be used to define high- and low-burden tumor types using blood 12 , 32 , 33 . Hence, we sorted the diagnostic plasma samples by the median proportion of fragments less than 150 bp. We found that certain tumor types, including osteosarcoma and germ cell tumors, had the lowest median values overall, whereas Ewing sarcoma and hepatic cancers had the highest values (Kruskal–Wallis p value = 0.023) (Supplemental Fig. 4a ). Interestingly, we observed that the ichorCNA-derived tumor fractions and cfDNA concentration values ranked very similarly to the short-fragment proportion medians, reaffirming our categorization of low- and high-tumor-burden cancer types (Kruskal–Wallis p values = 0.61 and 0.013, respectively) (Supplemental Fig. 4b, c ). Although these trends are suggestive, more samples are necessary to draw firm conclusions.
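The fragment-ratio comparison above reduces to computing, per sample, the proportion of short fragments and testing the per-type distributions with a Kruskal–Wallis test. A minimal sketch with synthetic fragment-size data (not the study's measurements) follows.

```python
# Sketch: short-fragment proportion per sample, compared across tumor types.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

def short_fraction(sizes, cutoff=150, upper=500):
    """Fraction of fragments <= cutoff bp among all fragments < upper bp."""
    sizes = np.asarray(sizes)
    kept = sizes[(sizes > 0) & (sizes < upper)]
    return float((kept <= cutoff).mean())

# Hypothetical fragment-size distributions for three tumor types
# (synthetic normals; five "samples" per type):
groups = {
    "osteosarcoma": [rng.normal(167, 20, 5000) for _ in range(5)],
    "Ewing sarcoma": [rng.normal(150, 25, 5000) for _ in range(5)],
    "hepatic": [rng.normal(148, 25, 5000) for _ in range(5)],
}
per_type = {t: [short_fraction(s) for s in reps] for t, reps in groups.items()}
stat, p = kruskal(*per_type.values())
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```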
Mutation detection in plasma cfDNA In order to assess the sensitivity of our assay for detecting ctDNA from patients with pediatric solid tumors, we examined the LP-WGS data for somatic mutations. Targeted sequencing results were available from 58 primary or metastatic tumors profiled with our clinical targeted NGS panel, OncoKids® 34 . Mutations in clinically significant cancer genes were detected in 26 patients (Fig. 5 ). We looked for the presence of these mutations in the LP-WGS data from cfDNA samples using base counts generated from BAM files and confirmed the presence of the variant allele by visual inspection using IGV. A total of 16 pathogenic or likely pathogenic mutations identified by OncoKids® were also detected in 14 of our patients using the LP-WGS data (Fig. 5a ). One germ cell tumor (patient 69), described above, was profiled with our OncoKids® NGS panel and had a confirmed KRAS mutation, c.182A>G (Fig. 5b ). We were able to detect this mutation by LP-WGS in the plasma LB, which revealed a variant allele frequency (VAF) similar to that reported by OncoKids®, albeit at only 4.4x depth of coverage. The VAF increased from 12 to 73% in the cfDNA during progression, suggesting clonal expansion and evolution of the disease. This increase in VAF was also correlated with an increase in tumor purity estimates from ichorCNA, suggesting that mutations detected by LP-WGS could provide an alternative approach for monitoring clonal dynamics. Similarly, the tumor sample from a patient with metastatic osteosarcoma (patient 11) was positive for a TP53 missense variant (c.743G>A) of strong clinical significance, at a VAF of 28%. Mutational analysis of the LP-WGS data allowed us to detect this same mutation at a VAF of 33%. TP53 and KRAS were among the most frequently mutated genes identified from both the LP-WGS ctDNA data and the OncoKids® tumor data across all pediatric solid tumor cases. Further studies are required to determine whether this approach will be clinically feasible on a prospective basis, at least for some patients, or whether a companion targeted capture NGS-based panel approach will be required. Fig. 5: Mutation detection using low-pass whole genome sequencing data from liquid biopsies. a Summary of all variants identified with the OncoKids NGS panel from the tumor that were inspected for presence in the matched LP-WGS data from the LB ( n = 26 patients). (1) and (2) are used to refer to two different variants identified in the same patient. Presence in LP-WGS is shown in green; absence is shown in gray. VAF is shown as the fraction of reads carrying the variant allele over the total allele count. b IGV screenshot showing a KRAS c.182A>G detection in the LB from a germ cell tumor patient (wild-type allele = T, mutant allele = C).
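Checking a known tumor variant in the LP-WGS reads amounts to counting the bases observed at the variant position, which is what the base-count review described above does. The sketch below uses pysam's count_coverage for this; the BAM path, contig, coordinate, and alternate allele are placeholders for illustration, not the patients' actual variants.

```python
# Sketch: base counts and VAF at a single variant position in a cfDNA BAM,
# analogous to the GetBaseCountsMultisample/IGV review described in Methods.
import pysam

def allele_counts(bam_path, contig, pos0):
    """Return {base: count} at 0-based position pos0."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        a, c, g, t = bam.count_coverage(contig, pos0, pos0 + 1)
        return dict(zip("ACGT", (a[0], c[0], g[0], t[0])))

counts = allele_counts("cfDNA.bam", "12", 25_398_283)  # placeholder locus
depth = sum(counts.values())
vaf = counts["C"] / depth if depth else 0.0            # placeholder alt allele
print(f"depth = {depth}, VAF = {vaf:.2f}")
```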
Detection of translocations by hybridization-based capture and sequencing of cfDNA Chromosomal translocations contribute to tumorigenesis in many pediatric cancers. To better understand the clinical applicability of cfDNA assays, we developed a hybridization capture-based panel targeting the most common translocation partners in two pediatric cancers, specifically EWSR1 for Ewing sarcoma and FOXO1 for ARMS 5 . We targeted the most common breakpoints in these tumors, spanning exons 8–11 and introns 7–12 within EWSR1 , and intron 1 within FOXO1 5 . We evaluated the panel using tumor DNA samples from five patients (four with EWSR1 fusions and one with a FOXO1 fusion, as previously identified by OncoKids® 34 ). We detected EWSR1 fusions in three patients with Ewing sarcoma as well as a FOXO1 fusion in a patient with rhabdomyosarcoma, confirming the specificity of our assay. In one patient with an EWSR1 fusion confirmed by OncoKids®, the fusion was not detected by our assay in the primary tumor. However, the fusion was detected with the same panel using cfDNA, suggesting that intra-tumor heterogeneity in the primary tumor could be a possible reason for not detecting this fusion. We then tested 19 cfDNA samples from 17 patients, including 12 with EWSR1 fusions and two with FOXO1 fusions identified by OncoKids® 34 , and three negative controls. We were able to identify EWSR1 fusions in 10 of the 12 patients with Ewing sarcoma (Supplemental Fig. 5a ), including two patients who had a CNA-negative LP-WGS profile at diagnosis, one with localized (patient 28) and one with metastatic disease (patient 26) (Supplemental Fig. 5b, c ). FOXO1 fusions were identified in the plasma of both patients with ARMS, one of whom was negative for CNAs by LP-WGS (patient 38). The controls were negative for EWSR1 and FOXO1 fusions. Combined, these data suggest that the detection of gene fusions using cfDNA is possible in patients with pediatric solid tumors. Discussion Establishing the clinical applicability of plasma-based liquid biopsies in pediatric cancer patients, as compared with adult patients, has been challenging due to the limited sample volumes available from infants and young children, small cohort sizes for individual tumor types, and the varied nature of the genomic alterations that characterize pediatric solid tumors. Nevertheless, several studies have described the feasibility of NGS-based approaches in pediatric solid tumors 19 , 21 , 22 , 23 , 24 , 30 , 31 , 35 , 36 , 37 . For example, in a biobanking study of 84 infants with neuroblastoma, Lodrini et al. showed that as little as 1–2 ml of blood plasma, CSF, or urine yielded sufficient cfDNA to detect neuroblastoma-specific markers for disease monitoring and, ultimately, clinical implementation of LB assays 23 . However, whether this was a disease-specific finding, and whether cfDNA yield is correlated with location, histology, or tumor burden, has yet to be determined. Shulman et al. recently reported that ctDNA can be detected in plasma samples from ~50% of pediatric Ewing sarcoma and osteosarcoma patients 35 , and Shah et al. 29 successfully detected EWSR1 or FOXO1 fusions in patients with Ewing sarcoma or ARMS, respectively. However, similar to MAPPYACTS 20 , it is worth noting that many of the large cohort studies included patients with recurrent and/or metastatic disease, in whom the probability of detecting ctDNA may be increased compared to patients with primary and/or localized disease. We previously reported the development of an LB assay for retinoblastoma using as little as 100 microliters of the aqueous humor of the eye and ultra-low-pass WGS to detect RB1 mutations and RB-associated CNAs 25 , 26 , 27 . Our goal for the present study was to determine whether plasma-based LB and LP-WGS could be used to aid in the clinical diagnosis and prognosis of pediatric solid tumors in general, to monitor response to therapy, and to demonstrate early evidence of relapse. In this study, we demonstrated high sensitivity in detecting ctDNA across different pediatric solid tumor types. The overall CNA detection rate by LP-WGS was ~70% (26 of 37 patients) when considering only those patients with no systemic therapy or surgery prior to enrollment. Notably, 18 of 27 (67%) patients enrolled at diagnosis with localized disease had abnormal copy number profiles. To our knowledge, this is one of the few studies to show the applicability of ctDNA detection in pediatric solid tumor patients with localized disease.
Moreover, even with prior resection of the primary tumor or chemotherapy, patients enrolled at diagnosis may still have sufficient ctDNA in their plasma to be detected by LP-WGS. The majority of osteosarcoma patients profiled here had localized disease and detectable CNAs in cfDNA at diagnosis. The absence of CNAs in follow-up samples was consistent with clinical response to therapy. For example, as shown in Fig. 3 , patient 1 was diagnosed with metastatic osteosarcoma and had a negative LP-WGS CN profile one month off therapy, consistent with a molecular response to treatment. However, he had CNAs in the plasma as early as three months after the completion of therapy, nine months prior to clinical detection of recurrence at a distant bony site. Our data thus suggest that serial LB of patients may provide a sensitive means of detecting early response and relapse. The potential application of this approach as an aid in primary diagnosis is illustrated by patient 69, who had a germ cell tumor. Three serial LP-WGS profiles of plasma-derived ctDNA, together with CMA of the pleural fluid, showed a complex pattern of genetic abnormalities involving most of the chromosomes, including copy number gains of chromosomes 1 and 12 that are characteristic of germ cell tumors. The use of LP-WGS of cfDNA, in conjunction with other clinical biomarkers such as AFP, may ultimately provide a non-invasive means of molecular diagnosis and of evaluating clonal evolution in pediatric patients with solid tumors. To improve sensitivity in detecting ctDNA, fragmentomic analysis is an effective method for distinguishing ctDNA from non-cancer-derived cfDNA fragments in different tumor types such as glioma 32 , 33 , renal and pancreatic cancer 32 , as well as soft-tissue sarcoma 28 . By looking at the proportion of short fragments (<150 bp), we show that pediatric tumor types may also be classified by tumor burden (Supplemental Fig. 4 ). Hepatic tumors and Ewing sarcoma appear to have the highest tumor burden, with a high proportion of short ctDNA-derived fragments less than 150 bp. Similar patterns were evident when comparing tumor fraction estimates from ichorCNA 30 and cfDNA yield per mL between the different cancer types. Combined, we show that ctDNA detectability may vary by tumor type, since some types of cancer may carry a higher proportion of ctDNA fragments. This hypothesis has been tested recently by a quantitative approach to measuring mutant ctDNA fragments in adult cancer patients 12 . Neuroblastoma and prostate cancers seemed to have the highest proportion of KRAS mutant ctDNA fragments when compared to medulloblastomas and gliomas 12 . Since our analysis was limited by the small cohort size and varying patterns between tumor types, a larger cohort would be needed to confirm our conclusions on the applicability of fragment size analysis in detecting ctDNA and distinguishing tumor types in pediatric cancers. In a proof-of-concept retrospective analysis, we examined the LP-WGS data for 26 patients in whom clinically significant mutations had been identified by our clinical NGS-based OncoKids® assay of the tumor tissue. We identified 16 hotspot mutations (in 14 patients) in a variety of oncogenes and tumor suppressor genes using cfDNA, including PIK3CA, TP53, and CTNNB1 , demonstrating that our LP-WGS assay was sensitive enough to detect the low but significant number of mutations present in pediatric solid tumors 17 .
This approach, however, may be limited by the ctDNA fraction in plasma and the low sequencing depth of LP-WGS. Moreover, our fusion capture panel was able to detect fusions with high sensitivity (12 of 14) in Ewing sarcoma and ARMS using cfDNA samples (Supplemental Fig. 5 ). Two Ewing sarcoma cases were negative for fusions in plasma and had no detectable ctDNA by LP-WGS, suggesting a low level of circulating tumor DNA as the likely explanation for these negative results. Additionally, three of the fusion-positive patients, two with an EWSR1 and one with a FOXO1 fusion, had negative CNA profiles by LP-WGS, underscoring the importance of comprehensive characterization of aberrations by both a targeted panel and LP-WGS. It is worth noting that we were limited by sample size for RMS, since this type of tumor and FOXO1 translocations are rare 5 . Targeted NGS panels with a higher depth of coverage are in development and are likely to have a higher sensitivity for identifying the most common gene mutations and fusions in both CNS and pediatric solid tumors. Determination of the clinical applicability of LB assays such as the ones described in the present study will require validating their use in larger cohorts of patients with pediatric solid tumors. Serial studies using NGS-based LB assays will lead to a greater understanding of how they can be employed to monitor patients from diagnosis through treatment and recurrence. Methods Patient sample collection and processing Peripheral blood (1–10 ml) was collected in EDTA tubes and processed within a median of 1 h and 30 min by centrifugation at 2000 × g for 10 min to separate plasma and buffy coat. The supernatant was removed, and a second centrifugation step was performed at 16,000 × g for 10 min to remove cell debris. All centrifugation steps were performed at 4 °C. The plasma was frozen until DNA isolation. cfDNA was extracted from 500 µl to 9 ml (median 3 ml) plasma samples using the MagMAX™ Cell-Free Total Nucleic Acid Isolation Kit according to the manufacturer’s instructions (Thermo Fisher Scientific, Waltham, MA). The abundance and quality of cfDNA in the extracted samples were assessed using Cell-free DNA ScreenTape analysis (Agilent, Santa Clara, CA) and a Quantus™ Fluorometer (Promega, Madison, WI). Library preparation and sequencing Whole genome sequencing libraries were constructed with the xGen Prism DNA Library Prep Kit according to the manufacturer’s instructions, applying 13 cycles of PCR with the addition of fixed single-stranded Unique Dual Molecular Identifier indexes (UDMIs) (Integrated DNA Technologies, Coralville, IA) and using 5 ng of cfDNA input. All libraries were paired-end sequenced on an Illumina NextSeq 500 or an Illumina HiSeq 4000 (San Diego, CA) at an average depth of coverage of 4.4x, which is considered low coverage. Copy number alteration and tumor fraction analysis Reads were aligned to the 1000 Genomes phase 2 reference genome (hs37d5), which includes build GRCh37 and decoy sequences ( ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz ), using the Illumina Dragen 3.7.3 aligner. The ichorCNA algorithm was subsequently applied using the depth of coverage obtained from 500 kb bins across the genome 38 .
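The input to ichorCNA is essentially a vector of read counts in fixed 500 kb bins. The sketch below computes such raw bin counts (expressed as log2 ratios to the median bin) for one chromosome; the GC/mappability correction and HMM segmentation are left to ichorCNA itself, and the file name is a placeholder.

```python
# Sketch: 500 kb binned coverage from a sorted, indexed BAM, as the raw
# input that feeds ichorCNA. Normalization beyond a median ratio is omitted.
import math
import pysam

BIN = 500_000  # bin size used for the ichorCNA input

def binned_log2(bam_path, contig, contig_len):
    counts = [0] * (contig_len // BIN + 1)
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig):
            if not (read.is_unmapped or read.is_duplicate or read.is_secondary):
                counts[read.reference_start // BIN] += 1
    nonzero = sorted(c for c in counts if c > 0)
    med = nonzero[len(nonzero) // 2]
    return [math.log2(c / med) if c > 0 else float("nan") for c in counts]

log2_ratios = binned_log2("plasma_cfDNA.bam", "1", 249_250_621)
```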
When available, LP-WGS data from the LB samples were compared with the CNA profiles of the matching primary tumors generated from CMA assays (CytoScanHD or OncoScan, Thermo Fisher Scientific, Waltham, MA). Briefly, all 59 CytoScan or OncoScan arrays performed on the tumor tissue were processed using ASCAT and NxClinical (BioDiscovery, El Segundo, CA) with the default settings, except that the Piecewise Constant Fitting (PCF) penalty was increased to 95 to account for FFPE degradation of samples. Subsequently, the segmented calls from LP-WGS and CMA were binned into 1 Mb regions across the entire genome. Pearson’s correlation across all autosomal bins was calculated for each pair of CMA and LP-WGS samples with purity greater than 10%. All sample pairs with a Pearson’s correlation coefficient (r²) less than 0.7 were manually examined. Fragment size, tumor concentration, and fraction analysis Insert size distributions across the entire genome were calculated using Illumina Dragen 3.7.3. The ratio of the number of fragments with sizes less than or equal to 150 bp to those greater than 150 bp but less than 500 bp was calculated for all samples. Tumor fraction was estimated for all samples using the ichorCNA pipeline as previously described in ref. 38 . Briefly, the algorithm uses a Hidden Markov Model (HMM) to predict copy number segments and to estimate the circulating tumor content from the total cfDNA sequenced. For each of the categories (i.e., proportion of fragments <150 bp, tumor fraction, and tumor concentration), samples were classified by tumor type, and a Kruskal–Wallis test was performed between cancer types. Only samples collected at the time of diagnosis were used for this analysis. A p value <0.01 was considered statistically significant. Mutational analysis and targeted sequencing for fusion detection Targeted sequencing results from primary or metastatic tumors profiled with our clinical NGS panel, OncoKids®, were reviewed for 26 cases 34 . The LP-WGS data from the LB samples were assessed to determine whether the same mutations could be detected. For mutations identified by OncoKids®, we used GetBaseCountsMultisample to verify the presence of a mutation in the LP-WGS data. We further verified the presence of mutations by LP-WGS using the Integrative Genomics Viewer (IGV) 39 . For the detection of EWSR1 and FOXO1 fusions in Ewing sarcoma and ARMS patients, respectively, we designed a custom panel to test in tumor DNA using a hybridization-based capture method (Twist Bioscience, South San Francisco, CA). Since UMIs only help with correcting PCR and chemistry artifacts when detecting point mutations, we trimmed the UMIs using the Dragen 3.7.3 aligner and realigned the reads to build 37 of the human reference genome. This increased the sensitivity for detecting fusions. We used Illumina Manta (version 1.6.0) in the “targeted” mode to detect structural variants. All statistical analysis was carried out using R version 4.0.4. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The ability to share primary sequence data was not included in the informed consent signed by patients. A data use agreement is required to access the data. Sequencing data from patients who are deceased have been deposited under accession EGAS00001006913. Code availability We used publicly available packages in R version 4.0.3 for our analysis. The code used to generate figures is available upon request.
Pediatric solid tumors make up approximately 40% of all childhood cancers. While pediatric cancer is rare, children can develop a wide range of tumor types, located in different parts of the body, which can make the differential diagnosis challenging. Investigators at Children's Hospital Los Angeles have developed a liquid biopsy for solid tumors that has the potential to aid in reaching a specific diagnosis when surgery or a tissue biopsy is not feasible. The study findings were published in the journal npj Precision Oncology. "This is one of the first clinically validated liquid biopsy tests to be launched at a pediatric academic medical center," says Jaclyn Biegel, Ph.D., Chief of Genomic Medicine and Director of the Center for Personalized Medicine at CHLA. "We created a test that may be helpful in making a diagnosis, determining prognosis, and potentially identifying an effective therapy for children with solid tumors," says Fariba Navid, MD, Medical Director of Clinical Research in the Cancer and Blood Disease Institute at CHLA. Dr. Navid and Dr. Biegel are co-senior authors of this study. A specific test for pediatric tumors is required because the genetics of tumors that affect adults differ from those in children. Adult tumors tend to be caused by mutations—sequence-based changes in a gene— so most liquid biopsy tests have been developed specifically to identify these mutations. However, pediatric tumors arising from mutations are less common. In children, copy number changes—losing or having extra copies of one or more genes—or rearrangements of genes that result in gene fusions, are more characteristic. For their research study, the CHLA team combined a technique known as Low-Pass Whole Genome Sequencing (LP-WGS) with targeted sequencing of cell-free DNA from plasma to detect copy number changes, as well as mutations and gene fusions, that are characteristic of pediatric solid tumors. An important feature of the study was that it required a much smaller volume of sample than is required for liquid biopsy studies in adults. Since an infant or young child has a smaller blood volume, the assays needed to be scaled down to accommodate this difference. To create the test, the researchers collaborated with clinical teams and research investigators at CHLA including Jesse Berry, MD, Director of Ocular Oncology and CHLA's Retinoblastoma Program, as well as investigators involved in Oncology, Neurosurgery and Pathology and Laboratory Medicine. Leo Mascarenhas, MD, MS, Deputy Director of the Cancer and Blood Disease Institute at CHLA was also involved in the design and support of the project. The first version of the test, launched in Nov. 2022, evaluates chromosomal copy number changes in blood samples, cerebrospinal fluid and the aqueous humor of the eye to aid in the clinical diagnosis for patients with solid tumors, brain tumors and retinoblastoma, respectively. The next version of the clinical assay, available in about six months, will include detection of mutations and gene fusions. The liquid biopsy-based genetic tests join the CHLA-developed OncoKids cancer panel, a next-generation sequencing-based assay designed to detect changes in DNA or RNA that are associated with pediatric leukemias, brain and solid tumors; the CHLA Cancer Predisposition Panel; RNAseq for cancer, a transcriptome-based assay using RNA sequencing; VMD4Kids, a panel for vascular and mosaic disorders; as well as methylation array-based profiling for pediatric brain tumors.
10.1038/s41698-023-00357-0
Biology
Microcensus in bacteria: Bacillus subtilis can determine proportions of different groups within a mixed population
Heiko Babel et al. Ratiometric population sensing by a pump-probe signaling system in Bacillus subtilis, Nature Communications (2020). DOI: 10.1038/s41467-020-14840-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-14840-w
https://phys.org/news/2020-03-microcensus-bacteria-bacillus-subtilis-proportions.html
Abstract Communication by means of diffusible signaling molecules facilitates higher-level organization of cellular populations. Gram-positive bacteria frequently use signaling peptides, which are either detected at the cell surface or ‘probed’ by intracellular receptors after being pumped into the cytoplasm. While the former type is used to monitor cell density, the functions of pump-probe networks are less clear. Here we show that pump-probe networks can, in principle, perform different tasks and mediate quorum-sensing, chronometric and ratiometric control. We characterize the properties of the prototypical PhrA-RapA system in Bacillus subtilis using FRET. We find that changes in extracellular PhrA concentrations are tracked rather poorly; instead, cells accumulate and strongly amplify the signal in a dose-dependent manner. This suggests that the PhrA-RapA system, and others like it, have evolved to sense changes in the composition of heterogeneous populations and infer the fraction of signal-producing cells in a mixed population to coordinate cellular behaviors. Introduction Cell-to-cell communication by diffusible signaling molecules is a central component of higher-level organization of populations in time and space. Specific signaling peptides are frequently used for this purpose, both in eukaryotes and prokaryotes. In Gram-positive bacteria, these signals are either detected at the cell surface by histidine kinases or they are sensed inside the cell by RRNPP-type receptors after uptake by oligopeptide permeases 1 , 2 . When signals bind reversibly to receptors on the cell surface, the extracellular concentration of signaling molecules is the primary source of information, and the affinity of the receptor for its ligand controls signal transduction. In contrast, when signaling molecules are irreversibly pumped into the cell to activate an intracellular receptor, the extracellular signal concentration may not correlate with the concentrations sensed by the receptor. Thus, it is far from obvious what kinds of information cells can extract with the help of these networks to coordinate population-level behavior. There has been tremendous progress in elucidating the molecular organization of the RRNPP signaling networks in recent years. RRNPP systems are widespread among Firmicutes and regulate traits which are commonly controlled by bacterial communication, such as cell differentiation, various forms of horizontal gene transfer, and the synthesis of (exo)factors that shape the interactions of these bacteria with other microbes and their hosts 2 , 3 . Binding of the signaling peptide to the receptor induces a conformational change that alters the activity of the receptor’s output domain(s), which, depending on the receptor subtype, is either a DNA-binding domain or a protein-interaction domain or both 4 , 5 , 6 . Thus, some systems control gene expression directly, others indirectly, and a few do so in both ways. However, all systems share a common feature—namely, that the signals are produced by an export–import circuit. Cells express precursor peptides, which are subsequently secreted and cleaved by different proteases to produce the mature signaling peptides. These signals are then actively pumped into the cells by the conserved oligopeptide permease Opp 7 , 8 , an ABC-type transporter that hydrolyzes ATP to drive the import of short oligopeptides 9 . 
Thus, RRNPP signaling networks represent prototypes for “pump–probe” networks, since signals are first “pumped” into the cell before they are “probed” (interpreted) by the respective RRNPP-type receptors. The systems-level functions that are performed by these signaling networks are still unclear. They are commonly thought to facilitate “quorum sensing” in a bacterial population, i.e., the population-wide coordination of gene expression in response to changes in cell density 10 , 11 . Theoretically, they could indeed function as sensitive devices for cell-density monitoring 12 , but whether RRNPP signaling networks actually implement a “quorum-sensing” type of regulation has been questioned 13 , 14 . They have also been hypothesized to function as timers for (multi-)cellular development 15 , 16 , 17 , to coordinate the development of cellular subpopulations 18 , 19 and, under certain conditions, signaling could be self-directed and act in cis rather than in trans 20 . It is indeed conceivable that the more complex pump–probe network architecture allows for different types of extracellular information processing. However, this has not been systematically investigated. One of the best-characterized pump–probe networks is the PhrA-RapA-Spo0F signaling pathway in Bacillus subtilis . The Rap proteins represent evolutionarily ancient RRNPP-type receptors 21 that are found in many Bacilli 22 . RapA and several other Rap homologs control the initiation of endospore formation by modulating the flux of phosphoryl groups through the sporulation phosphorelay to the master regulator Spo0A 23 . RapA binds to the response regulator Spo0F and stimulates the auto-dephosphorylation of Spo0F 23 , 24 . PhrA, the hydrophilic linear pentapeptide ARNQT that is derived from the phrA gene, binds to the RapA receptor at an allosteric site 14 , 15 , 25 . This induces a conformational change, which alters the interaction of RapA with its response-regulator target Spo0F 4 , 24 , 25 , 26 . The rapA-phrA operon is highly regulated 27 , 28 , 29 , 30 , 31 , 32 and is activated under both non-sporulating 30 , 31 and sporulating 16 , 27 , 28 , 33 conditions, indicating that signaling takes place in different situations. Interestingly, under some conditions the operon is expressed heterogeneously across the population 18 (a phenomenon that has been observed for other rap-phr signaling systems 16 ), which might point to a signaling function beyond classical quorum sensing 34 . Specifically, in a heterogeneous population, the composition of the population itself might be a relevant parameter for cellular decision-making. Here we ask what regulatory functions pump–probe networks serve. To answer this question, we employ a combination of theoretical modeling of generic pump–probe networks and specific experiments on the PhrA-RapA-Spo0F pathway under non-sporulating conditions. We use Förster (fluorescence) resonance energy transfer (FRET) to monitor changes in the interaction of the RapA receptor with its response-regulator target Spo0F upon extracellular stimulation of B. subtilis cells with PhrA. We show that, in theory, pump–probe networks can exhibit different sensory modes that could mediate diverse functions, including quorum sensing, as well as chronometrically and ratiometrically controlled modes of regulation. The experimentally determined signal processing characteristics of the PhrA-RapA-Spo0F pathway suggest that the system could have evolved to sense the fraction of signal-producing cells in a heterogeneous population.
We therefore propose that pump–probe networks could play an important regulatory function in coordinating decision-making in mixed populations. Results Pump–probe networks could serve different functions The characteristic pump–probe architecture that RRNPP-type networks employ for information processing distinguishes them from other bacterial communication systems. The defining features of a pump–probe network are that cells “pump” extracellular signaling molecules into the cytoplasm, effectively converting the extracellular into an intracellular signal, which is then “probed” (interpreted) by the appropriate intracellular receptor and transduced into a cellular output (Fig. 1a ). We first asked whether pump–probe networks could perform the regulatory functions that have been attributed to them (Fig. 1b ). Fig. 1: Schematics of a pump–probe network and its proposed regulatory functions. a Schematics of the pump–probe model. Signals (red circles) are pumped into the cell, where they are probed (bound) by an intracellular receptor and transduced into an output. The conversion of extracellular to intracellular signal concentrations depends on signal transport by the pump, as described by Michaelis–Menten kinetics, and on signal degradation, as depicted by the scissors symbol. Intracellular signal transduction from the receptor to the output is modeled by a Hill function. See “Methods” for details. b Regulatory functions performed by pump–probe networks. From left to right: Quorum-sensing control: The output O is regulated in accordance with changes in cell density ρ. Ratiometric control: The output is regulated in accordance with changes in the composition of the population. Only a subset of cells (red, ρ_p) produces the signal, while all cells (ρ_c) in the population can take it up. The output is regulated by changes in the fraction of producing cells f = ρ_p/ρ_c. Chronometric control: Cells switch the output after a delay time that depends (mainly) on cellular parameters, independent of the social context. With the help of a theoretical model described in detail in “Methods”, we studied how information about the social context of a cell is encoded in the extracellular signal concentration C_e, and then converted into an intracellular signal C_i that is transduced into a cellular output O. In brief, we consider an exponentially growing population of cells in which a fraction f of cells produces the signal at rate π, while all cells take up the signal at a rate v. Indeed, our model suggests that pump–probe networks could enable the receptor to read out different kinds of information from the environment and perform different control functions, including quorum sensing, chronometric and ratiometric control (Fig. 2 ). Fig. 2: Pump–probe networks could execute various regulatory functions. The results from simulations of the pump–probe model. A population of cells, suspended in a volume V_e = 10 mL, grows exponentially at a rate µ = 0.55 h⁻¹. The initial population size is given by the inoculum N_0 = α × OD_600, where α = V_e (mL) × 1.19 × 10⁸ cells mL⁻¹. Starting at t = 0, a fraction of cells f produces the signal at a rate π. The results are shown for an initial OD_600 = 0.0125 (solid lines) and OD_600 = 0.0063 (dotted lines), and for a homogeneous (f = 1) and two heterogeneous populations (f = 0.5, 0.25; line color set by the color bar).
Active uptake (“pumping”) of the signals into the cells converts the extracellular signal C_e into an intracellular signal concentration C_i, which is probed by the receptor and transduced into an output O. a Quorum-sensing control: C_e and C_i continue to rise as the population grows. The output tracks changes in the density of signal-producing cells, and varies both with the inoculum and with f. Parameters: K_M = 1.40 mM, v_max = 0.31 amol min⁻¹, π = 1 amol min⁻¹, λ_e = 0.1 min⁻¹, λ_i = 0.1 min⁻¹; n = 1, EC_50 = 0.37 µM. b Ratiometric control: Despite continuous population growth, the signal concentrations and the output approach a steady state that depends on f. Parameters: K_M = 140 nM, v_max = 0.31 amol min⁻¹, π = 1 zmol min⁻¹, λ_e = 0 min⁻¹, λ_i = 0.1 min⁻¹, n = 1, EC_50 = 8.5 µM. c Chronometric control: C_e rises and rapidly saturates uptake capacity. As a result, the accumulation of C_i depends mainly on cellular parameters (i.e., transport v_max and degradation λ_i). A switch-like receptor output enables the response to be delayed for a certain time, which is (largely) independent of the inoculum or f, but is tunable by cellular parameters, e.g., the peptide-degradation rate λ_i. Parameters: K_M = 140 nM, v_max = 0.31 amol min⁻¹, π = 10 amol min⁻¹, λ_e = 0 min⁻¹, λ_i = 0.1 min⁻¹ or 0.0001 min⁻¹; n = 10, EC_50 = 29 mM. For example, in a population that produces more signaling molecules than can be removed by the cells (i.e., there is an effective net production rate π_eff = fπ − v_max > 0), the extracellular concentration tracks changes in the population density (C_e ~ ρ). If the extracellular concentration is proportionally converted into an intracellular signal, the receptor can read out information about the population dynamics, and the output can be regulated in accordance with changes in cell density (quorum-sensing control, Fig. 2a ). On the other hand, if the capacity for signal uptake exceeds signal production (i.e., π_eff < 0), the extracellular concentration reaches a steady state that depends only on the fraction of signal-producing cells f (fractional sensing), and not on cell densities. In this case, the output will be able to respond to changes in the population structure f (ratiometric control, Fig. 2b ). Finally, under conditions where the signal accumulates so quickly as to saturate signal import, the intracellular concentration approaches a steady state at a rate that is (largely) independent of the social context of the cell and depends on cellular parameters only. As a result, a cell could delay an output for a specific time τ_delay (chronometric control, Fig. 2c ). We thus conclude that pump–probe networks could perform various control functions, depending on network parameters and operating conditions. However, the network parameters of real systems are not well defined.
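To make the generic model concrete, the sketch below integrates a minimal version of the pump–probe equations using the parameter values quoted for the quorum-sensing panel of Fig. 2a. The cell volume used to convert uptake flux into an intracellular concentration is our own assumption (the paper's Methods, not reproduced here, define the exact conversion), so this illustrates the qualitative behavior rather than reproducing the published curves.

```python
# Minimal numerical sketch of the pump-probe model (cf. Fig. 2a).
import numpy as np
from scipy.integrate import solve_ivp

MU     = 0.55 / 60            # growth rate (1/min)
F      = 1.0                  # fraction of signal-producing cells
PI     = 1e-18                # production (mol/min per cell; 1 amol/min)
VMAX   = 0.31e-18             # max uptake (mol/min per cell)
KM     = 1.40e-3              # transport K_M (M)
LAM_E, LAM_I = 0.1, 0.1       # degradation rates (1/min)
N_HILL, EC50 = 1.0, 0.37e-6   # Hill output parameters
V_EXT  = 0.01                 # culture volume (L; 10 mL)
V_CELL = 4e-15                # assumed cell volume (L) -- not from the paper
N0     = 0.0125 * 1.19e8 * 10  # inoculum (cells) at OD600 = 0.0125

def rhs(t, y):
    c_e, c_i = y
    n_cells = N0 * np.exp(MU * t)
    uptake = VMAX * c_e / (c_e + KM)  # Michaelis-Menten import per cell
    dc_e = (F * n_cells * PI - n_cells * uptake) / V_EXT - LAM_E * c_e
    dc_i = uptake / V_CELL - (LAM_I + MU) * c_i  # degradation + dilution
    return [dc_e, dc_i]

sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], max_step=1.0)
output = sol.y[1] ** N_HILL / (EC50 ** N_HILL + sol.y[1] ** N_HILL)
print(f"receptor output after 10 h: {output[-1]:.2f}")
```

With these quorum-sensing parameters, both concentrations and the output keep rising as the population grows; lowering K_M and the production rate pushes the same equations into the ratiometric regime of Fig. 2b.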
PhrA alters FRET between CFP-RapA and YFP-Spo0F in B. subtilis To experimentally investigate signal processing by the PhrA-RapA-Spo0F pathway, we utilized a genetically encoded RapA-Spo0F FRET reporter (Fig. 3a ). As shown below, this reporter provides a direct readout of PhrA-induced changes in the RapA-Spo0F signaling complex, at least under non-sporulating conditions, and thus of signaling activity within the cell. FRET, which relies on the distance- and orientation-dependent transfer of energy from an excited donor fluorophore to an acceptor fluorophore, has emerged as a powerful tool with which to study the function of bacterial signaling networks by monitoring protein–protein interactions in vivo 35 . When signaling alters the interaction between two fluorescently labeled proteins, changes in intermolecular FRET provide a specific, fast, and quantitative readout of signaling activity. We measured FRET using acceptor photobleaching, where photoinactivation of the acceptor suppresses quenching of the fluorescence emitted by the donor, in proportion to the level of FRET observed prior to bleaching (Fig. 3b ). This approach provides an absolute measure of the FRET efficiency, as the percentage change in donor fluorescence upon bleaching (see “Methods” Eq. ( 5 )), which facilitates direct data comparisons across experiments. Fig. 3: FRET assay used to study PhrA information processing in Bacillus subtilis . a Scheme of a reporter cell that expresses CFP-RapA and Spo0F-YFP from an IPTG-inducible promoter, in a strain lacking the endogenous signaling components. When RapA and Spo0F form a signaling complex, intermolecular FRET between CFP and YFP occurs. Upon uptake of PhrA or other peptides by the oligopeptide permease Opp, PhrA binds to RapA and thus perturbs the interaction of RapA with Spo0F, thereby altering FRET. b Representative CFP trajectories from acceptor-photobleaching experiments with populations of unstimulated reporter cells (black line) and reporter cells stimulated with 10 µM PhrA for 6 min (red line). A FRET-negative control expressing free YFP and CFP is included for reference (gray line). YFP was bleached for 20 s to abolish FRET, which resulted in a corresponding increase in CFP emission. The FRET efficiency was calculated from the ratio of CFP emission before (CFP_pre) and after (CFP_post) photobleaching (blue circles), as determined by linear fits to the data (dashed lines). A relative increase in CFP emission that exceeds 0.6% is scored as significant, i.e., as indicating a genuine level of donor–acceptor FRET. c Barplot of FRET efficiencies derived from peptide stimulation experiments, from left to right: a FRET-negative control (BIB138), wild-type reporter cells (BIB625), a kinA kinB spo0B mutant (BIB1993), and an oppA mutant (BIB1563). Error bars: mean ± SD. Numbers of independent experiments: n_e = 160, 112, and 21 for unstimulated, PhrA-treated and Scr-PhrA-treated (a scrambled version of the PhrA pentapeptide) cells in strain BIB625, respectively; n_e = 5 (BIB138), n_e = 6 (BIB1993), and n_e = 8 (BIB1563). Unpaired t test: n.s.: P > 0.05, *** P < 0.001. See Supplementary Data 1 for further statistical information. Source data are provided as a Source Data file. We constructed a FRET reporter using CFP-RapA and Spo0F-YFP (Fig. 3a ) in a B. subtilis strain that lacked the endogenous signaling genes ( ∆rapA-phrA ∆spo0F ). The cells can be induced to express these stable (Supplementary Fig. 1a ) and at least partially functional (Supplementary Fig. 1b–g ) fusion proteins from an ectopic locus in the chromosome. Reporter cells were induced in S7 50 media and grown to a moderate cell density (optical density at 600 nm (OD_600) ~1.6). Under these conditions, the PhrA-RapA signaling pathway is active in wild-type cells (as judged from activation of P rapA-phrA 30 , 31 and the presence of PhrA in culture supernatants, as shown in Supplementary Fig. 8 ), yet sporulation is inhibited.
Acceptor-photobleaching experiments were performed on populations following a procedure previously established for E. coli 36 , 37 , in which the integral CFP fluorescence of several hundred reporter cells is measured using a photomultiplier tube (Supplementary Fig. 2a ). For the FRET reporter strain, an increase in CFP fluorescence indicative of FRET was observed upon bleaching of the acceptor (Fig. 3b , black line). In contrast, essentially no change in fluorescence was observed in a negative control expressing free cytoplasmic monomeric YFP and CFP (Fig. 3b , gray line; Supplementary Fig. 2b ), or in a strain expressing only CFP-RapA (Supplementary Fig. 2b ), suggesting that any nonspecific contributions to FRET, e.g., from photoconversion or molecular crowding, are negligible. The FRET efficiency in unstimulated reporter cells was (11.2 ± 0.6)%, where ± indicates the standard deviation. For comparison, a positive control (genetically fused YFP and CFP) yielded (27.7 ± 2.1)%, while the spurious signal from the negative control was (0.04 ± 0.3)% (Fig. 3c , first and second bar; Supplementary Fig. 2b ). We then stimulated a population of reporter cells by adding 10 µM PhrA to the medium—a concentration that is sufficient to complement the sporulation defect of a signal-deficient phrA mutant 14 , 15 and to induce sporulation in the FRET reporter strain (Supplementary Fig. 1g ). This resulted in a strong decrease in the FRET efficiency to (4.7 ± 0.6)% (Fig. 3b , red line and Fig. 3c , third bar). In contrast, addition of a sequence-scrambled pentapeptide (Scr-PhrA) did not alter FRET (Fig. 3c , fourth bar). Since the PhrA-RapA-Spo0F pathway is embedded in a complex signaling network, we analyzed whether FRET was affected by cross talk with other Phr signaling systems and whether changes in FRET arise as an indirect consequence of perturbations to phospho-signaling via the sporulation phosphorelay. Reporter cells did not respond to any other, non-cognate Phr peptide (Supplementary Fig. 3a ), nor did deletion of kinA , kinB , and the phosphotransferase spo0B affect the FRET efficiency of stimulated or unstimulated cells (Fig. 3c , fifth and sixth bar; Supplementary Fig. 3b ). Together, these experiments show that the FRET reporter provides a specific and quantitative readout of signaling activity in the PhrA-RapA-Spo0F signaling pathway.
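Operationally, the acceptor-photobleaching readout reduces to two linear fits of the donor trace, extrapolated to the bleach window, with the efficiency given by the relative increase in CFP emission. The sketch below illustrates this on a synthetic trace; it follows the description of Fig. 3b above, not the exact Eq. (5) of the Methods, which is not reproduced here.

```python
# Sketch: FRET efficiency from a donor (CFP) trace bleached between t0 and t1.
import numpy as np

def fret_efficiency(t, cfp, t0, t1):
    """Percent gain in donor emission after acceptor bleaching."""
    pre, post = t < t0, t > t1
    cfp_pre = np.polyval(np.polyfit(t[pre], cfp[pre], 1), t0)
    cfp_post = np.polyval(np.polyfit(t[post], cfp[post], 1), t1)
    return 100.0 * (cfp_post - cfp_pre) / cfp_post

# Synthetic trace: CFP emission steps up ~11% after bleaching at t = 50-70 s.
t = np.arange(0.0, 120.0, 1.0)
cfp = np.where(t < 50, 1.000, 1.125) + 0.002 * np.random.randn(t.size)
print(f"FRET efficiency ~ {fret_efficiency(t, cfp, 50, 70):.1f} %")
```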
Cells respond quickly, but activated cells recover slowly To characterize the dynamic properties of signal processing, we studied the response to the addition and removal of extracellular PhrA. To this end, we applied a non-saturating PhrA stimulus and measured FRET by removing the cells from the medium at specific time points t_s after stimulation. We concomitantly monitored the depletion of PhrA from the medium by exposing fresh reporter cells to the spent supernatant for a defined period of time (Fig. 4a ; see “Methods” for details). Within a few minutes of exposure to medium containing the stimulus (10 nM), the FRET efficiency decreased to (7.3 ± 0.7)%, while the extracellular PhrA levels declined below the detection limit of the bioassay (Fig. 4b ). Notably, after extracellular PhrA had been depleted, the intracellular response was nevertheless sustained. Upon removal of external PhrA by resuspending the stimulated cells in signal-deprived medium and incubating them at 37 °C in microtubes, cells retained their activated state for 3 h (Supplementary Fig. 4a ). However, cells also failed to grow under these conditions. We thus performed another stimulation experiment by adding PhrA directly to a shake-flask culture. As before, the growing cells rapidly responded to the addition of PhrA (10 nM), resulting in a sharp drop in the FRET efficiency. This was followed by a slow increase in FRET over time (Fig. 4c ; Supplementary Fig. 4b ). Deletion of the pepF gene, which encodes an intracellular peptidase known to be capable of degrading PhrA when overexpressed 38 , had little effect on the FRET response (Supplementary Fig. 5a ), suggesting that other peptidases may contribute to signal degradation. Also, in cell-free supernatants, the extracellular PhrA (10 nM) was found to remain stable for hours (Supplementary Fig. 5b ). Fig. 4: PhrA activates cells quickly, but cells recover slowly. a Experimental setup used to characterize the signaling dynamics. Cells are exposed to a PhrA stimulus (red color) and take up the signal. Time t_s refers to the time between addition of the stimulus and removal of cells from the medium. Stimulated cells and supernatants are separated to analyze the FRET response of the stimulated cells and to measure the amount of PhrA remaining in the supernatants with the help of a bioassay (see “Methods”). b Activation dynamics upon stimulation with 10 nM PhrA. In stimulated cells (red), FRET decreases as a function of t_s, while FRET values of the bioassay cells concomitantly rise to approach the levels seen in the unstimulated control, indicating that PhrA is depleted from the medium (yellow). Data: mean ± SD from n_e = 9. Red and yellow lines (shaded areas) depict the best fits (95% confidence intervals) to the pump–probe model (see Fig. 6 ). c Deactivation dynamics: FRET response to a non-saturating stimulus (10 nM) that was added to a growing population of cells at an optical density of OD_600 = 1.6. Cells were removed from the culture at times t_s to measure FRET. The red line (shaded area) depicts the best fit (95% confidence intervals) to an extended pump–probe model that considers the effects of both population growth and intracellular signal degradation (see Fig. 6 ). Inset: Corresponding OD_600 curve. Data: mean ± SD, n_e = 4. Black line: fit to exponential population growth with rate μ = 0.58 h⁻¹. Source data for all panels are provided as a Source Data file. Competition for substrate uptake inhibits PhrA signaling When we deleted the gene for the oligopeptide-binding protein OppA, which delivers the peptides to the Opp transporter 7 , 39 , there was virtually no response, as expected (Fig. 3c , seventh bar). We then measured the response starting from different initial extracellular concentrations C_e in the absence and presence of a competing peptide. With increasing signal concentrations, FRET gradually decreased and then levelled out at (4.7 ± 0.6)% (Fig. 5a ). Scr-PhrA strongly inhibited the PhrA-mediated response when the competing peptide was present in excess (Fig. 5b ). However, adding Scr-PhrA to cells prior to stimulation with PhrA had no effect, indicating that the peptide competes with PhrA for uptake by Opp, but not for RapA receptor binding (Fig. 5c ). Fig. 5: Competition for substrate uptake inhibits PhrA signaling. a Response to increasing PhrA levels (t_s = 6 min). Data: mean ± SD from n_e = 10. The curves (shaded areas) depict the best fits (95% confidence intervals) to the pump–probe model (see Fig. 6 ). The top axis denotes the intracellular PhrA concentrations as estimated by the model.
b Competition for peptide uptake inhibits PhrA signaling. FRET response curves of reporter cells stimulated with 10 µM and 10 nM PhrA in the presence of the indicated concentrations of a competing peptide, scrambled PhrA (t_s = 6 min). Data: mean ± SD from n_e = 5. Lines and shaded areas denote the best fit and the 95% confidence interval of the pump–probe model. c Barplot of FRET values derived from a sequential stimulation experiment, including individual data points. Prior exposure of cells to 20 µM Scr-PhrA did not alter the response to 10 nM PhrA. Data: mean ± SD from n_e = 4. Unpaired t test: P = 0.21 (n.s.). See Supplementary Data 1 for further statistical information. Source data for all panels are provided as a Source Data file. Signal processing is well described by the pump–probe model The signal processing characteristics of the PhrA-RapA-Spo0F pathway are jointly determined by its signal conversion and signal transduction properties. With the help of the pump–probe model, one should be able to disentangle the two and learn how extracellular signals are converted into an intracellular signal and how cytosolic PhrA then affects the RapA-Spo0F signaling complex (Fig. 6a ). In order to quantitatively describe our data, we investigated signal processing using the pump–probe model, assuming that the FRET response is governed by the intracellular PhrA concentration and described by a Hill function. We included substrate competition, assuming that the Opp pump transports all pentapeptides with the same efficiency (see “Methods” for details). Furthermore, on the short timescale of our activation experiments in Figs. 4b and 5 , cell growth and signal degradation are negligible (Supplementary Figs. 4 and 5 ). For the long-term response dynamics of growing cells (Fig. 4c ), both intracellular signal loss from dilution due to cell growth at the experimentally determined rate (inset of Fig. 4c ) and linear signal degradation were explicitly modeled. We then fitted our data set to the pump–probe model, which resulted in excellent agreement (all lines in Figs. 4 and 5 ) given the parameters summarized in Table 1 . Fig. 6: PhrA signal processing is well described by the pump–probe model. a Experimental–theoretical workflow. Experimental data were fitted to the pump–probe model to infer the parameters that determine extra- to intracellular signal conversion and the transduction of the signal into a FRET output. The inferred parameters were then used to make testable predictions and validate the model experimentally. See the lines in Figs. 4 and 5 for best fits using the parameters in Table 1 . b Model validation: model-based predictions with 95% confidence intervals (lines and shaded area) and experimental results for a stimulation experiment with 30 nM PhrA. The response dynamics upon stimulation (red) are shown, together with the depletion of PhrA from the supernatants as measured by the bioassay (black). Data: mean ± SD from n_e = 3. Source data are provided as a Source Data file. Table 1 Parameters of the PhrA signaling network in Bacillus subtilis . To further demonstrate that the simple pump–probe model adequately describes PhrA signal processing, we predicted and experimentally verified the extra- and intracellular response dynamics to a 30 nM PhrA stimulus, which resulted in very good agreement (shaded areas depict 95% confidence intervals of the model prediction in Fig. 6b ).
The response to a higher 100 nM stimulus was also captured satisfactorily (Supplementary Fig. 6). We thus conclude that PhrA signal processing is well described by the pump–probe model. Signal conversion results in strong signal amplification Based on the inferred parameters and their 95% confidence intervals listed in Table 1, we can provide more detail on the signal-transduction process. First, our model suggests that FRET between RapA and Spo0F changes in a graded manner in response to increasing concentrations of intracellular PhrA and then saturates at a finite level. Thus, signal transduction is well approximated by a simple hyperbolic response function (best fit: n = 1.4), indicating that there is little, if any, cooperativity in signal transduction. Second, the inferred EC_50 (best fit: 38 µM) suggests that relatively high intracellular signal concentrations are required for signal transduction. To suppress FRET to half-maximum, an extracellular signal concentration (nM) roughly 1,000-fold lower than the intracellular EC_50 (µM) was sufficient (Fig. 5b). This strong signal amplification upon extra- to intracellular signal conversion is the consequence of active and efficient signal transport by the Opp pump, which allows the accumulation of PhrA in the small cell volume against an external concentration gradient. Finally, the inferred characteristic timescale for signal processing, τ = 1/(μ + λ_i) ≈ 50 min, is relatively long, with signal degradation (λ_i) and dilution due to cell growth (μ) contributing roughly equally. As a consequence, the intracellular concentration tracks fluctuations in extracellular concentrations on timescales faster than τ rather poorly. Instead, cells integrate extracellular signals over the characteristic signal processing time τ, or over shorter times if all signals are depleted from the medium before then (Fig. 4c). Populations of cells process PhrA in a dose-dependent manner When cells compete with each other for signal uptake, signal conversion depends on the cell density in addition to the initial extracellular concentration. Both factors can be combined into a single environmental parameter, the signal dose, defined as the number of available signaling molecules per cell. We thus investigated to what extent the response to PhrA stimulation depends on each factor alone (Fig. 7a, b) and on their combined effect as described by the dose. To this end, we kept the dose at a fixed level while varying the extracellular signal concentration and the cell density in proportion. The response curves obtained with different doses were clearly distinct, especially with respect to the final degree of FRET inhibition achieved (Fig. 7c). According to the model, response curves corresponding to the same dose will all converge on the same final output, although the kinetics varies. Thus, we next predicted the maximal degree of inhibition as a function of the extracellular concentration and the cell density from the pump–probe model, and again found excellent agreement with our experimental data (Fig. 7d). Indeed, all our data collapsed onto a single dose–response curve that describes how FRET is inhibited as a function of the number of extracellular signaling molecules per cell, and the data fit the prediction from the model very well (Fig. 7e). The dose for a half-maximal response is D_50 = 2.4 × 10^4 PhrA molecules. 
This indicates that “dose” is in fact the dominant environmental factor that determines signal processing under our conditions. Fig. 7: The response to PhrA depends on the signal dose. a The FRET response of the population varies with cell density. Time course of stimulation with 10 nM PhrA for three different cell densities. Data: mean ± SD from n_e = 3. b The FRET response varies with the extracellular concentration when the cell density is fixed (OD_600 ∼ 1.6). Data: mean ± SD from n_e = 3. c Response curves for the indicated extracellular concentrations and cell densities. Proportional changes in both the number of cells and the extracellular concentration, i.e., exposure of cells to the same dose, lead to very similar response curves, while response curves from different doses are distinct. d Contour plot of the lowest level of FRET as a function of cell density and the extracellular concentration of PhrA, as predicted by the pump–probe model, showing that variations in FRET require a change in the signal dose D. Filled circles denote experimental data; the color of each circle indicates the measured FRET value according to the scale of the color bar at the bottom. e Dose–response curve: all of the minimal FRET values obtained from stimulation experiments with different extracellular concentrations collapse onto the dose–response curve predicted by the pump–probe model with D_50 = 2.4 × 10^4 molecules. The color of each circle depicts the extracellular concentration used in the experiment according to the scale of the color bar on the right. Source data for all panels are provided as a Source Data file. The PhrA-RapA-Spo0F system is capable of ratio sensing The dose-dependent response provides evidence for the capacity for ratiometric output control in a heterogeneous population, in which all cells take up peptides but only a fraction f of the population produces the signal. During the signal integration time τ, each “producer” cell synthesizes N_out = πτ PhrA molecules, where π is the PhrA production rate. Since only a fraction f of the population produces the signal, the number of available PhrA molecules per cell, i.e., the dose, is given by D = fN_out. Therefore, changes in the fraction f of producers result in a change in the signaling output, provided the intracellular pathway is not saturated. At steady state, signal production and signal uptake balance each other; thus one can estimate D from the number of signaling molecules taken up by each cell during the signal integration time, i.e., D ∼ N_in = τ v_max C_e/(C_e + K_M). We next estimated the PhrA concentration in the supernatant of wild-type cells with the help of a sensitized bioassay and found that it is present at sub-nM concentrations, C_e ∼ 0.4 nM (Supplementary Fig. 7). Hence, using the inferred parameters from Table 1, we estimate D = 2.7 × 10^4, which is comparable with the dose required to induce a half-maximal response, D_50 = 2.4 × 10^4, in our stimulation experiments (a numerical check of this estimate is sketched below). We thus conclude that the parameters of the PhrA signaling system are properly balanced to facilitate ratiometric output control in heterogeneous populations. 
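To make the arithmetic behind this estimate explicit, here is a back-of-the-envelope check in MATLAB (the language used for the study's analyses, see Methods). It is our illustration, not part of the published analysis; all numerical values are those quoted in the text.

```matlab
% Back-of-the-envelope check of the dose estimate D ~ N_in.
mu  = 0.58/60;            % growth rate (min^-1), from the fit in Fig. 4c
li  = 0.58/60;            % degradation rate (min^-1); the text states that
                          % degradation and dilution contribute roughly equally
tau = 1/(mu + li);        % signal-processing time, ~52 min (quoted: ~50 min)
vmax = 1.9e5;             % maximal import rate (molecules min^-1 cell^-1)
KM   = 140;               % Michaelis constant of uptake (nM)
Ce   = 0.4;               % PhrA level in wild-type supernatants (nM)
r = vmax*Ce/(Ce + KM);    % physiological import rate, ~540 molecules min^-1
D = tau*r                 % ~2.7e4-2.8e4 molecules per cell, comparable to D_50
```

With these inputs the estimate lands within rounding of the quoted D = 2.7 × 10^4, i.e., right at the half-maximal dose D_50.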
Discussion Cellular signaling systems based on RRNPP receptors have emerged as promising targets for manipulating the behavior of bacterial populations in diverse biotechnological and biomedical settings 2. Our systems-level analysis of signal processing provides key insights into both the functioning of pump–probe networks and the signal conversion and transduction properties of the prototypical rapA-phrA system in B. subtilis. By utilizing a novel FRET reporter, we could quantitatively study important features of signal processing, which enabled us to infer network parameters with the help of the pump–probe model. The model fits the experimental data very well (Figs. 4 and 5) and has predictive power (Fig. 6b). For high signal concentrations (100 nM), additional effects could come into play (Supplementary Fig. 6), but these should have little relevance because PhrA levels in supernatants were orders of magnitude lower (Supplementary Fig. 7). However, we add the caveat that our model assumes that FRET changes as a function of the intracellular PhrA concentration, which implies that receptor kinetics is fast relative to all other processes. This is a reasonable assumption, given that the activation dynamics is limited by signal uptake and the K_d (∼µM) for Rap-Phr interactions 40 is relatively high. Finally, the parameter values inferred from our model are generally in good agreement with previous data on Opp-based transport in B. subtilis 41 and on PhrA signal transduction 25, further increasing our confidence in the model. The combined experimental and theoretical approach allows us to provide further insights into the individual processes that govern pump–probe signaling. At maximal speed, the Opp pump imports v_max = 1.9 × 10^5 molecules min^-1; thus, cells clear peptides from their environment very efficiently. However, the cellular signal import rate under physiological conditions is much lower, ~500 PhrA molecules min^-1, owing to the low PhrA concentrations in the medium. Thus, the peptide-binding protein OppA must have sufficient affinity to facilitate signaling at low peptide concentrations. Indeed, the inferred effective K_M of peptide transport (140 nM) is about two orders of magnitude lower, i.e., the affinity is higher, than that of OppA from Lactococcus lactis, which feeds on peptides in protein-rich environments 42, 43. In the presence of other peptides, competition for peptide uptake slowed down PhrA accumulation in B. subtilis and thereby interfered with signaling. Notably, peptide-rich media have an inhibitory effect on RRNPP signaling 17, 21, 44, and in Enterococcus faecalis signal import does not occur via OppA but with the help of a signal-specific peptide-binding protein, PgrZ 45, presumably to avoid such competition and to minimize signal interference. After signals are pumped into the cell, the intracellular signal concentration is probed by RRNPP-type receptors. Raps belong to a subclass of RRNPP receptors termed switchable allosteric modulator proteins (SAMPs) 46, because they modulate the activity of response regulators; Phr peptides switch this interaction by binding to the receptor in a 1:1 stoichiometry at an allosteric site 4, 40, and structural studies suggest competitive allosteric inhibition as the dominant mode of signal transduction 4, 40. While the analysis of receptor function in vitro is very advanced, functional in vivo analyses have lagged behind. Thus far, receptor function has been assessed rather indirectly, using gene expression 4 or cell differentiation readouts 38. 
Moreover, the responses are typically reported as a function of the (initial) extracellular concentration, which may not correlate well with the intracellular signal concentration that is detected by the receptor. We have met these challenges by combining the pump–probe model with FRET measurements that directly probe the interaction of the receptor with its response-regulator target, which allowed us to infer the intracellular signal concentrations (Fig. 5b). Indeed, our data suggest that extra- and intracellular concentrations are very different. PhrA signaling operates at very low extracellular signal concentrations (sub-nM), although the intracellular signal transduction exhibits limited sensitivity, as indicated by the fairly high EC_50 (µM). Given the relatively low (nM–µM) extracellular signal concentrations measured in culture supernatants in other systems 41, 47 and the high K_I values determined for other RRNPP receptors 25, 40, such strong signal amplification upon extra- to intracellular signal conversion is probably quite common. Our data also provide the first insights into how the RapA receptor functions in the bacterial cell under the non-sporulating conditions in which PhrA signaling is active in wild-type populations. In line with expectations based on the 1:1 stoichiometry of PhrA binding to the RapA receptor, the FRET response mediated by the PhrA-RapA-Spo0F pathway is well described by a hyperbolic response function. If PhrA acted to dissociate the RapA-Spo0F complex, it should reduce, and eventually abolish, FRET. Indeed, PhrA inhibited FRET, but a substantial FRET signal above the negative control remained at high signal concentrations. This residual FRET is not an artefact of population measurements using acceptor photobleaching, as E-FRET measurements 48 on single cells also confirmed that all cells respond to PhrA but retain residual FRET (Supplementary Fig. 8). Hence, the receptor–regulator complexes may not (fully) dissociate; instead, a stable ternary complex might form. In support of this inference, the in vitro action of PhrA on RapA is best described by a partial noncompetitive inhibition mechanism 25, which implies that PhrA-RapA-Spo0F complexes contribute to signaling. Thus, upon activation, Raps may remain (partially) bound to their (unphosphorylated) response-regulator targets 24, which could fine-tune the cellular response to receptor stimulation 46. Since our experiments were conducted under non-sporulating conditions, FRET likely reports on the interaction of RapA with unphosphorylated Spo0F. In vitro data suggest that phosphorylation of Spo0F alters and stabilizes the interaction with RapA 24. This could affect FRET under sporulating conditions and should be investigated in the future. In bacteria, there are numerous examples of different network architectures that utilize diffusible signaling molecules to regulate cellular behaviors 12, 49. Precisely what kind of information cells can extract with the help of these sensory networks remains under debate 50, 51. The most popular interpretation is that they are utilized for cell-density sensing 52. In the case of RRNPP-based signaling, the receptors are commonly referred to as “quorum-sensing” receptors 4, 6, 53. However, the experimental evidence that these systems mediate a cell-density-dependent type of regulation is rather weak, not only for the Rap systems in B. subtilis 14, 54 but also for others 17. 
However, quorum sensing is in principle only one of several control functions that pump–probe networks could perform, as suggested by our model. It is thus possible that this or other types of regulation occur in other systems or under different conditions. For example, upon the transition from non-sporulating to sporulating conditions, the inferred parameter values for the PhrA-RapA network might change, since all signaling components (and their interactions) are regulated by a complex network 55. In principle, this could switch the network’s control function, e.g., from ratio sensing to chronometric or quorum-sensing control. Notably, at least under some physiological conditions, Phr signaling may function to coordinate cellular decision-making in the context of a heterogeneous population. Population heterogeneity could be phenotypic, as in the case of PhrA signaling under sporulation conditions, where the signaling system is upregulated only in the subpopulation of cells that delays sporulation initiation and continues to divide 18, 19, or genetic, as in the case of Phr_LS20 signaling, where Phr_LS20 is expressed from a plasmid to regulate conjugation to other cells 56. How signaling contributes to decision-making in heterogeneous microbial populations is very much understudied. In yeast, the pheromone pathway mediates sensing of the sex ratio to control cellular investment in mating 57. Our experiments show that B. subtilis processes PhrA signals in a dose-dependent manner: the signaling output is determined by the level of extracellular signal per cell. This is a strong indication that pump–probe networks are capable of mediating fractional (ratiometric) population sensing in mixed populations without any additional regulation. Fractional population sensing requires that cells that do not produce the signal nevertheless take it up. This is likely to be the case for signals that rely on nonspecific transport by the conserved oligopeptide permease Opp. For pump–probe signaling networks, the capacity for fractional sensing is thus built into the basic network architecture. This contrasts with the case in yeast, where ratio sensing is performed by a membrane-bound receptor signaling pathway that requires specific additional regulatory features to perform this function 57. We therefore propose that fractional sensing could be a widespread function of oligopeptide-based signaling that involves Opp pumps, serving to coordinate the behavior of bacteria in mixed populations. It could also be exploited by selfish genetic elements (including plasmids 56 and integrative and conjugative elements (ICEs) 58) and viruses 47 that are known to carry peptide-based pump–probe signaling circuits in their respective genomes. Methods Mathematical model for pump–probe networks We consider a population of cells that are homogeneously distributed in a volume V_e. The population grows exponentially at a rate µ, N_c = N_c^0 e^(µt). A fraction f of the population produces the signal at a constant rate π. If f = 1, the population is homogeneous; otherwise it is heterogeneous. At time t = 0, the extracellular signal concentration is C_e^s(t = 0) = C_stim, and there is no signal inside the cells, i.e., C_i^s(t = 0) = 0. Other peptides are present at concentration C_e^o and compete for peptide import. 
Each cell imports peptides at a rate v, which is a function of the total peptide concentration C_e = C_e^s + C_e^o. Peptide uptake is assumed to follow Michaelis–Menten kinetics, with a maximum particle flux per cell of v_max and a half-saturation constant K_M. As the signal is imported at rate v, the intracellular signal concentration C_i^s rises at a rate v/V_i, where V_i is the cell volume. Peptides are degraded extra- and intracellularly at rates λ_e and λ_i^s, respectively, and intracellular signals are also diluted by cell growth. The following set of ordinary differential equations describes how the intracellular signal concentration C_i^s, the extracellular signal concentration C_e^s, and the total extracellular peptide concentration C_e change as a function of time: $$\frac{\mathrm{d}C_{\mathrm{i}}^{\mathrm{s}}}{\mathrm{d}t} = v_{\mathrm{max}}\frac{C_{\mathrm{e}}^{\mathrm{s}}}{K_{\mathrm{M}} + C_{\mathrm{e}}}\frac{1}{V_{\mathrm{i}}} - (\lambda_{\mathrm{i}}^{\mathrm{s}} + \mu)C_{\mathrm{i}}^{\mathrm{s}},$$ (1) $$\frac{\mathrm{d}C_{\mathrm{e}}^{\mathrm{s}}}{\mathrm{d}t} = \pi f\frac{N_{\mathrm{c}}}{V_{\mathrm{e}}} - N_{\mathrm{c}}v_{\mathrm{max}}\frac{C_{\mathrm{e}}^{\mathrm{s}}}{K_{\mathrm{M}} + C_{\mathrm{e}}}\frac{1}{V_{\mathrm{e}}} - \lambda_{\mathrm{e}}C_{\mathrm{e}}^{\mathrm{s}},$$ (2) $$\frac{\mathrm{d}C_{\mathrm{e}}}{\mathrm{d}t} = -N_{\mathrm{c}}v_{\mathrm{max}}\frac{C_{\mathrm{e}}}{K_{\mathrm{M}} + C_{\mathrm{e}}}\frac{1}{V_{\mathrm{e}}} - \lambda_{\mathrm{e}}C_{\mathrm{e}}.$$ (3) We assume that receptor activation and signal transduction occur rapidly. In this case, the output O becomes a function of the intracellular signal concentration and is modeled by a Hill function for simplicity: $$O(t) = O_{\mathrm{max}}\frac{C_{\mathrm{i}}(t)^n}{EC_{50}^n + C_{\mathrm{i}}(t)^n}.$$ (4) Here O_max is the maximal output, EC_50 is the intracellular peptide concentration that yields a half-maximal response, and n is the Hill coefficient. The model Eqs. (1)–(4) were solved numerically with Matlab R2017b (MathWorks Inc.) using the ode15s solver. Media Strains were grown in LB medium (Lennox version) 59 or S7_50 minimal medium 60, 61 at 37 °C with aeration. Difco sporulation medium (DSM), growth medium (GM), and resuspension medium (RM) were prepared according to standard protocols 62. LB agar plates were used to select transformants. When required, the appropriate antibiotics and amino acids were added as follows: for E. coli, ampicillin (100 µg ml^-1); for B. subtilis, spectinomycin (100 µg ml^-1), erythromycin (2 µg ml^-1), tetracycline (10 µg ml^-1), and tryptophan (50 µg ml^-1). Plasmid construction All plasmids and primers are listed in Supplementary Tables 1 and 2, respectively. Escherichia coli DH5α (Invitrogen, Carlsbad, CA, USA) was used for cloning. All plasmids were verified by sequencing. FRET reporter plasmids: FRET reporters were constructed with the help of pDR111 by restriction-enzyme ligation cloning (RELC). rapA and spo0F were amplified from B. subtilis 168 genomic DNA and fused via a GSGGV linker to monomeric yfp-venus and ecfp(Bs), respectively. We first constructed expression plasmids for the individual fusion proteins in order to test for their functionality in B. 
subtilis . ecfp (Bs) was amplified from pDR200, fused to the N-terminus of rapA by a joining PCR, and cloned into pDR111 by RELC using the enzymes NheI and SphI, resulting in EIB77. yfp-venus was amplified from AEC253, fused to the C-terminus of spo0F , and cloned into pDR111 by RELC using the SalI and NheI enzymes, resulting in EIB283. To obtain the FRET reporter plasmid, spo0F-yfp was excised from EIB283 with SalI and NheI, and ligated into EIB77 to generate EIB284. The FRET reporter contains an operon comprising the spo0F and rapA fusion protein genes under the transcriptional control of the IPTG-inducible P hyperspank promoter. For the FRET-negative control plasmid (EIB152), used to express free cytoplasmic YFP and CFP under the control of IPTG, both genes were cloned into pDR111 by RELC using the enzyme pairs SalI and NheI and NheI and SphI, respectively. For the FRET-positive control plasmid (EIB151), we fused yfp-venus to cfp(Bs) with a GSGGV linker by a joining PCR and cloned the product into pDR111 by RELC using SphI and NheI. Plasmids for gene deletions: The plasmids for clean deletions were constructed by amplifying 500-bp fragments located upstream and downstream of the region of interest from genomic DNA. The DNA fragments were fused by PCR and cloned into the pMAD vector by RELC using the enzymes SalI and BglII. Plasmids for xylose induction of PepF: The pepF coding sequence, including its native transcription terminator, was amplified from genomic DNA. As a ribosome-binding site, the consensus Shine-Dalgarno sequence was added upstream of the start codon. The insert was cloned into pAX01 63 by RELC with the enzymes SpeI and BamHI. Strain construction All B. subtilis strains were derived from 1A700 (W168) and are listed in Supplementary Table 3 . FRET reporter strain: To construct the FRET reporter strain the rapA-phrA operon was deleted from W168 using plasmid EIB185, following a protocol similar to that of Arnaud et al. 64 . The resulting strain was subsequently employed to delete spo0F after transformation with plasmid EIB281 to yield strain BIB415 (Δ rapAphrA Δ spo0F) using the same protocol. At each step, we verified that the gene had been deleted from its chromosomal locus, that the pMAD plasmid had been lost and, finally, that the gene was entirely absent from the chromosome by appropriate PCRs (notably, we found that some transformants had acquired a gene copy in another locus; these were discarded). FRET reporter strains (BIB625 and BIB914) were obtained by transforming BIB415 with the indicated FRET reporter plasmid EIB282 according to standard protocols 62 . Correct integration of the reporter constructs at the amyE locus was verified by an amyE -negative phenotype, while appropriate PCRs were performed to verify the correct size of the integrated construct in the amyE locus and confirm that no additional single crossover had occurred (absence of the ampR -resistance cassette). We note that integrations carried out with pDR111 (and probably many other common amy integration vectors) can result in a ~250-bp deletion in the adjacent ldh locus. We therefore verified by PCR that our transformants retain an intact ldh locus 65 . In addition, we confirmed by PCRs the deletion of rapAphrA and of spo0F from the final strain. The FRET control strains BIB134 and BIB138 were obtained by transforming the wild-type strain with plasmids EIB151 and EIB152, respectively, and verified by PCRs. 
Mutant reporter strains: Deletions of the indicated genes (kinA, kinB, spo0B, oppA, and pepF) were made in the FRET reporter strain (BIB625) using pMAD-derived plasmids (Supplementary Table 2) and verified as described above. A xylose-inducible pepF construct was introduced into the lacA locus (BIB1612) by transforming BIB625 with EIB544. Correct integration in the lacA locus was verified by a PCR of the lacA locus and a PCR for the ampR resistance cassette to confirm that no additional single crossover had occurred. Functionality of fluorescent fusion proteins Protein stability was assessed by western blotting 66. Proteins were harvested from B. subtilis cells grown in 20 ml of LB medium and induced with IPTG. Fusion proteins were detected with an HRP-conjugated anti-GFP antibody (Invitrogen, catalogue no. A10260, lot number 898225). The function of the fusion proteins was assessed by plating on DSM agar plates in combination with measurements of colony opacity using ImageJ 67. Sporulation of FRET reporter cells was induced by the resuspension method 62, applying a shift from GM to RM medium with 10 µM IPTG and the indicated concentrations of PhrA. After 24 h of incubation at 37 °C in shake-flask culture, the sporulation frequency was determined by microscopy. Quantitative FRET assays Induction of FRET reporters: Reporter cells were inoculated from a single colony into 5 ml of LB medium with antibiotics and incubated on a rotary shaker (Infors HT Multitron) at 180 rpm and 37 °C for 7 h. Cells were resuspended at OD_600 = 0.003 in 5 ml of S7_50 medium and incubated overnight for 16 h. Expression of the fusion proteins was induced by resuspending the reporter cells at OD_600 = 0.04 in 10 ml of fresh S7_50 supplemented with 100 µM IPTG in a 100-ml flask. When applicable, protein expression from the xylose-inducible promoter was induced by adding xylose at a final concentration of 1% (w/v). Cells were grown to a final OD_600 of ~1.6. Stimulation of reporter cells with synthetic peptides: To stimulate the reporter cells, we added 5 µl of an appropriate concentrate of synthetic peptides (purity >95%; Peptides and Elephant, Henningsdorf, Germany) to 500 µl of the induced culture in a conical 2-ml reaction tube. After incubation for a time t_inc, cells were centrifuged for 1 min at 17,000×g. Thus, after addition of the stimulus, cells spent t_s = t_inc + 1 min in the medium. Unless otherwise specified, the incubation was performed at room temperature for 5 min without shaking, and thus t_s = 6 min. The pellet was washed by resuspending the cells in 500 µl of phosphate-buffered saline (PBS). In the case of subsequent stimulations, the washed reporter cells were resuspended in the culture medium, and a second round of stimulation was started as described above. Finally, the washed pellet was resuspended in 5 µl of PBS and spread on an agarose pad (1% ultrapure agarose (Invitrogen) in PBS). Bioassay and supernatant analysis: Supernatants were collected after centrifugation of the stimulated cells. Aliquots (V_S = 450 µl) of the cell-free supernatants were mixed with 50 µl of a 10× suspension of fresh unstimulated reporter cells for t_B = 5 min + 1 min (incubation plus centrifugation time) at room temperature and then processed as described above. To detect extracellular PhrA in S7_50 cultures, wild-type cells (BIB224) were grown for 5 h to an OD_600 of ~1.6 and pelleted by centrifugation. 
The supernatants were then filtered through a 0.2-µm PES membrane and analyzed as described above with the following modification: to increase the sensitivity of the bioassay, the volume of analyzed supernatant and the incubation time were increased to V_S = 1950 µl and t_B = 20 min + 1 min, respectively. Deactivation dynamics of stimulated cells (FRET recovery response): The deactivation of the pathway after stimulation was studied using two assays. First, pre-stimulated cells were washed, resuspended in 500 µl of culture medium and incubated at 37 °C on a thermoshaker. Second, to monitor the recovery of FRET under growth conditions, PhrA was added directly to a shake-flask culture (9 ml) to a final concentration of 10 nM, and the response dynamics was followed over three hours by withdrawing 500-µl samples at the indicated times. In each case, an unstimulated population served as a control. Samples were processed for FRET measurements as described above. Cell-density-dependent signal processing: The FRET reporter was induced as described above, and cells were grown to an OD_600 of ~1.6. Before stimulation, the density of the reporter cells was adjusted by diluting or concentrating the cells in an appropriate volume of cell-free supernatant. Stimulation then proceeded as described above. FRET acceptor-photobleaching experiments Microscopy: Experiments were performed on an Olympus IX81 inverted fluorescence microscope equipped with a 60× UPlanFLN 0.9 NA objective, a photomultiplier tube (PMT; Hamamatsu Photon Counting Head H7421-40, Hamamatsu City, Japan) and a 100-mW 515-nm laser (Cobolt, Sweden) that was coupled into the system via an AHF F73-014 z514 DCRB notch filter (Supplementary Fig. 2a). Data acquisition from the PMTs was performed as described by Sourjik et al. 35. Fluorescence was excited with an MT20 illumination system. To attenuate CFP excitation and minimize bleaching, the internal neutral density (ND) filter of the MT20 was set to 7.72% and the light was further attenuated by an external (ND = 2) filter. A dense multilayer of cells was illuminated with excitation light centered around the CFP excitation maximum (EX: 438/24 nm, dual BS 440/520 nm, EM: 475/23 nm) for the entire experiment to achieve continuous bleaching of CFP and avoid recovery effects. Prior to acceptor photobleaching, CFP emission signals were measured for 60 s. The sample was then defocused by −16 µm to increase the bleaching area of the laser, and the acceptor was bleached with the laser at maximum power for 20 s. After refocusing the sample, CFP emission was recorded for another 60 s. The efficiency of acceptor photobleaching was monitored by recording YFP fluorescence (EX: 504/12 nm, dual BS 440/520 nm, EM: 542/27 nm) for 6 s before and after each experiment. Furthermore, after bleaching, a YFP image (EX: 504/12 nm, dual BS 440/520 nm, EM: 542/27 nm, 100% illumination intensity, 3 s exposure) was taken with an EMCCD camera (Hamamatsu C9100-2) to check for homogeneous bleaching of the sample area. Quantification of FRET The FRET efficiency was determined using the formula: $$\mathrm{FRET} = \frac{\mathrm{CFP}_{\mathrm{post}} - \mathrm{CFP}_{\mathrm{pre}}}{\mathrm{CFP}_{\mathrm{post}}}\cdot 100\%,$$ (5) where CFP_pre and CFP_post denote the emission levels before and after acceptor photobleaching, respectively. Since CFP is continuously excited, CFP will also bleach during periods of acceptor photobleaching. 
We thus correct for donor photobleaching by performing linear fits to the CFP trajectories prior to and after bleaching using the robustfit function in Matlab 2017b. CFP_pre is then evaluated at the end of the bleaching period by extrapolating the linear fit accordingly (see Fig. 2b). Each measurement records the average fluorescence of hundreds of cells. Individual data points record the mean FRET from two technical replicates evaluated on the same gel pad (Supplementary Fig. 2c, d). For each experiment, we analyzed at least three biological replicates (n_e ≥ 3). Bar plots report the corresponding means, and error bars depict the respective standard deviations. E-FRET microscopy E-FRET imaging experiments were performed on a Nikon ECLIPSE Ti2 inverted fluorescence microscope equipped with a 100-mW 532-nm laser and a 60× Plan Apo λ 1.4 NA objective. Fluorescence was excited with an X-Cite Exact illuminator, and fluorescence emission was detected with an Andor DU-897 EMCCD camera. The exposure time was 250 ms, and the EM gain was set to 150 for all channels. The following filters (EX/EM) and beam splitters (BS) were used: EX 504/12 nm, BS 520 nm and EM 554/23 nm for YFP; EX 436/10 nm, BS 455 nm and EM 480/40 nm for CFP; and EX 436/10 nm, BS 455 nm and EM 554/23 nm for FRET. If required, the acceptor was bleached with the 532-nm laser (70% power) for 2 s. Quantification of E-FRET The apparent FRET efficiency E_app was calculated as described by Zal and Gascoigne 48: $$E_{\mathrm{app}} = \frac{I_{\mathrm{DA}} - aI_{\mathrm{AA}} - dI_{\mathrm{DD}}}{I_{\mathrm{DA}} - aI_{\mathrm{AA}} + (G - d)I_{\mathrm{DD}}}.$$ (6) Here I_DA, I_DD and I_AA refer to the fluorescence intensities measured in the FRET, donor (CFP) and acceptor (YFP) channels, respectively. Image registration of the different channels was performed using the ImageJ plugins StackReg and MultiStackReg 68. Image segmentation was performed using Ilastik 1.3.2 69. Fluorescence quantification was performed with ImageJ 67. The measured fluorescence object intensities were corrected by subtracting the background signal and the cellular autofluorescence; the latter was determined by averaging the fluorescence intensities of nonfluorescent BIB1910 cells. The coefficients a = I_DA(acc)/I_AA(acc) and d = I_DA(don)/I_DD(don) correct for acceptor and donor bleed-through, respectively; they were determined from the fluorescence intensities of donor-only (CFP-RapA) and acceptor-only (Spo0F-YFP) samples from two biological replicates. In each case, data were acquired from 15 different fields of view. The further correction parameters b = I_DD(acc)/I_AA(acc) and c = I_AA(don)/I_DD(don) were negligible in our setup. G refers to the G-factor given by: $$G = \frac{(I_{\mathrm{DA}} - a\,I_{\mathrm{AA}} - d\,I_{\mathrm{DD}}) - (I_{\mathrm{DA}}^{\mathrm{post}} - a\,I_{\mathrm{AA}}^{\mathrm{post}} - d\,I_{\mathrm{DD}}^{\mathrm{post}})}{I_{\mathrm{DD}}^{\mathrm{post}} - I_{\mathrm{DD}}},$$ (7) where I_xx^post refers to the fluorescence intensities after bleaching. The G-factor calibration was performed on the unstimulated FRET sample (RapA-CFP Spo0F-YFP) by acquiring images in each of the three fluorescence channels before and after acceptor photobleaching. 
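To illustrate the acceptor-photobleaching quantification of Eq. (5) together with the donor-photobleaching correction described above, here is a minimal MATLAB sketch. It is our illustration, not the published analysis script; the CFP traces and their numbers are synthetic placeholders.

```matlab
% Donor-bleach-corrected acceptor-photobleaching FRET, Eq. (5).
% Hypothetical CFP emission traces (a.u.): slow donor bleaching, with
% donor dequenching (higher CFP) after the acceptor is bleached.
t_pre  = (0:59)';   cfp_pre  = 1000 - 0.5*t_pre  + randn(60,1);
t_post = (80:139)'; cfp_post = 1090 - 0.5*t_post + randn(60,1);

b_pre  = robustfit(t_pre,  cfp_pre);   % robust linear fit before bleaching
b_post = robustfit(t_post, cfp_post);  % robust linear fit after bleaching
t_b = t_post(1);                       % end of the acceptor-bleaching period
CFP_pre  = b_pre(1)  + b_pre(2)*t_b;   % extrapolate pre-bleach trace to t_b
CFP_post = b_post(1) + b_post(2)*t_b;  % evaluate post-bleach trace at t_b
FRET = (CFP_post - CFP_pre)/CFP_post*100   % FRET efficiency in percent
```

The numbers are arbitrary; the point is that the pre-bleach fit is extrapolated to the end of the bleaching period before Eq. (5) is applied, which removes the bias from continuous donor bleaching.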
Statistical analysis Matlab 2017b was used to determine the statistical significance of observed differences. Where applicable, an unpaired t test or a one-way ANOVA was used, with effect sizes given by Hedges’ g and η^2, respectively. The number of asterisks indicates the P value: n.s. (nonsignificant) P > 0.05; *P < 0.05; **P < 0.01; ***P < 0.001. All P values and relevant statistical parameters are provided in Supplementary Data 1. Model of the FRET response Model: To describe how a population of cells processes an extracellular PhrA stimulus, we consider a population of identical (nonproducing) cells that are homogeneously distributed in a volume V_e. The extra- and intracellular dynamics of the PhrA signal are described by the pump–probe model. The intracellular signal concentration C_i(t) = C_i^s(t) follows from solving Eqs. (1)–(3) under the following conditions: initially, there is no signal inside the cells, the extracellular concentration is given by the stimulus C_e^s(t = 0) = C_stim, and competing peptides are present at concentration C_e^o, if applicable. If not otherwise indicated, cell growth and peptide degradation can be neglected on the timescale of a typical stimulation experiment, i.e., μ ≈ 0, λ_e ≈ 0, λ_i ≈ 0 (see Supplementary Fig. 6; Fig. 4c). We assume that upon PhrA binding to the receptor, signal transduction to Spo0F occurs rapidly, i.e., FRET changes instantaneously as the intracellular PhrA levels vary with time, FRET(t) = f(C_i(t)). f was modeled as: $$\mathrm{FRET}(t) = f(t) = \mathrm{FRET}_0 - \Delta\mathrm{FRET}\frac{C_{\mathrm{i}}^n(t)}{\mathrm{EC}_{50}^n + C_{\mathrm{i}}^n(t)}.$$ (8) Here, FRET_0 is the FRET efficiency of unstimulated cells, ΔFRET the maximal response amplitude, n the Hill coefficient, and EC_50 the intracellular peptide concentration that yields a half-maximal response. Parameter estimation: This model has nine parameters in all, three of which were determined experimentally. First, the volume V_i of a rod-shaped bacterium was approximated by a cylinder with two semispherical caps, i.e., V_i = π(L − D)(D/2)^2 + (4/3)π(D/2)^3. Here, L is the cell length and D is the cell diameter. Both were determined experimentally by measuring and averaging the lengths and widths of 150 reporter cells, which were imaged by bright-field microscopy using a 100×/1.4 NA objective. The extracellular volume was fixed at V_e = 500 µl, and the number of cells N_c was determined by cell counting using a C-Chip (Merck, Darmstadt). The growth rate µ was determined by separately fitting the OD curve (inset of Fig. 4c). The calculated N_c value equivalent to an OD_600 of 1.6 was 9.5 × 10^7. All other parameters were estimated by parameter fitting. Fitting was performed using Matlab R2017b (MathWorks Inc.). The model equations were solved numerically using the ode15s solver. Parameters (with the exception of the intracellular peptide-degradation rate λ_i) were globally optimized by minimizing the sum of squared residuals (SSR) of all data sets with the fsolve function and the Levenberg–Marquardt algorithm. λ_i was subsequently determined by fitting the FRET recovery experiment (Fig. 4c). 
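As a concrete illustration of how Eqs. (1)–(3) and (8) are put together for a stimulation experiment, the following minimal MATLAB sketch integrates the model with ode15s, as in the published analysis, and converts C_i(t) into a FRET trace. It is our paraphrase of the setup; the parameter values are placeholders in the spirit of the quantities quoted in the text, not the published fits.

```matlab
% Minimal sketch of the pump-probe model for a stimulation experiment:
% Eqs. (1)-(3) with f = 0 (nonproducing cells), no competing peptide and
% mu = lambda_e = lambda_i = 0, plus the FRET readout of Eq. (8).
vmax  = 1.9e5;        % maximal import rate (molecules min^-1 cell^-1)
KM    = 140e-9;       % Michaelis constant of uptake (mol l^-1)
Vi    = 1.7e-15;      % cell volume (l); placeholder from the cylinder-with-
                      % caps formula with, e.g., L ~ 3 um and D ~ 0.9 um
Ve    = 500e-6;       % extracellular volume (l)
Nc    = 9.5e7;        % number of cells at OD_600 = 1.6
NA    = 6.022e23;     % Avogadro's number (molecules mol^-1)
EC50  = 38e-6;        % intracellular EC_50 (mol l^-1)
nHill = 1.4;          % Hill coefficient
FRET0 = 9.7; dFRET = 5.0;  % placeholder baseline and amplitude (%)

% State y = [Ci; Ce] in mol l^-1; uptake flux in mol min^-1 per cell.
uptake = @(Ce) vmax/NA * Ce./(KM + Ce);
rhs = @(t,y) [ uptake(y(2))/Vi ;      % Eq. (1): dCi/dt
              -Nc*uptake(y(2))/Ve ];  % Eqs. (2)/(3): dCe/dt

[t,y] = ode15s(rhs, [0 30], [0; 10e-9]);   % 10 nM stimulus, 30 min
FRET = FRET0 - dFRET*y(:,1).^nHill./(EC50^nHill + y(:,1).^nHill); % Eq. (8)
plot(t, FRET); xlabel('t_s (min)'); ylabel('FRET (%)');
```

With these placeholder values, a 10 nM stimulus in 500 µl corresponds to roughly 3 × 10^4 molecules per cell, consistent with the dose scale discussed in the Results.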
The 95% confidence intervals for each parameter θ were determined from the following nonlinear constraint 70, 71: $$\frac{\mathrm{SSR}(\theta) - \mathrm{SSR}(\hat\theta)}{\mathrm{SSR}(\hat\theta)} \le \frac{p}{n - p}F_{p,n-p}^{\alpha},$$ (9) where p is the number of parameters, n the number of data points, and F_{p,n−p}^α the value of the F-distribution at the confidence level α. Each parameter was minimized or maximized using the fmincon function with the above inequality as a nonlinear constraint, using interior-point optimization. The confidence intervals of the fitted curves were determined by bootstrapping 10^4 data sets that were randomly sampled from the data points. For each data set, we determined the best fit as described above and then took the 95% confidence intervals from the 0.025 and 0.975 quantiles of all fits for each experimental condition. Data availability All relevant data supporting the findings of this study are available in this article and its Supplementary Information files. The source data underlying Figs. 3c, 4b, c, 5, 6b and 7, and Supplementary Figs. 1, 2b–d, 3–7 and 8c are provided as a Source Data file.
Bacteria have a sense of their own number. They release and sense signaling molecules that accumulate with increasing cell numbers, which allows them to change their behavior when a certain group size is reached. A team of researchers from the Max Planck Institute for Terrestrial Microbiology in Marburg and Heidelberg University has now been able to show that bacteria might be capable of even more: they could perceive the proportions of different groups of bacteria in their environment. In nature, bacteria often live in complex communities, surrounded by other cells that can differ from each other, even within a species. The principal investigator, Ilka Bischofs, explains: "Imagine yourself in a ballroom full of people. Their sheer number is only of limited relevance to you; it is the gender ratio that tells you how hard it will be to find a dancing partner. Bacteria also collect information about their environment. Information about group ratios could help them make decisions and adapt in the best possible way." The research team studied information retrieval in the bacterium Bacillus subtilis. This species possesses a large number of identically constructed chemical signaling systems that were previously thought to measure cell numbers. Instead, the bacteria may utilize these systems to determine the proportions of different groups within a mixed population. The respective signaling molecules are often produced by a subset of cells but are taken up by all bacteria; cells therefore compete with each other for the signaling molecules. The larger the proportion of signal producers in the population, the more signaling molecules accumulate in the cells, where they are detected. However, as with computers, the specific function of a system depends on its settings. The research team was able to show experimentally that at least the bacterial signaling system investigated here is indeed correctly configured to facilitate ratio sensing. Using a high-resolution fluorescence microscopy method (Förster resonance energy transfer, FRET), they analysed the signal transduction in detail. The ability to sense ratios could confer decisive advantages on the bacterium. As research in recent years has shown, Bacillus subtilis often splits its population into subgroups of cells with different properties and functions. Like a stockbroker, the bacterium diversifies its portfolio of phenotypes. Knowing the composition of the portfolio makes it possible to respond adequately to environmental changes, a strategy that bacteria may already have discovered during evolution.
10.1038/s41467-020-14840-w
Nano
Built from the bottom up, nanoribbons pave the way to 'on-off' states for graphene
Chuanxu Ma et al, Controllable conversion of quasi-freestanding polymer chains to graphene nanoribbons, Nature Communications (2017). DOI: 10.1038/ncomms14815 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms14815
https://phys.org/news/2017-03-built-bottom-nanoribbons-pave-on-off.html
Abstract In the bottom-up synthesis of graphene nanoribbons (GNRs) from self-assembled linear polymer intermediates, surface-assisted cyclodehydrogenations usually take place on catalytic metal surfaces. Here we demonstrate the formation of GNRs from quasi-freestanding polymers assisted by hole injections from a scanning tunnelling microscope (STM) tip. While catalytic cyclodehydrogenations typically occur in a domino-like conversion process during thermal annealing, the hole-injection-assisted reactions happen at selected molecular sites controlled by the STM tip. The charge injections lower the cyclodehydrogenation barrier in the catalyst-free formation of graphitic lattices, and orbital symmetry conservation rules favour hole rather than electron injections for GNR formation. The created polymer–GNR intraribbon heterostructures have a type-I energy level alignment and strongly localized interfacial states. This finding points to a new route towards the controllable synthesis of freestanding graphitic layers, facilitating the design of on-surface reactions for GNR-based structures. Introduction In the pursuit of atomically precise and bottom-up fabrication of graphene-based electronics 1, 2, 3, graphene nanoribbons (GNRs) with a variety of widths 4, 5, edge structures 6, 7 and heterojunctions 8, 9 have been synthesized from self-assembled molecular precursors on different catalytic metal substrates, such as Au (refs 10, 11, 12), Ag (ref. 13) and Cu (refs 14, 15). Surface-assisted cyclodehydrogenations 16, 17, 18, a key step in GNR formation, appear to take place only on the catalytic metal substrates 19, 20. Efforts to grow GNRs from molecular precursors directly on an insulating TiO 2 substrate achieved polymerization but not cyclodehydrogenation 20. The metallic substrates are found to be essential not only for the GNR growth process, but also for the electronic behaviour of the synthesized GNRs. The orbital hybridization between substrate and edge atoms can largely affect the predicted edge states and magnetism of GNRs 21, 22, and dielectric screening by the substrate can greatly modify the quasiparticle bandgaps of GNRs. For example, armchair GNRs with a width of seven carbon atoms (7-aGNRs) on Au(111) are reported to have an energy gap ranging from 1.8 to 2.8 eV (refs 4, 10, 13, 23, 24, 25, 26). These values are much smaller than expected from electronic structure calculations within many-body perturbation theory in the GW approximation, which predict a gap of ∼3.7 eV (ref. 27). Controlling the substrate interaction is thus considered a prerequisite for studying the on-surface synthesis process and accessing the intrinsic electronic structure of GNRs. Here we focus on the controllable conversion of quasi-freestanding polymers to form atomically precise armchair graphene nanoribbons, 7-aGNRs. To decouple their electronic structure from the metal substrate, we grow the polymers atop first-layer (1st-layer) GNRs that are in direct contact with the metal substrate. The polymers are isolated from the metal substrate by the 1st-layer GNRs, leading to their quasi-freestanding nature. Using scanning tunnelling microscopy (STM), we find that electronic decoupling of the polymer can greatly slow down the cyclodehydrogenation, and that an STM tip can be used to inject charges at selected molecular sites to trigger the reaction and thus create intraribbon heterojunctions. 
Based on nudged elastic band (NEB) simulations, we reveal a hole-assisted cyclodehydrogenation reaction path that points to an avenue towards the controllable on-surface synthesis of freestanding GNRs and precise intraribbon heterojunctions. Results Synthesis of quasi-freestanding polymer chains The polyanthrylene chains were synthesized on Au(111) using 10,10′-dibromo-9,9′-bianthryl (DBBA) molecules as precursors with a bottom-up method described by Cai et al . 11 (Methods). Figure 1a schematically illustrates the stepwise annealing process for growing the 7-aGNRs and quasi-freestanding polymer chains atop the GNRs. With a molecule coverage θ >1, the sample is subsequently annealed to enable colligation/polymerization (at 470 K) ( Supplementary Fig. 1 ) and cyclodehydrogenation/graphitization (at 670 K), resulting in the polymer chains atop the GNRs ( Fig. 1b ). The STM images in Fig. 1c,d show the 1st-layer 7-aGNRs adsorbed on Au(111) and the second-layer (2nd-layer) polymer chains, respectively. The 2nd-layer polymer chains mainly grow along the GNRs, showing a period of about 8.4 Å, consistent with the simulated STM image shown in Fig. 1e and with previous reports for polymers directly adsorbed on Au(111) surface 11 . The sub-hexagon-ring features in the STM image ( Fig. 1f ) are reproduced by the simulated charge density distribution of the highest occupied crystal orbital of the polymer (HOCO p , Fig. 1g ). The 2nd-layer polymer chains are effectively decoupled from the Au substrate due to the GNRs underneath, which enables imaging of the quasi-atomic polymer structures. Figure 1: Bottom-up synthesis of polymer chains on armchair graphene nanoribbons with a width of seven carbon (7-aGNRs). ( a ) Sketch for synthesis of the second-layer (2nd-layer) polyanthrylene chains on 7-aGNRs from 10,10′-dibromo-9,9′-bianthryl (DBBA) molecules with stepwise annealing at 470 and 670 K, respectively. ( b ) Large-area scanning tunnelling microscopy (STM) image showing the polymer chains on 7-aGNRs (sample voltage V s =−2 V, tunnelling current I t =60 pA). Scale bar, 20 nm. ( c ) High-resolution STM image of the first-layer (1st-layer) 7-aGNR ( V s =−0.6 V, I t =100 pA) superposed with an atomic structure. Scale bar, 1 nm. ( d ) Small-scale STM image of the 2nd-layer polymer chains ( V s =+1 V, I t =60 pA). Scale bar, 2 nm. ( e ) The simulated STM image and an atomic structure of the polymer superposed on the magnified image of the top polymer chain in d . Scale bar, 1 nm. ( f ) High-resolution STM image showing the detailed structure of the polymer ( V s =−2 V, I t =10 pA). Scale bar, 2 nm. ( g ) Charge density distribution of the highest occupied crystal orbital of the polymer (HOCO p ). Dashed boxes mark the polymer unit in the polymer. Full size image The freestanding nature of the 2nd-layer polymer chains becomes clearer by comparing their geometric and electronic structures with the 1st-layer. The STM image in Fig. 2a shows a 2nd-layer polymer chain and a 2nd-layer GNR on two adjacent Au(111) terraces, with height profiles shown in Supplementary Fig. 2 . The apparent height of the 2nd-layer polymer ( ∼ 4.3 Å) is greater than that of the 1st-layer polymer directly adsorbed on Au ( ∼ 3.9 Å, Supplementary Fig. 3 ), and so is the 2nd-layer GNR ( ∼ 2.9 Å) as compared to the 1st-layer GNR ( ∼ 2.1 Å). Figure 2b shows the tunnelling conductance d I /d V spectra acquired at different locations marked in Fig. 2a . 
The 2nd-layer polymer (location 1) exhibits a large energy gap of about 4.3 eV, with the highest occupied and lowest unoccupied crystal orbitals of the polymer (HOCO p and LUCO p ) appearing in the density of states (DOS) at sample voltages V s =−2.1 and +2.2 V, respectively. This gap is significantly greater than that of the 1st-layer polymer, which has a bandgap of about 3.4 eV (Supplementary Fig. 3), indicating reduced dielectric screening by the substrate 10 , 24 , 28 . Note that the image in Fig. 1f, acquired at V s =−2 V, is very close to the HOCO p and thus reflects the intrinsic electronic structure of the polymer chain. For the 1st-layer GNR (location 4), the HOCO g and LUCO g are located at V s =−0.9 and +1.4 V, respectively, with a bandgap of about 2.3 eV, consistent with previous reports 24 , 25 , 28 . The 2nd-layer GNR (location 2) shows a larger gap of about 2.6 eV. The gap in the 2nd-layer GNR is generally about 0.1–0.4 eV greater than that in the 1st-layer GNR; this gap difference between the two layers is comparable with the previously reported difference ( ∼ 0.5 eV) between a GNR on the insulator NaCl (with a bandgap of ∼ 2.8 eV) 29 and on Au (with a bandgap of ∼ 2.3 eV) 24 . Moreover, the d I /d V spectra from both the 2nd-layer polymer and the 2nd-layer GNR show cleaner gaps with lower densities of in-gap states than the 1st-layer GNR. Thus the 1st-layer GNR, similar to graphene 30 , can largely isolate the 2nd layers from the Au(111) substrate and render the 2nd-layer polymers quasi-freestanding. Figure 2: Domino-like thermally induced cyclodehydrogenation. ( a ) STM image showing a 2nd-layer polymer and 7-aGNR, marked with white arrows ( V s =−2 V, I t =100 pA). Scale bar, 5 nm. ( b ) Representative differential conductance, d I /d V , curves acquired at the cross-marked sites 1–4 in a ( V s =−2 V, I t =100 pA). The highest occupied and lowest unoccupied crystal orbitals of the polymer (HOCO p and LUCO p ) are approximately −2.1 and +2.2 eV, respectively. The HOCO g is approximately −1.0 eV and the LUCO g approximately +1.6 eV for the 2nd-layer GNR, and approximately −0.9 and +1.4 eV, respectively, for the 1st-layer GNR. ( c ) STM image showing an intraribbon heterojunction of a polymer chain with a GNR tail ( V s =−2 V, I t =100 pA), as illustrated by the schematic. Scale bar, 5 nm. ( d ) Sketch of the domino-like cyclodehydrogenation during thermal annealing. Hydrogen atoms in each step are highlighted. Full size image Thermally induced domino-like polymer to GNR conversion The existence of the 1st-layer GNRs significantly suppresses the catalytic effect of the Au substrate and slows down the cyclodehydrogenation reactions in the 2nd-layer polymers, which facilitates the control and evaluation of the cyclodehydrogenation process. After annealing at 670 K, polymers exist only on the 2nd layer atop the GNRs, while the 1st-layer polymers have all been converted into GNRs. Full conversion of 2nd-layer polymers into GNRs can occur when they have direct local contact with the Au substrate, such as at location 3 in Fig. 2a . The d I /d V curve measured at location 3 (Fig. 2b) is similar to that of the 1st-layer GNRs (location 4). Without direct Au contact, only partially converted 2nd-layer polymers with GNR tails are observed (Fig. 2c; more examples in Supplementary Fig. 4), which may be attributed to a charge transfer effect promoted by the work function mismatch between the polymer, the GNR and the Au substrate (as explained in Supplementary Fig. 3). 
The GNR tail has the characteristic height ( ∼ 2.9 Å) and tunnelling spectra of the 2nd-layer GNR ( Supplementary Fig. 5 ). Moreover, the GNR tail shows enhanced DOS at the edges compared to the 1st-layer GNR, similarly to the GNR on an insulating substrate 29 . The GNR segment always appears at an end of the polymer chain, indicating that the cyclodehydrogenation prefers to start at the polymer end and then propagate along the polymer chain. Such a domino-like cyclodehydrogenation process can drastically lower the reaction energy barrier 31 during thermal annealing as illustrated in Fig. 2d . This observation is in contrast with the previously reported one-side-domino conversions for polymers directly adsorbed on Au(111) (ref. 32 ; Supplementary Fig. 6 ). STM tip-induced polymer to GNR conversion To facilitate the cyclodehydrogenation reaction in the freestanding polymer chains, an STM tip is used to inject charge carriers at selected molecular sites. Figure 3a shows a 2nd-layer polymer chain on 7-aGNRs, on which a series of d I /d V spectra are acquired along the polymer chain by moving the STM tip step-by-step (5 Å intervals) beyond the top end of the polymer. The d I /d V spectra are displayed in Fig. 3b , where curves 1–7 are on the polymer chain and 8–10 on the 1st-layer GNR. On the polymer chain, while curves 1–3 exhibit typical electronic features of the 2nd-layer polymer with LUCO p at V s =+2.1 V (black dashed line), a new peak at V s =+1.7 V emerges in curves 4–7, corresponding to the LUCO g of the GNR in curves 8–10 (marked with red dashed line). Thus at locations 4–7 near the end of the polymer chain, the polymer has been converted into GNR during the d I /d V measurements. Indeed, a GNR tail becomes discernable after the d I /d V measurements as shown in Fig. 3c . The local conversion of the polymer creates a polymer/GNR junction, and the d I /d V mapping at V s =+1.7 V shows strong localized interfacial states at the junction ( Fig. 3d ). Figure 3: Formation of GNR segments in polymer chain induced by tunnelling electrons. ( a ) STM image of a 2nd-layer polymer chain ( V s =−2 V, I t =60 pA). ( b ) d I /d V curves sequentially acquired along the red-arrow line in a from equally separated site 1 to 10 ( V s =−2 V, I t =60 pA). Sites 1–7 are on the polymer chain. Sites 8–10 are on the 1st-layer GNR. The dashed black line marks LUCO p of the polymer in curves 1–3. The dashed red line marks the peak in curves 4–7, showing same position as LUCO g of the GNR in curves 8–10. ( c ) STM image showing a GNR segment formed at the top end of the polymer chain (white box) ( V s =+1.7 V, I t =60 pA). ( d ) d I /d V mapping at V s =+1.7 V ( I t =60 pA), within the same area as c . ( e ) d I /d V curves sequentially acquired along the red-arrow line in c from equally separated sites 1–8 ( V s =−2 V, I t =60 pA). Sites 1 and 8 are on the 7-aGNRs. Sites 2–7 are on the polymer chain. The dashed black line marks LUCO p of polymer in curves 2 and 3. The dashed red line marks the peak in curves 4–7, showing same position as LUCO g of GNRs in curves 1 and 8. ( f ) STM image of the black box marked region in c with a defect (white box) ( V s =+1.7 V, I t =60 pA). Insets in a , c , f : schematics of the polymer chain before and after manipulations. ( g ) Profile along the dashed line in f . ( h ) Atomic structure of a polymer chain embedded with a short GNR segment. ( i , j ) d I /d V mapping at V s =+1.7 and V s =−2 V respectively ( I t =60 pA), within the same area as f . 
( k , l ) Charge density distribution of the states in the intraribbon heterojunction at +0.5 eV ( k ) and −1 eV ( l ), respectively. All scale bars, 2 nm. Full size image The charge injection effect on the cyclodehydrogenation process is corroborated by another set of experiments in which d I /d V curves are sequentially acquired along the red arrow across the polymer chain in Fig. 3c . Here locations 1 and 8 are on GNRs and 2–7 on the polymer. As shown in Fig. 3e , while d I /d V curves 2 and 3 exhibit the typical electronic features of the polymer, a new peak (red dashed line) corresponding to the LUCO g of the GNR emerges in curves 4–7. The newly formed GNR segment appears like a defect in the polymer chain after the d I /d V measurements ( Fig. 3f ). The defect has a height of about 2.8 Å ( Fig. 3g ), very close to that of the 2nd-layer 7-aGNR (2.9 Å), and a width of about 1.65 nm, which is about twice the period of the polymer (8.4 Å). Thus, the STM tip treatment has converted one polyanthrylene unit into a GNR segment, which consists of three hexagon rows of the 7-aGNR, with a proposed structural model shown in Fig. 3h . The measured electronic states at +1.7 eV ( Fig. 3i ) are strongly localized at the interfaces between the GNR and the polymer, while the states at −2 eV ( Fig. 3j ) are suppressed at the junction as compared to those in the polymer. According to density functional theory calculations, the states above the Fermi level (for example, at +0.5 eV, Fig. 3k ) are mainly located in the GNR segment, while those below the Fermi level (for example, at −1 eV, Fig. 3l ) are in the polymer segment. Notably, the energy differences between the experiments and the calculations may arise from an underestimate of the bandgap in density functional theory calculations 9 , 33 ( Supplementary Fig. 7 ). Since the LUCO (HOCO) in the GNR segment is lower (higher) than that in the polymer segment, the polymer–GNR heterojunction is analogous to a type-I semiconductor junction with a band misalignment of ∼ 0.5–0.8 eV. The tip-induced cyclodehydrogenation was examined by comparing the effects of electron and hole injections from an STM tip. The tip-treatment process is illustrated in Fig. 4a . At a selected site, the STM feedback loop is turned off (tip treatment with the STM feedback loop on can also work, Supplementary Fig. 9 ) and a current pulse is then applied between the tip and the sample. With a negative sample bias in the range of V s =−2 to −4 V, hole injections are found to induce cyclodehydrogenation in the freestanding polymer chains. However, electron injections with a sample bias in the range of V s =+2.5 to +6 V merely damage the polymers without triggering cyclodehydrogenation (see Supplementary Note 1 on the yield of tip treatments with different operational parameters). The measured tunnelling current ( I t ) is shown in Fig. 4b as a function of time ( t ) for three different tip-treatment processes with a sample bias V s =−3.6 V. In all three curves, the current clearly drops from terrace 1 to terrace 3, to a smaller but non-zero value, indicating the occurrence of a cyclodehydrogenation event. An additional terrace, terrace 2, is seen in one of the curves (red), suggesting an additional state in the cyclodehydrogenation process. As the polymer is slightly taller than the GNR, the conversion of the polymer to GNR enlarges the tip–sample distance and thus leads to a current drop. 
After the pulse treatment, local conversion of the polymer to GNR can be seen in the STM image shown in the inset of Fig. 4b (more examples in Supplementary Figs 8 and 9 ). During the experiment, polymer segments with up to three bianthrylene units ( ∼ 2.5 nm) can be fully converted to GNRs by a single pulse. Figure 4: Mechanism of the hole-assisted cyclodehydrogenation induced by an STM tip. ( a ) Sketch of applying a hole pulse to a polymer chain. ( b ) Three typical tunnelling current–time (It–t) curves during pulses (Vs = −3.6 V, t = 1.5 s) with the feedback loop off (Vs = −2 V, It = 100 pA). The three terraces are marked as 1–3. Inset: STM image of a GNR segment (red arrow) formed in a polymer chain (Vs = −2 V, It = 100 pA). ( c ) Proposed cyclodehydrogenation reaction path, with 1 as the initial state, 2 the state after one-side cyclodehydrogenation and 3 the final state, while int1–int4 are intermediates. A visualization of the entire reaction path including transition states can be found in Supplementary Fig. 10 . ( d ) Energy diagrams of cyclodehydrogenation in vacuum for neutral (black), two-electron-assisted (green) and two-hole-assisted (red) bianthrylenes, respectively. ( e ) Simulated charge density distributions of the LUCOp and HOCOp in the polymer, where the blue and red colours indicate the different signs of the wavefunctions. The out-of-phase overlap (that is, opposite signs) in the LUCOp is marked by red arrows and the in-phase overlap (that is, same signs) in the HOCOp is marked by green arrows, indicating that the C–C bond formation in the cyclodehydrogenation is symmetry forbidden in the LUCOp but symmetry allowed in the HOCOp. Full size image Hole-assisted cyclodehydrogenation mechanism Figure 4c shows the proposed three-state reaction path for the cyclodehydrogenations rationalized by NEB simulations 34 , in correspondence with the observed three terraces in the It–t curves ( Fig. 4b ). Besides the three main reaction states, multiple transition states (TS) and intermediate states (int) are identified based on the NEB simulations in vacuum ( Supplementary Fig. 10 ). In the initial state 1, the neighbouring anthrylene units first rotate around the single C–C bond, allowing two benzyne groups (C6H4) on the same side to form a single C–C bond, giving int1. This step is followed by a [1,3]-sigmatropic H migration to an edge C atom, giving int2. Subsequently, the elimination of an H2 molecule leads to the rearomatization of the system, giving state 2. Likewise, the benzyne groups on the other side repeat the process to form a graphitic lattice in state 3. Figure 4d shows the corresponding reaction energy diagrams for the neutral, two-electron-assisted and two-hole-assisted bianthrylenes. Compared to the neutral case, the total barrier from state 1 to state 2 can be reduced from 4.5 to 2.8 eV by injecting two holes, whereas the barrier remains essentially the same for the electron injection case. In the key step of the C–C bond formation (state 1 to int1), hole injections reduce the barrier from 2.5 to 1.2 eV. Although electron injections can also reduce this barrier, the corresponding int1 would not be stable because the transition from int1 back to state 1 has a zero-energy barrier, similar to the neutral case. Thus, hole injections can significantly facilitate cyclodehydrogenations compared to neutral and electron-injection processes, as observed in the experiment.
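To convey the scale of the computed barrier reduction, the short estimate below applies a simple Arrhenius/Boltzmann factor to the quoted totals at the 670 K annealing temperature given in the Methods. Equal prefactors are assumed, and the script is our illustration, not part of the reported NEB analysis.

```python
# Back-of-envelope estimate: rate enhancement when two injected holes lower
# the total barrier of the rate-limiting step from 4.5 to 2.8 eV.
# Assumes identical Arrhenius prefactors for the two processes.
import math

K_B = 8.617e-5   # Boltzmann constant (eV/K)
T = 670.0        # graphitization annealing temperature from the Methods (K)

def rate_ratio(e_neutral_ev, e_hole_ev, temperature_k):
    """Ratio of hole-assisted to neutral rates for equal prefactors."""
    return math.exp((e_neutral_ev - e_hole_ev) / (K_B * temperature_k))

print(f"enhancement at {T:.0f} K: {rate_ratio(4.5, 2.8, T):.1e}")
# ~6e12: a 1.7 eV reduction makes an otherwise inaccessible step facile.
```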
For the subsequent reaction from state 2 to state 3, the unphysical neutral and electron-injection processes are excluded from the discussion. For the hole injection case, the highest barrier is comparable to that from state 1 to state 2. However, state 3 is stabilized with respect to state 2 by about 0.7 eV, even more than the stabilization between state 2 and state 1 ( ∼ 0.3 eV). These results suggest that state 3 is thermodynamically favoured over state 2, implying that once state 2 is formed, it may be converted to state 3 with ease, showing a cooperative cyclodehydrogenation. Indeed, state 2, which corresponds to terrace 2 in the It–t curve, is rarely detected when state 3 is observed during the tip treatment ( Fig. 4b ). We also calculated the single-charge injection cases and found that the two-hole injection mechanism is more favourable ( Supplementary Fig. 11 ). The hole-assisted cyclodehydrogenations are believed to be associated with inelastic tunnelling at the polymer HOCOp resonance state 35 . Figure 4e shows the simulated charge density distributions of the LUCOp and HOCOp in a polymer, where the different signs, represented by blue and red colours, are taken from the orbital wavefunctions. According to the Woodward–Hoffmann rules for orbital symmetry conservation in pericyclic reactions 36 , 37 , the formation of a C–C bond through electron injections into the LUCOp state is symmetry forbidden (red arrows) owing to the opposite phase relationship of the wavefunctions, while it is symmetry allowed (green arrows) through hole injections into the HOCOp state, as the involved wavefunctions have the same phase relationship. Such a difference in orbital symmetries may be responsible for the different reaction barriers shown in Fig. 4d , especially between state 1 and int1. Interestingly, the hole-assisted cyclodehydrogenations are similar to the well-known Scholl reaction 38 ( Supplementary Fig. 12 ). In organic chemistry, oxidants such as FeCl3 are often used to extract electrons (inject holes) in the Scholl reaction 39 , 40 , by which GNRs have been synthesized in solution 41 , 42 . The ability to control the cyclodehydrogenations at selected molecular sites with an STM tip, even without a catalytic metal substrate or oxidants, provides an opportunity to synthesize freestanding GNRs and to create novel intraribbon heterojunctions from the bottom up. Discussion We have established how the bottom-up synthesis of a graphene nanoribbon can be controlled by charge injections from an STM tip. From our experiments and first-principles calculations, we found that hole injections from an STM tip can trigger a cooperative domino-like cyclodehydrogenation even when the polymers are quasi-freestanding with a suppressed substrate effect. The hole injections greatly reduce the energy barrier in the key step of the C–C bond formation. The H atoms migrate to the edge and dissociate into the vacuum as H2 molecules. The cyclodehydrogenation process can be traced back to the classical Woodward–Hoffmann rules, which show that the formation of a C–C bond is symmetry allowed with hole injections but symmetry forbidden with electron injections owing to the phase mismatch of the wavefunctions, corroborating the experimental observations. As the STM tip treatment can be performed at selected molecular sites without involving a catalytic effect from the metal substrate, the results point to a new way for the bottom-up, controllable synthesis of freestanding GNRs and heterojunctions, which is critical for practical GNR-based nanodevices.
Methods Sample preparation and STM measurements The Au(111) single crystal is cleaned by repeated cycles of argon-ion bombardment and annealing to 740 K. DBBA molecules with a purity of 98.7% are used, which are degassed at 450 K overnight in a Knudsen cell (SVT Associates, Inc.). The molecules are then evaporated from the cell at 485 K for 5 min with an effective coverage of θ > 1, while the Au substrate is held at 470 K. They dehalogenate upon adsorption. The sample is subsequently annealed at 470 and 670 K for 30 min, respectively, to induce colligation/polymerization (at 470 K) and cyclodehydrogenation/graphitization (at 670 K), resulting in polyanthrylene chains on 7-aGNRs. The STM characterizations are performed with a home-built variable-temperature system at 105 K under ultrahigh-vacuum conditions. A cleaned commercial PtIr tip is used. All STM images are acquired in constant-current mode. The dI/dV spectra are recorded using a lock-in amplifier with a sinusoidal modulation (f = 1,000 Hz, Vmod = 20 mV) with the feedback loop turned off. The polarity of the applied voltage refers to the sample bias with respect to the tip. Calculation methods The ab initio calculations are performed with the Quantum Espresso code 43 , using ultrasoft pseudopotentials 44 and the Perdew–Burke–Ernzerhof (PBE) exchange-correlation functional 45 . The PBE0 hybrid exchange-correlation functional is used to correct the band gap 46 . The energy cutoff for the plane-wave basis of the Kohn–Sham wavefunctions is 24 Ry, and that for the charge density is 200 Ry. The structures are relaxed until the forces on the atoms reach a threshold of 0.026 eV Å−1. The adsorption energies of the polymer and the GNR on the metal substrate are calculated using a non-local van der Waals correction 47 . The charge density distributions are obtained as the square of the wavefunctions. The STM images are simulated based on Tersoff’s method 48 . The energy barriers of the reaction are calculated using the NEB method 34 ; the forces on the images are relaxed until they reach a threshold of 0.1 eV Å−1. Data availability The data that support the findings of this study, including the Supplementary Information , are available from the corresponding author A.-P.L. on request. Additional information How to cite this article: Ma, C. et al . Controllable conversion of quasi-freestanding polymer chains to graphene nanoribbons. Nat. Commun. 8, 14815 doi: 10.1038/ncomms14815 (2017). Publisher's note : Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A new way to grow narrow ribbons of graphene, a lightweight, strong material made of single-atom-thick sheets of carbon atoms linked into hexagons, may address a shortcoming that has prevented the material from achieving its full potential in electronic applications. Graphene nanoribbons, mere billionths of a meter wide, exhibit different electronic properties than two-dimensional sheets of the material. "Confinement changes graphene's behavior," said An-Ping Li, a physicist at the Department of Energy's Oak Ridge National Laboratory. Graphene in sheets is an excellent electrical conductor, but narrowing graphene can turn the material into a semiconductor if the ribbons are made with a specific edge shape. Previous efforts to make graphene nanoribbons employed a metal substrate that hindered the ribbons' useful electronic properties. Now, scientists at ORNL and North Carolina State University report in the journal Nature Communications that they are the first to grow graphene nanoribbons without a metal substrate. Instead, they injected charge carriers that promote a chemical reaction converting a polymer precursor into a graphene nanoribbon. At selected sites, this new technique can create interfaces between materials with different electronic properties. Such interfaces are the basis of semiconductor electronic devices, from integrated circuits and transistors to light-emitting diodes and solar cells. "Graphene is wonderful, but it has limits," said Li. "In wide sheets, it doesn't have an energy gap—an energy range in a solid where no electronic states can exist. That means you cannot turn it on or off." When a voltage is applied to a sheet of graphene in a device, electrons flow freely as they do in metals, severely limiting graphene's application in digital electronics. "When graphene becomes very narrow, it creates an energy gap," Li said. "The narrower the ribbon is, the wider is the energy gap." A graphene nanoribbon is born. A scanning tunneling microscope injects charge carriers called "holes" into a polymer precursor, triggering a reaction called cyclodehydrogenation at that site, creating a specific place at which a freestanding graphene nanoribbon forms from the bottom up. Credit: Oak Ridge National Laboratory, U.S. Dept. of Energy In very narrow graphene nanoribbons, with a width of a nanometer or even less, how the structure terminates at the edge of the ribbon is also important. For example, cutting graphene along the side of a hexagon creates an edge that resembles an armchair; this material can act like a semiconductor. Excising triangles from graphene creates a zigzag edge—and a material with metallic behavior. To grow graphene nanoribbons with controlled width and edge structure from polymer precursors, previous researchers had used a metal substrate to catalyze the chemical reaction. However, the metal substrate suppresses useful edge states and shrinks the desired band gap. Li and colleagues set out to get rid of this troublesome metal substrate. At the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL, they used the tip of a scanning tunneling microscope to inject either negative charge carriers (electrons) or positive charge carriers ("holes") to try to trigger the key chemical reaction. They discovered that only holes triggered it. They were subsequently able to make a ribbon only seven carbon atoms wide—less than one nanometer—with edges in the armchair conformation.
"We figured out the fundamental mechanism, that is, how charge injection can lower the reaction barrier to promote this chemical reaction," Li said. Moving the tip along the polymer chain, the researchers could select where they triggered this reaction and convert one hexagon of the graphene lattice at a time. Next, the researchers will make heterojunctions with different precursor molecules and explore functionalities. They are also eager to see how long electrons can travel in these ribbons before scattering, and will compare it with a graphene nanoribbon made another way and known to conduct electrons extremely well. Using electrons like photons could provide the basis for a new electronic device that could carry current with virtually no resistance, even at room temperature. "It's a way to tailor physical properties for energy applications," Li said. "This is an excellent example of direct writing. You can direct the transformation process at the molecular or atomic level." Plus, the process could be scaled up and automated. The title of the current paper is "Controllable conversion of quasi-freestanding polymer chains to graphene nanoribbons."
10.1038/ncomms14815
Physics
Data storage: Measuring the downside of downsizing
Zhao, J. M., Zhang, M. S., Yang, M. C. & Ji, R. Ellipsometric measurement accuracy of ultrathin lubricant thickness on magnetic head slider. Microsystem Technologies 18, 1283–1288 (2012). dx.doi.org/10.1007/s00542-012-1519-8
http://dx.doi.org/10.1007/s00542-012-1519-8
https://phys.org/news/2013-07-storage-downside-downsizing.html
Abstract The quantitative analysis of lubricant transferred from disk to slider is important for understanding the interaction in the head-disk interface and for designing a stable head-disk system. When applying ellipsometric technology to determine the lubricant thickness on a slider, the measurement accuracy is of concern owing to the location-to-location variations of the slider optical constants. This paper presents a systematic and quantitative study of how the variations of slider optical constants affect the measurement accuracy of lubricant thickness. In this study, the distribution of slider optical constants was obtained; a differential method was used to calculate the uncertainty in lubricant thickness, and the calculated results were experimentally verified. The results show that, for state-of-the-art sliders, the uncertainty in lubricant thickness is about 20% for thicknesses below 2 nm and less than 15% for thicknesses around 3 nm when measured at 632.8 nm wavelength. The results of this study might also be useful for other optical instruments used to determine the amount of transferred lubricant. 1 Introduction The continuing pursuit of higher magnetic recording areal density in hard disk drives (HDDs) demands smaller head-media spacing (HMS) (Ambeka and Bogy 2008 ). Lubricant transfer from the disk surface to a flying magnetic head slider ("slider" for short) increases with decreasing slider flying height. Transferred lubricant will affect the HMS and deteriorate the slider flying stability, causing overall HDD reliability issues. The quantitative analysis of transferred lubricant is important in helping to understand the physical mechanism of the interaction between lubricant and slider and to design a stable head-disk interface system. Several nanometrological techniques have been applied to determine the lubricant amount or thickness on sliders (Kim et al. 2009 ; Leavitt 1992 ; Yanagisawa et al. 2010 ; Chiba and Ogata 2008 ; Tani et al. 2011 ). These include time-of-flight secondary ion mass spectrometry (TOF–SIMS), X-ray photoelectron spectroscopy (XPS), the optical surface analyzer (OSA), interferometry and spectroscopic ellipsometry (SE). Of these techniques, TOF–SIMS, XPS and SE have spot sizes small enough to directly measure lubricant thickness on a slider. TOF–SIMS and XPS must be performed under ultra-high-vacuum conditions, which might cause evaporation and/or redistribution of the transferred lubricant, whereas SE enables fast and non-destructive measurement under ambient conditions. However, when SE is applied to determine the lubricant thickness on a slider, the measurement accuracy is of concern owing to the location-to-location variations of the slider optical constants (refractive index n and extinction coefficient k). SE is not a direct measurement technique: the optical constants and thickness of a given sample cannot be measured directly with light. Instead, SE measures the change of polarization in light reflected from or transmitted through the sample. The polarization change is described by an amplitude ratio tanΨ and a phase difference Δ between light oriented in the p- and s-directions relative to the plane of incidence. These measured data must be modeled in order to determine the sample properties of interest (the optical constants or thickness of the film).
The model-generated data are then compared to the measured data while the sample properties are varied. Through regression analysis, the unknown sample properties whose modelled response best matches the measured data are found. Generally, an optical model consists of the optical constants of both the substrate and the sample layers, as well as the thicknesses of the sample layers, in which the substrate is treated as a special type of layer having 1 mm optical thickness and accurately known optical constants. In SE measurement, the accuracy of the optical model usually contributes significantly to the overall measurement accuracy. In other words, if the optical model is not accurate, then the model-fit result is poor and might even be completely wrong, even though the original measured data are accurate and error-free. To ensure model accuracy, the substrate optical constants used in the optical model should be the same as the measured ones for a given sample. Where the substrate optical constants are not uniformly distributed, location-to-location measurement should be applied; namely, the same spot on the substrate is measured and analysed before and after film deposition. Slider material is a two-phase composite consisting of Al2O3 and TiC grains. The TiC grains are of random size, shape and separation on the order of micrometres, which makes the slider optical constants vary with the measured location (Yuan et al. 2008 ). As stated above, location-to-location measurement should be carried out in this case. However, positioning errors induced by the cycle of loading/unloading the slider make it very difficult to return exactly to the original location before and after lubricant transfer; as a result, the slider optical constants in the optical model might not be the same as the measured ones, leading to an inaccurate determination of lubricant thickness. There has been no systematic and quantitative study of how the variation of slider optical constants affects the accuracy of the measured thickness. This paper presents such a study. First, the statistical distribution of slider optical constants was obtained. Then a differential method was described and used to calculate the uncertainty (%) in lubricant thickness induced mainly by the variation of slider optical constants. The calculated results were finally verified experimentally. The results of this study might also be useful for other optical instruments, such as interferometry and reflectometry, when they are used to determine the amount of transferred lubricant (Tani et al. 2011 ). 2 Slider optical constants Normally, sliders are coated with a ~0.5-nm-thick amorphous-Si adhesive layer and then a ~1.5-nm-thick diamond-like carbon (DLC) layer. We treat these as part of the slider substrate and use a combined pseudo-substrate approximation to simulate the DLC/a-Si/AlTiC structure. Ellipsometry is the only technology that can uniquely determine the optical constants of the slider substrate: direct inversion of the measured ellipsometric Ψ and Δ provides the slider optical constants at each measurement wavelength. In this study, ellipsometry measurements were carried out over the spectral range of 360–800 nm using an M-2000VF instrument (J. A. Woollam Co., Inc. 2000 ) at an incident angle of 65° with a 25 μm by 60 μm spot size. In total, 400 spots on slider row bars were measured.
Figure 1 plots the n and k values of the 400 test spots at 632.8 nm wavelength. There is scatter in both n and k: n ranges between 2.218 and 2.297, whereas k lies between 0.404 and 0.466. The average n and k values of the 400 spots are 2.256 and 0.428, with corresponding standard deviations of 0.015 and 0.011 (giving \( \delta \tilde{n} = \sqrt{\delta n^{2} + \delta k^{2}} \approx 0.019 \)), respectively. Figure 2 plots the n and k dispersions of two different spots. It shows that the slider optical constants vary with both the measured location and the wavelength; the variation in n is larger than that in k for the type of slider under test. Fig. 1 Scatter diagram of n and k values of 400 spots measured by ellipsometry at λ = 632.8 nm Full size image Fig. 2 Dispersion of n and k values over the spectral range of 375–725 nm for two different spots Full size image 3 Analysis of uncertainty in lubricant thickness 3.1 Calculation of uncertainty in lubricant thickness In this study, the uncertainty in lubricant thickness was defined as the relative error (Δt/t) of the measured thickness (t), written as a percentage, while the uncertainty in slider optical constants refers to the standard deviation of the slider optical constant distribution. For an ambient-lubricant-slider substrate structure as shown in Fig. 3 , the ellipsometric parameters Ψ and Δ are expressed by $$ \tan\Psi \, e^{j\Delta} = \frac{\tilde{r}_{p1} + \tilde{r}_{p2} e^{-j\Gamma}}{1 + \tilde{r}_{p1}\tilde{r}_{p2} e^{-j\Gamma}} \times \frac{1 + \tilde{r}_{s1}\tilde{r}_{s2} e^{-j\Gamma}}{\tilde{r}_{s1} + \tilde{r}_{s2} e^{-j\Gamma}}. $$ (1) Fig. 3 Schematic diagram of the ambient-lubricant-slider geometry Full size image Here, \( \tilde{r}_{p1} \), \( \tilde{r}_{p2} \) and \( \tilde{r}_{s1} \), \( \tilde{r}_{s2} \) are the Fresnel reflection coefficients at the ambient-lubricant interface (subscript 1) and the lubricant-slider interface (subscript 2) for the p- and s-polarizations, respectively, and Γ is the phase change induced by the film thickness as the reflected light traverses the film. They are given by $$ r_{p1} = \frac{n_{1}^{2}\cos\theta - n_{0}(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}}{n_{1}^{2}\cos\theta + n_{0}(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}} $$ (2) $$ r_{p2} = \frac{n_{2}^{2}(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2} - n_{1}^{2}(n_{2}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}}{n_{2}^{2}(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2} + n_{1}^{2}(n_{2}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}} $$ (3) $$ r_{s1} = \frac{n_{0}\cos\theta - (n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}}{n_{0}\cos\theta + (n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}} $$ (4) $$ r_{s2} = \frac{(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2} - (n_{2}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}}{(n_{1}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2} + (n_{2}^{2} - n_{0}^{2}\sin^{2}\theta)^{1/2}} $$ (5) $$ \Gamma = (4\pi t/\lambda)(\tilde{n}_{1}^{2} - \tilde{n}_{0}^{2}\sin^{2}\theta)^{1/2}. $$ (6) Here \( \tilde{n}_{0} \), \( \tilde{n}_{1} \) and \( \tilde{n}_{2} \) are the optical constants of the ambient, the lubricant film and the substrate, respectively, θ is the incident angle, λ is the optical wavelength and t is the film thickness (J. A. Woollam Co., Inc. 2000 ; Tompkins and McGahan 1999 ).
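For readers who wish to evaluate Eqs. (1)-(6) numerically, the following sketch implements the three-phase (ambient-film-substrate) forward model in Python. The function name psi_delta is ours, the n - jk sign convention is our choice (consistent with the exp(-jΓ) phase factor), and the example values (n1 = 1.3 for the lubricant, the average slider constants from Sect. 2) are taken from the text; the code is an illustrative reimplementation, not the software used in the study.

```python
# Sketch of the three-phase ellipsometry model of Eqs. (1)-(6):
# ambient (n0) / lubricant film (n1, thickness t) / slider substrate (n2).
# Complex refractive indices are written as n - jk, consistent with exp(-jGamma).
import numpy as np

def psi_delta(n0, n1, n2, t_nm, wavelength_nm, theta_deg):
    th = np.deg2rad(theta_deg)
    s2 = (n0 * np.sin(th)) ** 2                       # n0^2 sin^2(theta)
    q1, q2 = (np.sqrt(n ** 2 - s2 + 0j) for n in (n1, n2))
    # Fresnel coefficients, Eqs. (2)-(5)
    rp1 = (n1**2 * np.cos(th) - n0 * q1) / (n1**2 * np.cos(th) + n0 * q1)
    rp2 = (n2**2 * q1 - n1**2 * q2) / (n2**2 * q1 + n1**2 * q2)
    rs1 = (n0 * np.cos(th) - q1) / (n0 * np.cos(th) + q1)
    rs2 = (q1 - q2) / (q1 + q2)
    gamma = 4.0 * np.pi * t_nm / wavelength_nm * q1   # Eq. (6), same length units
    rp = (rp1 + rp2 * np.exp(-1j * gamma)) / (1 + rp1 * rp2 * np.exp(-1j * gamma))
    rs = (rs1 + rs2 * np.exp(-1j * gamma)) / (1 + rs1 * rs2 * np.exp(-1j * gamma))
    rho = rp / rs                                     # tan(Psi) exp(j Delta), Eq. (1)
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

# Example: a 2 nm lubricant film (n1 = 1.3) on an average slider at 632.8 nm, 65 deg
psi, delta = psi_delta(1.0, 1.3, 2.256 - 0.428j, 2.0, 632.8, 65.0)
```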
When the lubricant film is very thin, the film optical constants are taken to be constant and their uncertainty is negligible; only the film thickness is to be determined. For this reason, the ellipsometric parameters Ψ and Δ can be written as functions of the slider optical constants \( \tilde{n}_{2} \), the incident angle θ and the film thickness t: $$ \Psi = \Psi(\tilde{n}_{2}, \theta, t) $$ (7) $$ \Delta = \Delta(\tilde{n}_{2}, \theta, t). $$ (8) If \( \tilde{n}_{2} \) is known, t can be determined from the measured Ψ and Δ with either Eq. ( 7 ) or ( 8 ). Furthermore, by expanding these equations in a Taylor series and keeping the first-order terms, the effect of small experimental uncertainties in Ψ, Δ and θ and of the structural uncertainty in \( \tilde{n}_{2} \) on the uncertainty in the thickness t can be calculated from $$ (\delta t)_{RMS} = \left[(A\,\delta\Psi)^{2} + (B\,\delta\theta)^{2} + (C\,\delta\tilde{n}_{2})^{2}\right]^{1/2} $$ (9) $$ (\delta t)_{RMS} = \left[(D\,\delta\Delta)^{2} + (E\,\delta\theta)^{2} + (F\,\delta\tilde{n}_{2})^{2}\right]^{1/2}. $$ (10) Here, (δt)RMS is the root-mean-square uncertainty in t, and δΨ, δΔ, δθ and \( \delta\tilde{n}_{2} \) are the given uncertainties in Ψ, Δ, θ and \( \tilde{n}_{2} \), treated as independent random deviations from their corresponding values; \( \tilde{n}_{2} \) is the measured value of the slider optical constants. As Eqs. ( 9 ) and ( 10 ) give two independent determinations of δt, the smaller of the two calculated values was used. The coefficients A through F are shown in Table 1 . Table 1 The coefficients A through F Full size table 3.2 Analysis and discussion We first used the above method to calculate the effect of wavelength on the uncertainty in lubricant thickness, given the wavelength-dependent nature of the slider optical constants. During the calculation, we found that \( \delta\tilde{n}_{2} \) was more than tenfold larger than δΨ, δΔ and δθ; it contributed the most to the overall uncertainty in lubricant thickness. Figure 4 plots the uncertainty in lubricant thickness as a function of wavelength for various lubricant thicknesses under the conditions given in the caption. The uncertainty follows a similar trend over the spectral region for all thicknesses: it increases rapidly in the spectral region of 380–520 nm, grows slowly from 520 nm and then decreases slightly from 580 nm. The uncertainty can be greatly reduced if a short wavelength is used for thicknesses below 2 nm; for example, it falls from 35 to 20% as the wavelength decreases from 520 to 400 nm for a 1 nm thickness. Furthermore, the uncertainty in lubricant thickness depends strongly on the thickness itself, decreasing significantly as the film becomes thicker; for example, at 520 nm wavelength, the uncertainty drops from 35 to 17% as the thickness increases from 1 to 2 nm, and it falls to <5% when the thickness reaches 5 nm. Fig. 4 Thickness uncertainty as a function of wavelength for the case of δΨ = δθ = 0.01°, θ = 65°, \( \tilde{n}_{0} \) = 1, \( \tilde{n}_{1} \) = 1.3 and \( \delta\tilde{n}_{2} \) = 0.02 Full size image
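The differential method of Eqs. (9) and (10) can be reproduced numerically by approximating the coefficients A-F with finite differences of the forward model; the sketch below does this for the Ψ branch, Eq. (9), reusing the psi_delta function from the previous snippet. Perturbing ñ2 along the real axis is our simplification of the treatment of δñ2, so the numbers it produces are indicative only.

```python
# Sketch of Eq. (9): propagate uncertainties in Psi, theta and the slider
# optical constants n2 into the lubricant thickness t via central differences.
# The implicit-function theorem converts dPsi/dx sensitivities into dt/dx.
import numpy as np

def dt_rms_psi(t, n0, n1, n2, wl, theta, d_psi, d_theta, d_n2, h=1e-4):
    psi = lambda tt, th, nn2: psi_delta(n0, n1, nn2, tt, wl, th)[0]
    dpsi_dt  = (psi(t + h, theta, n2) - psi(t - h, theta, n2)) / (2 * h)
    dpsi_dth = (psi(t, theta + h, n2) - psi(t, theta - h, n2)) / (2 * h)
    dpsi_dn2 = (psi(t, theta, n2 + h) - psi(t, theta, n2 - h)) / (2 * h)
    A = 1.0 / dpsi_dt          # dt/dPsi       (coefficient A of Table 1)
    B = -dpsi_dth / dpsi_dt    # dt/dtheta     (coefficient B)
    C = -dpsi_dn2 / dpsi_dt    # dt/dn2        (coefficient C, real-axis only)
    return np.sqrt((A * d_psi) ** 2 + (B * d_theta) ** 2 + (C * d_n2) ** 2)

# Example matching the text's conditions at 632.8 nm for a 2 nm film:
dt = dt_rms_psi(2.0, 1.0, 1.3, 2.256 - 0.428j, 632.8, 65.0,
                d_psi=0.01, d_theta=0.01, d_n2=0.019)
print(f"relative uncertainty for a 2 nm film: {100 * dt / 2.0:.0f}%")
```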
In a similar manner, we also calculated the uncertainty in lubricant thickness as a function of the uncertainty in slider optical constants. Figure 5 displays the uncertainty in thickness versus the uncertainty in slider optical constants for various lubricant thicknesses at λ = 632.8 nm. The uncertainty in thickness is approximately linearly proportional to the uncertainty in slider optical constants for all thicknesses: the larger the uncertainty in slider optical constants, the larger the uncertainty in lubricant thickness. This is very pronounced for thicknesses below 2 nm; for example, for a 1 nm thickness the uncertainty in thickness drops by 17%, whereas it dips by only 1.7% for a 5 nm thickness, when the uncertainty in slider optical constants decreases from 0.025 to 0.015. On the other hand, Fig. 5 also reveals the same phenomenon as observed in Fig. 4 , namely that the effect of the uncertainty in slider optical constants diminishes with increasing lubricant thickness; for example, a ~3 nm lubricant film on a slider with an uncertainty of ~0.02 in its optical constants could be measured with more than 85% accuracy. Fig. 5 Thickness uncertainty as a function of the uncertainty in slider optical constants for the case of δΨ = δΔ = δθ = 0.01°, θ = 65°, \( \tilde{n}_{0} \) = 1, \( \tilde{n}_{1} \) = 1.3 and λ = 632.8 nm Full size image 4 Experimental study To verify the above theoretical analysis, we carried out ellipsometry and XPS measurements. As a non-optical metrology tool, XPS is widely used for accurately measuring lubricant thickness and also for calibrating ellipsometry (Leavitt 1992 ). In this study, it provided the assumed true thickness against which the uncertainty in the ellipsometric thickness was evaluated. 4.1 Samples The sliders used in the experimental study were from the same batch as those used for the mapping of slider optical constants in Sect. 2 . To facilitate verification, two uniform lubricant films were prepared by the dip-coating technique: two slider row bars under test were immersed in a 0.1% solution of Z-DOL4000 in Vertrel XF and pulled out at different speeds. The dwell time was approximately 7 min for each dip-coating process. 4.2 Ellipsometry measurement The ellipsometric measurement was carried out using the same experimental setup as in Sect. 2 . Before and after lubricant deposition, the ellipsometric Ψ and Δ were acquired from the same 15 marked locations along each row bar. With the optical model shown in Fig. 6 , the lubricant thickness was calculated through the model fit to the measured Ψ and Δ. Figure 6 illustrates a typical best-fit result for a ~2.5 nm lubricant film on a slider row bar. Fig. 6 Generated and experimental Ψ and Δ data from a lubricant film on a slider row bar. Generated data were calculated from the best-fit Cauchy model Full size image 4.3 XPS measurement Shortly after the ellipsometric measurement, XPS data were obtained from 5 spots along each sample surface using a PHI Quantera SXM Scanning X-ray Microprobe with a monochromatic Al Kα source. The system was operated at 15 kV, 25 W and a pass energy of 55 eV. The lubricant thickness was calculated from the ratio of the C1s peaks from the lubricant (C–F peak) and the slider (C–C/C–H peak), after correcting for the overlap of the C–C/C–H peak from the lubricant. The resulting average lubricant thicknesses of the two samples were 1.89 and 2.97 nm, with standard deviations of 0.09 and 0.08 nm, respectively. Figure 7 displays the measured XPS data for one of them. Fig. 7 XPS spectrum of the C1s photoemission peaks for 5 spots on a slider row bar, shown for the sample with a 1.89 nm average lubricant film thickness Full size image 4.4 Analysis and discussion Figure 8 plots the thickness of each individual spot measured by ellipsometry against the corresponding average thickness obtained by XPS. Fig. 8 Comparison of XPS and ellipsometric measurements of lubricant thickness on the slider Full size image The uncertainty in the measured thickness was calculated by $$ \delta t(\%) = \frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(t_{i\_ellip} - \bar{t}_{xps}\right)^{2}}}{\bar{t}_{xps}} \times 100\,\% $$ (11) Here, n is the total number of spots measured by ellipsometry, \( t_{i\_ellip} \) is the ith film thickness measured by ellipsometry, and \( \bar{t}_{xps} \) is the average film thickness obtained by XPS.
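Equation (11) is simply the relative root-mean-square deviation of the ellipsometric thicknesses about the XPS average, as the minimal sketch below shows; the numbers in the usage line are placeholders, since the fifteen per-spot values are not tabulated here.

```python
# Minimal sketch of Eq. (11): relative RMS deviation of ellipsometric
# thicknesses from the XPS average, expressed as a percentage.
import numpy as np

def uncertainty_percent(t_ellip, t_xps_mean):
    t = np.asarray(t_ellip, dtype=float)
    rms = np.sqrt(np.mean((t - t_xps_mean) ** 2))
    return 100.0 * rms / t_xps_mean

# Placeholder values for illustration only:
print(f"{uncertainty_percent([1.5, 2.2, 1.8, 2.4, 1.6], 1.89):.1f}%")
```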
It is clear from Fig. 8 that the thicknesses measured by ellipsometry are randomly distributed around those measured by XPS, with different deviations. This is attributed to the location-to-location variations in slider optical constants. It can also be seen that the uncertainty in the experimental thickness decreases with increasing thickness, in good agreement with the theoretical result shown in Fig. 5 . Furthermore, the experimentally measured uncertainty in lubricant thickness is 23.9% for the 1.89 nm thickness and 13.9% for the 2.97 nm thickness. As mentioned before, the slider row bars under study had an uncertainty in optical constants of ~0.019; correspondingly, the theoretically calculated uncertainty is 15.2% for a 1.9 nm thickness and 7.3% for a 3.0 nm thickness. The overall experimental values are larger than the theoretical values. This is reasonable because, in addition to random measurement errors, the optical model introduced further systematic errors, such as neglecting the roughness of the slider surface. 5 Conclusion The measurement accuracy of lubricant thickness on the slider was investigated theoretically and experimentally. Based on those results, we conclude: 1. The uncertainty in lubricant thickness is approximately proportional to the uncertainty in slider optical constants. This is very pronounced for thicknesses below 2 nm, so controlling the variation of slider optical constants is critical for ellipsometric measurement. 2. The uncertainty in lubricant thickness also depends on the measurement wavelength: within the spectral range of 380–640 nm it reaches a maximum at around 580 nm and a minimum at around 400 nm, so using a short wavelength can improve the measurement accuracy. 3. The uncertainty in lubricant thickness is related to the thickness itself. For state-of-the-art sliders, it falls from 35 to 5% as the thickness increases from 1 to 5 nm at 632.8 nm wavelength, and it is <15% for a thickness of ~3 nm. 4. The theoretical results are in agreement with the experimental results.
To keep pace with the rapidly growing consumer demand for data storage, hardware engineers are striving to cram as much electronic information into as small a space as possible. Jinmin Zhao, Mingsheng Zhang and co‐workers at the A*STAR Data Storage Institute, Singapore, have now devised a technique to assess the impact of making these devices more compact. Insights resulting from this work will guide the future design of stable disk drives. The primary components of a hard disk drive are a rotating disk coated with a thin film of magnetic material and a magnetic head on a moving arm, also called a slider (see image). The slider includes magnetic write/read elements that can encode a single bit of binary information by altering the properties of the thin film at a small spot on the surface. A smaller spot enables a higher density of data storage. Current technology is rapidly approaching one trillion bits per square inch, but this requires the separation between the head and disk to be less than 2 nanometers. This narrow requirement, however, creates its own problems. Lubricant used on the surface of the disk to protect it from corrosion can attach to the slider, which adversely affects the reliability of the hard disk drive. "We have carried out a systematic and quantitative study on how the variation of slider optical properties affects the accuracy of the measured lubricant thickness on the slider surface," says Zhang. Zhao, Zhang and their co-workers analyzed a lubricant-coated slider using a technique known as spectroscopic ellipsometry. Measuring the intensity of light reflected from a sample slider provided a highly accurate estimate of the thickness of the lubricant film. Ellipsometry is a fast and non-destructive technique that, unlike some of the alternative approaches, does not require ultra-high vacuum conditions. This technique, however, does require accurate knowledge of the optical properties of the slider. A typical slider is made of aluminum oxide and grains of titanium carbide of many different shapes and sizes; thus, its optical properties vary from position to position. Zhao and the team's study demonstrated that the uncertainty in lubricant thickness is approximately proportional to the uncertainty in the slider's optical constants, and it becomes particularly pronounced for thicknesses below 2 nanometers. "This lubricant transfer will be more serious in future heat-assisted magnetic recording," explains Zhang. "The next step in this research will focus on how to reduce the lubricant transfer, especially in this type of device."
dx.doi.org/10.1007/s00542-012-1519-8
Biology
Invasive insects—an underestimated cost to the world economy
Massive yet grossly underestimated global costs of invasive insects. Nature Communications. 4 october 2016. DOI: 10.1038/ncomms12986 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms12986
https://phys.org/news/2016-10-invasive-insectsan-underestimated-world-economy.html
Abstract Insects have presented human society with some of its greatest development challenges by spreading diseases, consuming crops and damaging infrastructure. Despite the massive human and financial toll of invasive insects, cost estimates of their impacts remain sporadic, spatially incomplete and of questionable quality. Here we compile a comprehensive database of economic costs of invasive insects. Taking all reported goods and services estimates, invasive insects cost a minimum of US$70.0 billion per year globally, while associated health costs exceed US$6.9 billion per year. Total costs rise as the number of estimates increases, although many of the worst costs have already been estimated (especially those related to human health). A lack of dedicated studies, especially for reproducible goods and services estimates, implies gross underestimation of global costs. Global warming as a consequence of climate change, rising human population densities and intensifying international trade will allow these costly insects to spread into new areas, but substantial savings could be achieved by increasing surveillance, containment and public awareness. Introduction For millennia, insects have been responsible for spreading devastating infectious diseases in both humans 1 and livestock 2 , ravaging crops and food stocks 3 , damaging forests 4 , destroying infrastructure 5 , altering ecosystem functions 6 and weakening the resilience of ecosystems to other disturbances 7 . This single invertebrate class ( ∼ 2.5 million species 8 ) is therefore probably the costliest animal group to human society. A global challenge this century will be meeting the world’s food requirements while maintaining economic productivity and conserving biodiversity. Globally, insect pests have been reported to reduce agricultural yields by 10–16% before harvest, and to consume a similar amount following harvest 9 . In fact, the largest food-producing countries, China and the United States, exhibit the highest potential losses from invasive insects 10 . Several other insect pests defoliate trees 4 and degrade plant biodiversity, threaten commercial forestry and hamper climate change mitigation via increased tree mortality and the associated increases in greenhouse-gas emissions 11 . Many other insects are nuisance species or disease vectors that directly erode public health—from the Seventeenth to Twentieth centuries, insect-borne diseases caused more human disease and death than all other causes combined 12 . Insects are also among the most pervasive of invasive species; for example, 87% of the ∼ 2,500 non-native terrestrial invertebrates in Europe are insects 13 . Yet reliable estimates of their impacts are difficult to obtain, in particular for economic assessments. Most cost estimates are disparate, regionally focused, cover variable periods and are not always grounded in verifiable data (see Methods). The types of costs also vary and include both direct and indirect components ( Fig. 1 ). Consequently, extrapolating local costs to global scales is challenging, and few have attempted to overcome the many inherent flaws in this approach. Figure 1: Market and non-market cost categories associated with invasive insect damages.
Costs are subdivided into ‘goods and services’ (yellow) and ‘human health’ (red), ‘regulating services’ ( sensu non-commercial, but potentially monetizable, such as carbon regulation and pollination not otherwise quantified in agricultural yield estimates; blue) and ‘ecological’ costs (not typically monetizable; green). Owing mainly to a lack of monetary estimates, we could not compile costs for the categories and subcategories coloured in grey. The inner circle (darkest colours) encapsulates costs associated with prevention; the middle circle (mid-range colours) includes costs associated with damage from invasive insects; the outer circle (lightest colours) covers costs associated with responses or follow-up to invasive insect incursions. The outermost purple arrow indicates the general increase in our ability to estimate monetizable costs, and the direct relevance to human commerce and well being. DALY, disability-adjusted life year (lifespan lost because of burden of insect-borne disease; not assessed). Full size image Reliable global cost summaries therefore remain a major challenge. Indeed, there are currently only 86 insect species listed in the International Union for Conservation of Nature (IUCN) Global Invasive Species Database 14 , and of those there are no cost estimates for 81.4%, while 12.8% of them have insufficient (unsourced) estimates. We therefore compiled the most comprehensive database of economic costs for invasive insects available to date (737 screened articles, chapters and reports), standardizing historical estimates as annual 2014 US dollars (US$; Methods). We determined the reproducibility of each study’s cost estimates by identifying the source of all values used to extrapolate regional costs. When values were based on actual measures as opposed to non-sourced estimates and had a clear methodology provided, we deemed the resulting costs ‘reproducible’ (although we did not assess quality per se because of a lack of standard, objective criteria to assess the accuracy of published estimates; Methods). We categorized studies that did not meet these criteria as ‘irreproducible’. We further divided all costs into two main categories: ‘goods and services’ (including production of agricultural and forestry goods, and cultural services; Fig. 1 ) and ‘human health’, further splitting the former into agriculture, forestry, infrastructure, mixed or urban categories, and the latter into seven disease categories (Methods). Taking all reported goods and services estimates, and avoiding the extrapolation of limited data, invasive insects cost a minimum of US$70.0 billion per year globally, while associated health costs exceed US$6.9 billion per year. Total costs rise as the number of estimates increases; therefore, the true costs of invasive insects to human society are substantially larger (but by a currently unquantifiable amount) than we report here. Further, future costs are likely to increase as invasive insects expand their ranges in response to climate change, as well as to increasing human movements and international trade. Results Goods and services We determined that invasive insects cost a minimum of US$70.0 billion per year globally for goods and services, of which US$25.2 billion per year comes from reproducible studies ( Fig. 2 and Supplementary Data 1 ). There was no temporal pattern in annual cost rates ( Supplementary Fig. 
1 ), and most estimates were direct measures (although estimated costs were higher for extrapolated costs; see ‘Expenditure types and targets’ in the Supplementary Methods and Supplementary Fig. 2 ). Regionally, North America reported the highest annual costs (>US$27.3 billion), followed by Europe (US$3.6 billion per year; Fig. 2a,b ), although this is likely more a function of the intensity of research effort (see ‘Research effort’ below) than a true reflection of relative regional costs. The 10 costliest species change little whether all or only reproducible estimates are included ( Fig. 2e,f ). Figure 2: Goods and services costs associated with invasive insects. Direct goods and services costs are categorized by major region ( a , b ), type ( c , d ) and the 10 costliest insects ( e , f ). The first column includes all estimates regardless of reproducibility ( a , c , e ), whereas the second only includes costs for which estimates can be verified (‘reproducible’; b , d , f ). All costs are expressed as annual 2014 US dollars. Bracketed numbers in the x-axis labels indicate the number of estimates per category. Full size image According to a single study 5 , the most expensive insect is purportedly the Formosan subterranean termite Coptotermes formosanus , estimated at >US$30.2 billion per year globally ( Fig. 2e ). However, that irreproducible estimate is based on a single non-sourced value of US$2.2 billion per year for the United States of America, a personal communication supporting a 1:4 ratio of control:repair costs in a single US city (New Orleans) and an unvalidated assumption that the US costs represent 50% of the global total 5 . A more realistic ranking based on the reproducible estimates only ( Fig. 2f ) places the diamondback moth Plutella xylostella as the most expensive (US$4.6 billion per year) 15 . Other costly insects include the brown spruce longhorn beetle Tetropium fuscum (US$4.5 billion per year in Canada), the gypsy moth Lymantria dispar (US$3.2 billion per year in North America) and the Asian long-horned beetle Anoplophora glabripennis (US$3.0 billion per year in North America and Europe; Fig. 2f ). Human health Global health costs directly attributable to invasive insects exceed US$6.9 billion per year ( Fig. 3 ); however, these exclude malaria costs because that disease is not due to the invasion of an insect vector throughout most of its distribution (although malaria cases ‘imported’ into non-endemic areas do incur treatment and prophylaxis costs 16 ). Our summary also excludes the economic impacts on productivity, income, tourism, the blood-supply system, personal protection and quality of life ( Supplementary Note 1 ), as well as historical epidemics of yellow fever and dengue, because no relevant cost estimates exist (Methods). Most health-related estimates are a combination of direct and indirect costs (79% and 93% for all estimates and reproducible-only estimates, respectively; see Supplementary Note 1 ), represent actual estimates as opposed to extrapolations or model predictions (66% and 77%, respectively) and are primarily related to medical care (75% and 88%, respectively; see ‘Expenditure types and targets’ in the Supplementary Methods and Supplementary Figs 3 and 4 ). Dengue (from a virus transmitted by Aedes albopictus and Ae. aegypti ) costs represent 84% of total health costs, followed by 15% for West Nile virus transmitted by Culex spp. ( Fig. 3c,d ).
Asia (US$2.84 billion), North America (US$2.06 billion) and Central/South America (US$1.85 billion) recorded the highest annual health costs ( Fig. 3a,b ). Figure 3: Human health costs associated with invasive insects. Direct human health costs are categorized by major region ( a , b ) and disease ( c , d ). The first column includes all estimates regardless of reproducibility ( a , c ), whereas the second only includes costs for which estimates can be verified (‘reproducible’; b , d ). All costs are expressed as annual 2014 US dollars. Bracketed numbers in the x-axis labels indicate the number of estimates per category. Full size image Research effort The regional summaries for both goods and services and health costs mask a strong positive relationship between total costs and the number of individual estimates (see ‘Sampling bias’ in the Supplementary Methods and Supplementary Fig. 5 ). Across regions, goods and services costs increase by 10 times for each additional 5.5 (reproducible-only) or 13.0 (all) estimates ( Supplementary Fig. 5a,b ). This strong positive relationship remains when expressed across species ( Supplementary Fig. 6 ), but is necessarily more variable, given that most species have only one cost estimate each. The same type of relationship also exists for health costs, with total costs increasing by 10 times for each additional 18.5–19.1 estimates ( Supplementary Fig. 6c,d ). This regional bias in sampling corroborates the established phenomenon of a spatial mismatch between invader impacts on threatened species and research publications 17 , suggesting that large additional costs due to invasive insects remain to be estimated in lesser-sampled regions of the world, and reinforcing our hypothesis that the total costs have been grossly underestimated. Cumulative costs Given that the regions to which these sums apply do not have the same spatial area, have different climates, have important crop and infrastructure differences, and are likely to experience different insect invasion and detection probabilities, extrapolating regional costs to correct for potential undersampling is dubious. We therefore expressed total costs and the number of associated estimates as temporally cumulative values to identify possible thresholds within the sampled regions and categories (see ‘Sampling bias’ in the Supplementary Methods and Fig. 4 ). For both global goods and services and human health costs, there was evidence for an asymptote among the sampled species based on fitted logistic models ( Fig. 4 ); however, the reproducible-only goods and services costs had more support for a non-asymptotic linear model ( Fig. 4b ). This asymptotic behaviour is driven principally by North American goods and services costs ( Supplementary Table 1 and Supplementary Fig. 7 ); in contrast, asymptotic behaviour was more prevalent across the compared regions for human health costs ( Supplementary Table 1 and Supplementary Fig. 8 ). For human health costs, dominated by those associated with dengue fever, potential undersampling appears less problematic than for the clearly underestimated costs from reproducible studies of goods and services. This variable asymptotic behaviour means that only some regions and cost types show evidence of decelerating accumulation rates (that is, the costliest insects are assessed initially, with smaller damages estimated thereafter). Figure 4: Global cumulative costs due to invasive insects.
Costs are expressed relative to the number of estimates for goods and services ( a , b ) and human health-related ( c , d ) costs, and for all estimates ( a , c ) and reproducible-only estimates ( b , d ). For a given year t, we summed all values (costs and number of estimates) up to t (see ‘Sampling bias’ in the Supplementary Methods for the model fitting and comparison methods). We fitted linear, exponential, logarithmic and logistic models to each curve to examine evidence for asymptotic behaviour (identified by the dominance of a logarithmic or logistic model). For all categories except reproducible-only goods and services costs ( b ), the logistic model (curvilinear grey dashed lines) had the highest Akaike’s information criterion (AIC) weights (wAIC ≈ relative model probability) and explained >96% of the deviance in the data (%DE ≈ coefficient of determination). For reproducible-only goods and services costs ( b ), the linear model (straight grey dashed line) had the highest wAIC, indicating that the logistic asymptote was likely an underestimate. For each fit, we also show the approximate asymptotic cost and the associated number of cumulative estimates required to achieve the asymptote (red lines). See also Supplementary Figs 7 and 8 for accumulation curves expressed by region. All costs are expressed as 2014 US dollars. Full size image
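The model comparison summarized in this caption can be reproduced along the following lines: fit competing curves to the cumulative cost-versus-estimates series and convert the resulting AIC values into Akaike weights. Only the linear and logistic candidates are shown, and the input series is a synthetic stand-in (the compiled data live in the Supplementary Data), so the mechanics rather than the numbers mirror the study.

```python
# Sketch of the accumulation-curve analysis: compare a linear and a logistic
# model of cumulative cost vs cumulative number of estimates via Akaike weights.
import numpy as np
from scipy.optimize import curve_fit

def linear(n, a, b):
    return a + b * n

def logistic(n, k, r, n0):
    return k / (1.0 + np.exp(-r * (n - n0)))   # k estimates the asymptotic cost

def aic(y, yhat, n_params):
    rss = np.sum((y - yhat) ** 2)              # Gaussian-likelihood AIC
    return len(y) * np.log(rss / len(y)) + 2 * n_params

rng = np.random.default_rng(1)
n_cum = np.arange(1.0, 48.0)                   # e.g., 47 reproducible estimates
cost_cum = logistic(n_cum, 25.2e9, 0.15, 25.0) * (1 + 0.03 * rng.standard_normal(n_cum.size))

aics = {}
for name, f, p0 in [("linear", linear, (0.0, 5e8)),
                    ("logistic", logistic, (3e10, 0.1, 20.0))]:
    popt, _ = curve_fit(f, n_cum, cost_cum, p0=p0, maxfev=20000)
    aics[name] = aic(cost_cum, f(n_cum, *popt), len(popt))

delta = {m: a - min(aics.values()) for m, a in aics.items()}
raw = {m: np.exp(-0.5 * d) for m, d in delta.items()}
total = sum(raw.values())
w_aic = {m: v / total for m, v in raw.items()}  # Akaike weights
print(w_aic)  # a weight near 1 for 'logistic' signals asymptotic behaviour
```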
Discussion The estimated total global costs, even after attempting to correct for sampling bias, are therefore necessarily gross underestimates. We found only 86 (goods and services) and 117 (health) estimates globally, of which we deemed only 55% of the former (n = 47) and 85% of the latter (n = 99) reproducible. Ecosystem-regulating services, which have high economic value worldwide 18 , are notoriously difficult to estimate 19 ; hence, the cost of their erosion arising from invasive insects remains unknown. In fact, we identified only one study 20 that provided reproducible economic costs of the erosion of ecosystem-regulating services (that is, costs not directly associated with goods and services or health, such as the erosion of pollination; Fig. 1 ) because of invasive insects (two Vespula wasps in New Zealand). That study showed that damages arising mainly from reduced pollination are comparable to the direct costs to goods and services (for example, lost apicultural production and control) and are much higher than the associated health costs 20 . While many non-native species are clearly beneficial to human society ( Supplementary Fig. 9 ) by providing food, fibre, ecosystem services and even ecological benefits (habitats and resources for native species 21 ; ex situ conservation 22 ; increased reproductive success of native plants 23 ), the net outcome from non-native insects is strongly negative. This net outcome arises because most invasive non-native insects are not directly consumed or used in any way by humans, and their overall benefits to society remain limited 24 . There are two main phenomena leading to an increased frequency of introductions and potentially expanding distributions of the costliest insect invaders: international trade 25 and global warming 9 . Invasions and subsequent expansions are exacerbated by rising human populations, movement, migration, wealth and international trade 25 , despite a growing number of national and international policies targeting invasive species 26 . Climate change projections to 2050 also predict a net average increase of 18% in the area of occurrence of current arthropod invaders 27 . Given that the available economic estimates are sporadic, spatially incomplete (especially outside Europe and North America) and of variable reproducibility, and are likely to increase as the planet warms and international trade expands, we conclude that the costs of invasive insects to human society are underestimated and will escalate with time. The available data describe only the costliest insects of mainly industrial and/or biosecurity concern, and non-market costs are rarely estimated (but see Supplementary Note 1 ), even though they can at times exceed market costs (for example, for forest pests 28 ). In contrast, summaries of direct costs at the scale of the broader economy might not always adequately capture the true net costs of invasive insects, because some investments can potentially lead to savings arising from mitigation (for example, costs of purchasing pesticides resulting in reduced damage from targeted pests). It is therefore difficult to estimate total costs from different values of direct and indirect categories of invasive insect impacts, and we recommend that cost summaries always be reported by type and target (for example, Supplementary Figs 2 and 3 ). Effective, early response and vigilant biosecurity are often cheaper (by up to 10 times for mosquito-borne disease 29 ) than waiting to pay for accrued damages 4 , 9 , although this might not always be the case when prevention investment occurs long before any impacts are experienced 30 . In the rare cases where those responsible for novel invasions are identified, ‘polluter pays’ legislation has been proposed 31 . However, most costs appear to be borne ultimately by individuals via out-of-pocket expenses 32 , higher consumer prices and taxes to fund management 31 , thus reinforcing the poverty-illness nexus 33 . In addition to improving guidelines for estimating the full costs of invasive insects, vigilant planning, public-awareness campaigns and community participation could potentially relieve society of billions of dollars of annual expense, and reduce the contribution of invasive insects to human suffering. Methods Literature review We began our review of the literature on the economic impacts of invasive insects using the ISI Web of Science database with a specific search string to identify relevant papers (see below). We then used the Web of Science’s ‘refine’ function to restrict the studies identified to the relevant fields, yielding 488 sources from 1911 to January 2014. We analysed each source to reject irrelevant papers and retained those containing economic estimates. We completed our database with 267 opportunistically gathered relevant studies up to December 2015 (including grey literature). In total, we screened 737 sources, 470 of which were relevant to the economic impacts of invasive insects and 158 of which yielded useable economic estimates ( Supplementary Data 1 and 2 ). When economic values were cited from studies not already included in the database, we searched for and gathered the papers, reports or chapters providing the initial estimates. For each value, we extracted the estimation methodology and the spatial and temporal coverage (full databases available in Supplementary Data 1 and 2 ). Owing to the diversity of the methods reviewed, we classified the reproducibility of economic values as ‘reproducible’ or ‘irreproducible’ based on qualitative criteria (see ‘Determining cost estimate reproducibility’ below).
We attributed 'reproducible' to values with demonstrated calculation methodologies, including uncertainties, and with available original references. 'Irreproducible' values were those without calculation methodologies or uncertainty estimates, or with unavailable original references (see 'Determining cost estimate reproducibility' below). We expressed all costs in 2014 US$ 34 . We averaged multiple values (for example, to provide an annual average over a specified period) or uncertainty ranges before conversion to 2014 US$. In many cases we deemed some of the multiple estimates for the same invasive insect species/disease and region redundant (that is, generally older, obsolete, incomplete or irreproducible estimates). If monetary costs were provided as a range, we used the median value for each estimate. Detailed calculations for each estimate are available in Supplementary Data 1 and 2 . For health costs, we limited the criteria for invasive vector-borne diseases and their related vector mosquitoes following Juliano and Lounibos 35 , based on several life-history traits such as desiccation-resistant eggs, development in small human-made containers, occupation of human-dominated habitats, diapause and autogeny. We added chikungunya and Zika, and excluded historical epidemics of yellow fever, dengue and malaria in South America because no estimates of these exist. Most of the economic estimates of invasive mosquito-borne diseases that we obtained concerned dengue, while only a few concerned West Nile, chikungunya and Zika viruses ( Fig. 3c,d ); we therefore considered the costs of these diseases to be under-represented. For this reason, we could not evaluate the many costs of epidemics (Zika, chikungunya, yellow fever and dengue). Nor did we include estimates of the contribution of each disease to disability-adjusted life years ( Fig. 1 ) because these rarely include associated financial components. To estimate annual health costs based on outbreaks of particular diseases covering multiple years, we calculated national outbreak frequencies (annual probabilities) of disease epidemics arising from invasive insects ( Supplementary Data 3 ).

Search criteria for constructing the costs databases

We searched Web of Science in February 2014 and extracted records from 1911 to January 2014. Our search string was composed of three elements: 'invasive' AND 'insects' AND 'economic impacts'. For each element we used a range of synonyms widely found in the literature. For example, for 'invasive' we used invasi*, invader, alien, exotic, non-native, introduced and naturaliz*. For 'insects', we also specified the names of a range of taxa that we identified a priori as having potentially important economic impacts. In addition, the search string included exclusion terms to reject irrelevant studies, for example, those related to medicine. We complemented the search with citations found in Google Scholar and internal government reports.
Full search string: TS=(invasi* OR invader OR alien OR exotic OR non-native OR introduced OR naturaliz*) AND TS=(insect* OR hymenoptera OR ant OR coleoptera OR mosquito* OR lepidoptera OR diptera OR hemiptera OR Anoplophora chinesis OR Anoplophora glabripennis OR Dendroctonus ponderosae OR Diabrotica virgifera OR Harmonia axyridis OR Leptinotarsa decemlineata OR Trogoderma granarium OR Aedes aegypti OR Aedes albopictus OR Anopheles gambiae OR Ceratitis capitata OR Culex pipiens OR Culex quinquefasciatus OR Liriomyza huidobrensis OR Aphis gossypii OR Bemisia tabaci OR Linepithema humile OR Solenopsis invicta OR Vespa velutina OR Wasmania auropunctata OR Cameraria ohridella OR Helicoverpa armigera OR Lymantria dispar OR Plutella xylostella OR Spodoptera littoralis OR Frankliniella occidentalis OR Coptotermes formosanus) AND TS=(economi* OR monetary OR dollar*) NOT TS=(cancer* OR cardio* OR surg* OR carcin* OR engineer* OR operation OR medic* OR rotation OR ovar* OR polynom* OR purif* OR respirat* OR invasive technique).

Removing potential double counts

We made every effort to eliminate redundant amounts from the monetary values we used to estimate cost sums. First, we removed values that were obvious re-estimates of older values (with the more recent estimates tending to be more reproducible than older ones; for example, Supplementary Data 1 , column E). We further separated costs into 'extrapolation' versus 'actual estimate' categories (columns G and H in Supplementary Data 1 , respectively). After further removing those estimates already deemed irreproducible (column F), column I indicates with absolute certainty which estimates should be retained to avoid any potential case of double counting (that is, species with reproducible estimates that do not include both extrapolated and actual estimates). The sum of estimates in column I (US$22,629,029,314) differs from the sum of total costs reported in the main text (US$25,166,603,981) by only 10.1%, which suggests that even in the unlikely case of double counting, the bias is minimal and well within the margin of error expected for a sum of median cost rates across the globe. It is essential to note that even if a species includes both extrapolations and actual values, this does not necessarily equate to double counting, because the different estimates often apply to different regions of the insect's distribution or different economic components of their costs. However, this does not exclude the possibility of double counting within the irreproducible category, simply because we cannot verify how those estimates were derived to check for instances of potential double counting.

Validity of annual cost rate metric

It is possible that the impact rate of any invasive species will vary over time, with rates being initially low following original establishment, then increasing as the species expands its range, and possibly declining as hosts are eliminated or humans adapt to the invasion. Consequently, a simple sum of rates from many species that invaded at different points in time might not provide a practical measure of standardized costs. However, ascertaining the year of invasion of all the species we examined was impossible, or the available dates were suspect, given a lack of monitoring data for many species.
To examine the potential problem indirectly, we plotted the cost rates versus the applicable year (median or publishing year for most goods and services estimates; initial year of the reporting interval for human health estimates) for the goods and services and human health estimates separately. The resulting bivariate plots ( Supplementary Fig. 1 ) do not reveal any relationship with time. We therefore consider the use of cost rates an appropriate metric for standardizing costs across species, regions and time intervals.

Determining cost estimate reproducibility

We determined the reliability of the cost estimates given in each study by identifying the source of all the figures used to extrapolate regional costs. When monetary values were based on available calculation methodologies, traceable original references and clearly identified uncertainties, we deemed the resulting final costs 'reproducible'. This reproducibility is not an assessment of the quality or realism of the estimation; rather, it is a qualitative assessment of whether the initial values, assumptions and methodology applied to obtain the monetary value can be fully understood (and ideally repeated). Conversely, we defined as 'irreproducible' any monetary values that could not be fully traced, clearly understood or justified. Thus, we deemed a monetary value irreproducible when it was not properly referenced, was not traceable, was derived from a potentially subjective source (for example, a personal communication or a web page with no supporting references), did not include the full details of the calculations or did not provide a clear list of the underlying assumptions. We assessed reproducibility for every monetary value we found in the literature; hence, some values might be reproducible and others irreproducible within the same study (for example, ref. 36 ). We could not apply the criteria in the same way to all types of monetary values. For example, assumptions and calculations are necessary when monetary values result from extrapolations (for example, see the calculations in Table 3 of ref. 37 or the values in ref. 38 ), but not when they are reports of raw expenses and costs (for example, the values reported in ref. 39 ). The attribution of reproducibility was therefore a qualitative procedure specific to each monetary value. As a consequence, we supported our choices with narrative details about each value in the database (see, for example, the 'detailed notes' worksheet in Supplementary Data 1 ). The attribution of reproducibility to monetary estimates was clear in most cases. For example, the values provided in refs 28 , 37 , 38 were explained clearly with respect to details, methodologies, assumptions and limits; therefore, we classified them as 'reproducible'. Conversely, values for Ae. albopictus in ref. 6 were classified as irreproducible because they were associated with a reference on Anoplophora glabripennis . Likewise, some values in ref. 7 were either unsourced or associated with personal communications, and were thus deemed irreproducible. However, in some cases, the attribution was less certain. For example, in several cases we were not able to obtain the sources of the estimates, especially for non-English sources; we therefore conservatively attributed irreproducibility to these (for example, the various values in ref. 8 ), although we acknowledge that they might in fact be reproducible.
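Although the attribution was a case-by-case expert judgement rather than a mechanical test, the core decision rule can be made concrete in a short sketch. The field names below are hypothetical simplifications introduced purely for illustration, not the structure of the actual database.

```python
# Hedged formalization of the qualitative reproducibility rule described above.
# Every field name is a hypothetical placeholder; the real attribution also
# relied on narrative notes recorded for each monetary value.
def classify_reproducibility(v: dict) -> str:
    # Untraceable or purely subjective sources fail immediately
    if v.get("personal_communication_only") or not v.get("sources_traceable"):
        return "irreproducible"
    if v.get("is_extrapolation"):
        # Extrapolated values additionally need documented methods and
        # clearly identified uncertainties
        ok = v.get("methodology_documented") and v.get("uncertainties_reported")
        return "reproducible" if ok else "irreproducible"
    # Raw expense reports: accepted when provided by an official institution
    return "reproducible" if v.get("official_institution") else "irreproducible"

print(classify_reproducibility({
    "sources_traceable": True,
    "is_extrapolation": True,
    "methodology_documented": True,
    "uncertainties_reported": True,
}))  # -> "reproducible"
```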
In the case of raw reports of expenses and costs, we generally classified values provided by official institutions as reproducible (for example, those in ref. 4 ), and values from uncertain sources, such as personal communications giving no more detail than a name (for example, those in ref. 8 ) or conference presentations (for example, those in ref. 9 ), as irreproducible.

Data availability

The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information files.

Additional information

How to cite this article: Bradshaw, C. J. A. et al. Massive yet grossly underestimated global costs of invasive insects. Nat. Commun. 7, 12986 doi: 10.1038/ncomms12986 (2016).
Invasive insects cause at least 69 billion euros of damage per annum worldwide. That is the estimate produced by an international research team led by Franck Courchamp, CNRS research director at the Laboratoire Ecologie, Systématique et Evolution (Université Paris-Sud/CNRS/AgroParisTech), notably including entomologists from IRD Montpellier and a CNRS economist. Their study brought together the largest database ever developed on economic damage attributable to invasive insects worldwide. Covering damage to goods and services, health care costs and agricultural losses, the study, conducted with the support of ANR and the BNP Paribas Foundation, considered 737 articles, books and reports. The work was published in Nature Communications on 4 October 2016.

Why study insects?

For thousands of years, insects have been responsible for spreading diseases among humans and livestock, and they cause considerable damage on many levels: from attacks on crops and stocks, through the destruction of infrastructure, to the devastation of forests, altering and weakening ecosystems. Among living organisms, insects alone (about 2.5 million species) are probably the group responsible for the greatest costs. In addition, they are among the most aggressive invasive species: 87% of the 2,500 terrestrial invertebrates that have colonized new territories are insects.

Underestimated damage

The scientists estimated the minimum economic damage caused by invasive insects at 69 billion euros per year. Of the insects studied, the Formosan termite (Coptotermes formosanus) is one of the most destructive, causing over 26.7 billion euros of damage per year worldwide. However, according to the research group, this estimate is based on a study that was insufficiently documented. More soundly based studies (considered reproducible by the scientists) also rank the cabbage moth (Plutella xylostella) highly, at a cost of 4.1 billion euros per year, as well as the brown spruce longhorn beetle (Tetropium fuscum), which costs 4 billion euros in Canada alone.

Cabbage moth (Plutella xylostella). Credit: Mike Pennington

Furthermore, according to this study, North America suffers the largest financial losses, at 24.5 billion euros a year, while Europe is currently at only 3.2 billion euros per year. This difference, however, is explained more by a lack of evaluation sources than by a difference in exposure to these dangers. Thus, according to the researchers, the total annual cost estimate of 69 billion euros is itself a large underestimate. Many parts of the world do not offer enough economic data to produce an accurate estimate, so the global figure is necessarily a minimum. In addition, the research team focused on the ten most costly invasive species, not counting the very large number that cause less damage. Finally, considering the estimated values of ecosystem services on a global scale (hundreds of billions of dollars for crop pollination alone), the disruption caused by invasive insects could reach a level far beyond the current estimate.

Health and agriculture are the most affected

Insects take a heavy toll on agriculture, consuming 40% of harvests (enough to feed one billion people). As for health, the total cost attributable to invasive insects exceeds 6.1 billion euros per year (without counting malaria, Zika virus, or economic impacts on tourism or productivity, etc.).
From a geographic point of view, the regions of the world where medical expenses related to invasive insects are greatest are Asia (2.55 billion euros per year), North America (1.85 billion euros per year) and Central and South America as a whole (1.66 billion euros per year). Among the diseases with the greatest economic impact, dengue fever comes first, accounting for 84% of the 6.1 billion euros. According to the authors, greater vigilance and the development of procedures to respond to biological invasions would save society tens of billions of euros. Such preventive measures could reduce the cost of mosquito-borne diseases at least tenfold.
10.1038/ncomms12986
Chemistry
Lead halide perovskites: A horse of a different color
Alexander Kiligaridis et al. Are Shockley-Read-Hall and ABC models valid for lead halide perovskites?, Nature Communications (2021). DOI: 10.1038/s41467-021-23275-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-23275-w
https://phys.org/news/2021-06-halide-perovskites-horse.html
Abstract

Metal halide perovskites are an important class of emerging semiconductors. Their charge carrier dynamics is poorly understood due to limited knowledge of defect physics and charge carrier recombination mechanisms. Nevertheless, the classical ABC and Shockley-Read-Hall (SRH) models are ubiquitously applied to perovskites without consideration of their validity. Herein, an advanced technique mapping photoluminescence quantum yield (PLQY) as a function of both the excitation pulse energy and repetition frequency is developed and employed to examine the validity of these models. While ABC and SRH fail to explain the charge dynamics over a broad range of conditions, the addition of Auger recombination and trapping to the SRH model enables quantitative fitting of PLQY maps and low-power PL decay kinetics, and the extraction of trap concentrations and efficacies. However, PL kinetics at high power are too fast and cannot be explained. The proposed PLQY mapping technique is ideal for comprehensive testing of theories and is applicable to any semiconductor.

Introduction

Semiconducting materials often exhibit complex charge dynamics that depend strongly on the concentration of charge carriers due to the co-existence of both linear and non-linear charge recombination mechanisms 1 , 2 . The emergence of novel semiconductors like metal halide perovskites (MHPs), exhibiting intriguing and often unexpected electronic properties 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , triggered a renewed interest in revisiting the classical textbook theories of charge recombination and in developing more complete, accurate models 11 , 12 , 13 , 14 , 15 , 16 , 17 . Moreover, modern technical advances in experimental and computational capabilities 4 , 18 , 19 , 20 , 21 allow for a detailed quantitative comparison between experiment and theory, far beyond what was once possible. MHPs are a novel solution-processable material class with enormous promise for application in a broad range of optoelectronic devices 22 , 23 , 24 . Driven in particular by their remarkable performance in photovoltaics, with power conversion efficiencies surpassing 25% demonstrated to date 25 , significant research efforts have been devoted to studying the fundamental electronic properties of these materials 4 , 5 , 7 , 13 , 15 , 18 , 26 , 27 , 28 , 29 , 30 . It has been established that many MHP compositions, with the most notable example being methylammonium lead triiodide (MA = CH3NH3+; also referred to as MAPbI3 or MAPI), can be considered classical crystalline semiconductors at room temperature, in which photoexcitation leads to the formation of charge carriers that exist independently from each other due to the low exciton binding energy 30 . Consequently, conventional models are ubiquitously used to describe the dynamics of charge carriers in MHPs 11 , 13 , 14 , 15 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 . Historically, the first model describing the kinetics of charge carrier concentrations in a semiconductor was proposed by Shockley and Read 40 and independently by Hall 41 , and is known as the Shockley-Read-Hall (SRH) model. This model considers only the first-order process (trapping of electrons or holes) and the second-order kinetic processes (radiative electron-hole recombination and non-radiative (NR) recombination of trapped electrons with free holes). It is noteworthy that the SRH model allows the concentrations of free charge carriers to differ due to the presence of trapping.
In an intrinsic semiconductor, trapping of, for example, electrons generated by photoexcitation creates an excess of free holes in the valence band. This effect is often referred to as photodoping, in analogy with chemical doping, with the important difference, however, that the material becomes doped only under light irradiation and the degree of doping depends on the irradiation intensity. Third-order processes, such as NR Auger recombination, via which two charge carriers recombine in the presence of a third charge that takes up the released energy, have been recognized as particularly important in the high charge carrier concentration regime. To account for this process, Shen et al., instead of adding an Auger recombination term to the SRH model, proposed a simplified ABC model named after the coefficients A, B and C for the first-order (monomolecular), second-order (bimolecular) and third-order (Auger) recombination, respectively 42 . These coefficients are also sometimes referred to as k1, k2 and k3. Importantly, the concentrations of free electrons and holes in the ABC model are assumed to be equal, thus neglecting the possible influence of chemical doping and photodoping effects. The ABC model is widely applied to a broad range of semiconductors and, in particular, is commonly used to rationalize the properties and efficiency limits of LEDs 42 , 43 . The simplicity of the ABC model has also made it extremely popular for MHPs (see e.g. ref. 15 and references therein), with fewer reports employing SRH or its modifications 13 , 14 , 16 , 17 , 31 , 36 , 39 , 44 , 45 . The ABC and SRH kinetic models are typically employed to describe experimentally acquired data such as the excitation power density dependence of the photoluminescence (PL) quantum yield (PLQY) measured upon continuous wave (CW) or pulsed excitation, time-resolved PL decay kinetics, and the kinetics of transient absorption signals. These models are applied to semi-quantitatively explain the experimental results and extract different rate constants 13 , 14 , 15 , 20 , 31 , 32 , 33 , 34 , 46 , 47 , often without necessarily considering the models' limitations. Despite the very large number of published studies describing electronic processes in MHPs using the terminology of classical semiconductor physics, to the best of our knowledge there have been only very few attempts to fit both PL decay and the PLQY dependence on excitation power using ABC/SRH-based models, or at least to compare the experimental data with theory 14 , 16 , 17 , 31 , 39 . These attempts, however, were of limited success because large discrepancies between the experimental results and the theoretical fits were often tolerated. These observations raise fundamental questions concerning the general validity of the SRH and ABC models for MHPs and the existence of a straightforward experimental method to evaluate this validity. To address these concerns, it is necessary to characterize experimentally the PLQY and PL decay dynamics not only across a large range of excitation power densities, but also, simultaneously, over a large range of laser pulse repetition rates. We note that PL is sensitive not only to the concentrations of free charge carriers but also, indirectly, to the concentration of trapped charge carriers, as the latter influence the former via charge neutrality.
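To make the ABC picture concrete, the following minimal sketch integrates the single rate equation dn/dt = -A·n - B·n^2 - C·n^3 after one excitation pulse; the rate constants and initial density are illustrative placeholders, not values fitted in this work, and the PLQY line assumes a purely radiative B for simplicity (the text notes that in general B can contain both radiative and NR contributions).

```python
# Minimal sketch of the ABC model: single-pulse decay of the carrier
# density n (with n = p assumed) and the resulting PLQY estimate.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = 1e6, 5e-11, 1e-28   # s^-1, cm^3 s^-1, cm^6 s^-1 (assumed)
n0 = 1e15                      # initial carrier density, cm^-3 (assumed)

def abc_rhs(t, y):
    n = y[0]
    return [-A * n - B * n**2 - C * n**3]

t = np.logspace(-9, -5, 400)   # 1 ns to 10 us
sol = solve_ivp(abc_rhs, (t[0], t[-1]), [n0], t_eval=t,
                method="LSODA", rtol=1e-8)
n = sol.y[0]

# PLQY = emitted / absorbed photons; here B is treated as purely radiative,
# which is a simplifying assumption for this sketch.
plqy = np.trapz(B * n**2, t) / n0
print(f"single-pulse PLQY ~ {plqy:.3f}")
```

Because the decay is strongly non-exponential, the residual population after a fixed inter-pulse lag depends on the initial density, which is exactly why the single-pulse versus quasi-CW distinction discussed below cannot be settled by a single repetition rate.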
Trapped carriers may also lead to other non-linear processes, for example between free and trapped charge carriers (Auger trapping 2 ), which should also be considered but are excluded from both the ABC and classical SRH models. To expose and probe these processes, it is crucial to scan the laser pulse repetition rate in the PLQY measurements; to the best of our knowledge, such measurements have not been reported to date. In this work, we developed a new experimental methodology that maps the external PLQY in two-dimensional space as a function of both the excitation pulse fluence ( P , in photons/cm^2) and excitation pulse frequency ( f , in Hz). Because the excitation pulse frequency is scanned over a very broad range, this novel technique allows the excitation regime of the sample (single pulse vs. quasi-CW) to be determined unambiguously, which is critically important for data interpretation and modelling. Obtaining a two-dimensional PLQY( f , P ) map complemented with PL decays provides a clear and unambiguous criterion to test kinetic models: a model is valid if the entire multi-parameter data set can be fitted with fixed model parameters. By applying this method to a series of high-quality MAPbI3 thin film samples, which when integrated in photovoltaic devices reach power conversion efficiencies of >20% (Supplementary Note 5 ), we demonstrate that despite MAPbI3 having been extensively studied in numerous publications before, neither the ABC nor the classical SRH model can fit the acquired PLQY maps across the entire excitation parameter space. To tackle this issue, we develop an enhanced SRH model (in the following, the SRH+ model) that accounts for Auger recombination and Auger trapping processes, and demonstrate that SRH+ is able to describe and quantitatively fit the PLQY( f , P ) maps over the entire range of excitation conditions with excellent accuracy. PL decays can also be fitted, albeit with more moderate accuracy. The application of the SRH+ model allowed us to extract the concentration of dominant traps in high electronic quality MAPbI3 films, which is of the order of 10^15 cm^-3, and to demonstrate that surface treatments can create a different type of trapping state of much higher concentration. Beyond the quantitative success of the extended SRH+ model, we reveal that even this model is not capable of describing the PL decay at high charge carrier concentrations. This means that there must be further non-linear mechanisms that influence charge dynamics at high charge carrier concentrations in MAPbI3. Therefore, further theoretical work is necessary to identify the additional physical process or processes that must be considered in order to completely elucidate the charge dynamics in MAPbI3.

Results

PLQY( f , P ) mapping and elucidation of the excitation regime

A PLQY( f , P ) map is acquired by measuring the intensity of PL as a function of pulse repetition rate ( f , s^-1) for a series of fixed pulse fluences ( P , photons/cm^2). The PL intensity is then plotted as a function of the time-averaged excitation power density W = f·P·hν (W/cm^2), where hν is the excitation photon energy; see further details in Supplementary Notes 1 – 3 . Figure 1b presents the PLQY( f , P ) map for a bare MAPbI3 film, while Fig. 1a presents the same data in the traditional way, as a series of PLQY( W ) dependencies for different frequencies.
We use 19 frequencies ranging from 100 Hz to 80 MHz, corresponding to a lag between pulses varying from 10 ms to 12.5 ns. In our experiments, after the frequency is scanned for a certain value of P , the fluence is changed to the next value and the scanning procedure is repeated. The pulse fluence ranges over four orders of magnitude (P1 = 4.1 × 10^8, P2 = 4.9 × 10^9, P3 = 5.1 × 10^10, P4 = 5.5 × 10^11 and P5 = 4.9 × 10^12 photons/cm^2). Such fluences, in the single-pulse excitation regime (see below), correspond to charge carrier densities of 1.04 × 10^13, 1.24 × 10^14, 1.3 × 10^15, 1.37 × 10^16 and 1.24 × 10^17 cm^-3, respectively. For clarity, in Fig. 1 and in all the following figures in the manuscript, the data points measured at the same pulse fluence P are shown in the same colour: P1 (violet), P2 (blue), P3 (green), P4 (orange), P5 (red). The family of lines connecting points with P = constant and f scanned together form a pattern resembling 'a horse neck with mane', as illustrated in Fig. 1b .

Fig. 1: PLQY( f , P ) map and illustration of the difference between scanning the pulse repetition rate ( f ) and scanning the pulse fluence ( P ). a PLQY( W ) dependence plotted in the traditional way ( P -scanning) for 19 different pulse repetition rates. The data points measured at the same frequency are connected by lines; the sample is a MAPbI3 film grown on glass (G/MAPI). The apparent slope of these dependencies ( m , PLQY ~ W^m) depends on the range of W and the value of f and can be anything from 1 to 0.77 for this particular sample. b The same data plotted in the form of a PLQY( f , P ) map where data points measured at the same pulse energy (P1, P2, ..., P5) are connected by lines ( f -scanning). Data points measured at 50 Hz are connected by a dash-dotted line. c The excitation scheme. Illustrations of PL decays in the single pulse ( d ) and quasi-CW ( e , f ) excitation regimes. Here e- trapping is assumed, leading to h+ photodoping.

The acquisition of a PLQY( f , P ) map is fully automated (Supplementary Note 2 ) and includes precautionary measures that minimize the exposure of the sample to light while controlling for photo-brightening or photo-darkening of the samples (Supplementary Note 4 ). Such measures ensure that PLQY maps are fully reproducible when re-measured on the same spot (see Supplementary Fig. 4.2 ). These precautions were absolutely necessary for obtaining a consistent data set, because light-assisted transformation of defect states due to ion migration may significantly influence the photophysics of perovskite materials 10 , 48 , 49 , making any theoretical analysis unfeasible. We also note that the high degree of uniformity of our samples leads to very similar PLQY maps being measured on different areas of the sample (see Supplementary Notes 2 and 4 ). The PLQY( f , P ) map for the bare MAPbI3 sample (G/MAPI) is presented in Fig. 1 . A traditional representation of these data is shown in Fig. 1a , which displays a series of PLQY( W ) curves, one for each of the 19 different repetition rates used in our experiment. Overall, such a representation shows only minor differences between the curves in terms of their slopes and curvatures, apart from a noticeable horizontal shift at sufficiently low frequencies. By approximating the PLQY to vary as W^m over a limited power interval, we observe that the slope m varies from 0.77 at high repetition rates to approximately 1 at low repetition rates for this sample.
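As an illustration of how the apparent exponent m is obtained, the short sketch below fits PLQY ∝ W^m on a double-logarithmic scale; the numbers are synthetic and invented for illustration, and the fluence-to-density conversion in the final comment assumes uniform absorption over an effective depth, which is our simplifying assumption.

```python
# Sketch: estimating the apparent exponent m in PLQY ~ W**m from a
# double-logarithmic fit over a limited power window (synthetic data).
import numpy as np

W = np.logspace(-3, 1, 25)         # time-averaged power density, W/cm^2
plqy = 0.02 * W**0.77              # synthetic quasi-CW behaviour with m = 0.77

m, logc = np.polyfit(np.log10(W), np.log10(plqy), 1)
print(f"fitted slope m = {m:.2f}")  # recovers ~0.77

# A pulse fluence can likewise be expressed as an initial carrier density,
# e.g. n0 ~ P / d_eff assuming uniform absorption over an effective depth:
# P = 5.1e10 photons/cm^2 over ~390 nm gives ~1.3e15 cm^-3, consistent
# with the P3 value quoted above.
```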
Traditional representations of the PLQY( W ) plots for the other samples investigated in our study are shown in Supplementary Note 8 , where, for example, a PMMA-coated MAPbI3 sample shows slopes m ranging from 0.5 to 1 (see Supplementary Fig. 8.1c). Such a traditional representation of the PLQY( f , P ) map does not offer a clear interpretation of the data, making it difficult to elucidate the charge carrier dynamics. An alternative representation of the PLQY( f , P ) map is shown in Fig. 1b , in which the data points for each laser fluence P are presented as a single curve. Interestingly, the data points for each value of P follow a characteristic line with a specific shape. At low frequencies, and especially at high fluences, the curves are rather horizontal, yet once the frequency f exceeds a certain value, all data points start to follow a common dependency, at which the PLQY depends solely on the averaged power density W = f·P·hν. The frequency at which this happens depends on P : for example, the data obtained at pulse fluence P5 join the common curve at ca. 500 kHz, while the data collected at P1 join below 50 kHz (see Fig. 1b ). Critically, such a presentation of the PLQY( f , P ) map allows us to immediately distinguish between two principally different excitation regimes for a semiconductor:

1. Single-pulse regime: the repetition rate of the laser is so low that the PLQY values and PL decays do not depend on the lag between consecutive laser pulses. In other words, the excited state population created by one pulse has had enough time to decay to such a low level that it does not influence the decay of the population generated by the next pulse (Fig. 1d ). In this case, PLQY does not depend on the lag between pulses (i.e. the pulse frequency). This regime is observed when PLQY follows the horizontal lines upon frequency scanning (highlighted in green in Fig. 1b ).

2. Quasi-continuous-wave (quasi-CW) regime: the decay of the population generated by one pulse depends on the history of excitation by previous pulses. This happens when some essential excited species have not decayed completely during the lag time between the laser pulses (Fig. 1e, f ). In this regime, the data points follow the same trend and fall on the line highlighted in yellow in Fig. 1b .

The transition between the two regimes occurs when the data points at fixed values of P start to merge with each other upon increasing f . Examining the vast MHP literature reveals that, to the best of our knowledge, no study has utilized such a broad range of pulse repetition rates f when measuring PLQY( W ). Without scanning f over a significant range of values, a distinction between the single-pulse and quasi-CW regimes is not possible, and this reflects the current situation in the literature, where the standard scanning over P is implemented with, at best, a few different repetition rates of a pulsed laser, sometimes complemented by excitation with a CW laser 16 , 17 , 39 . For example, Trimpl et al. 39 studied FA0.95Cs0.05PbI3 with a focus on temperature-dependent PL decay kinetics measured at three repetition rates (61.5, 250 and 1000 kHz) and PLQY at one repetition rate and three pulse fluences approximately corresponding to P2, P3 and P4 in our experiments. A qualitative similarity between the PLQY predicted from PL decay kinetics and the experimental data was obtained, and the temperature dependence of the model parameters was extracted.
The condition for charge accumulation (photodoping) in that work was addressed solely using PL decays, in which an initial fast drop on the ns timescale, clearly visible at low temperature, was assigned to trapping 39 . In another example, Kudriashova et al. studied PL decay over a quite broad pulse repetition rate range (10 kHz–10 MHz) in order to distinguish between surface and bulk charge recombination in MAPbI3 films with charge transport layers; however, PLQY was not measured in that study 34 . In general, these studies addressed the important question of the excitation regime within the limits of their experimental approaches; however, the only robust way to clarify the excitation condition for a given pulse fluence is to explicitly scan f while detecting PLQY. One may assume that choosing a low repetition rate guarantees that the excitation is in the single-pulse regime. However, this is not true. As Fig. 1a and b show, if the system is in the single-pulse regime at a high pulse fluence and a given repetition rate, there is always a pulse fluence low enough that the excitation regime becomes quasi-CW. The cause of this effect is the non-exponential decay of the excited state population, as will be discussed later. Thus, at a very low repetition rate, the excitation may still be in a quasi-CW regime so long as the pulse fluence is low enough. Without scanning the pulse frequency, this cannot be disentangled. To illustrate this, the data points measured at 50 Hz are connected by a dash-dotted line in the PLQY( f , P ) map in Fig. 1b . For pulse fluence P1 (i.e. the lowest fluence), the excitation regime is quasi-CW. Increasing the pulse fluence by an order of magnitude (P2) brings the system close to the single-pulse regime, and a further increase of the pulse fluence (P3, P4 and P5) makes the excitation fall purely in the single-pulse regime. We highlight the existence of a rather extended intermediate region in which the regime is neither single pulse nor quasi-CW. For example, for pulse fluence P2 (charge carrier concentration ≈10^14 cm^-3), this intermediate region starts at 50 kHz and continues down to at least 2 kHz. We underscore that in order to identify the excitation regime without any additional assumptions, one must scan the pulse frequency and measure PLQY. As a result, the PLQY( f , P ) mapping technique described here allows for an unambiguous and straightforward discrimination between the single-pulse and quasi-CW excitation regimes.

PLQY( f , P ) maps and PL decays( f , P ) of polycrystalline MAPbI3

Figure 2 compares the PLQY( f , P ) maps measured for MAPbI3 films prepared with four different combinations of interfaces (Fig. 2e and Supplementary Note 5 ): MAPbI3 deposited on glass (G/MAPI), MAPbI3 deposited on PMMA-coated glass (G/P/MAPI), MAPbI3 deposited on glass and coated with PMMA (G/MAPI/P), and MAPbI3 deposited on PMMA/glass and then coated with PMMA (G/P/MAPI/P). All samples exhibit the same PL and absorption spectra (Supplementary Note 6 ). Scanning electron microscopy (SEM) images show that all samples exhibit a very similar microstructure, which is not affected by the presence of the PMMA layers (Supplementary Note 6 ). Despite all these similarities, the PLQY( f , P ) maps are clearly different. To emphasize the differences, we added three horizontal lines that mark the PLQY in the single-pulse regime at pulse fluences P3, P4 and P5 for the G/MAPI sample in Fig. 1a .
Black arrows highlight the reduction in PLQY in the single-pulse regime when compared with the G/MAPI sample.

Fig. 2: PLQY maps of the samples under study plotted on the same scale for comparison. a Glass/MAPI, b glass/PMMA/MAPI, c glass/MAPI/PMMA, and d glass/PMMA/MAPI/PMMA. The horizontal grey lines show the values of PLQY (0.4, 0.2 and 0.02) in the single-pulse regime for the glass/MAPI sample ( a ) to set the benchmarks. Deviations from these values for the other samples are shown by black arrows. The tilted grey line is the W^0.5 dependence predicted by the SRH model; it is shown to make the sample-to-sample differences in the quasi-CW regime easier to see. The pulse fluence (P1–P5) is indicated by the same colour code (shown in a ) for all PLQY maps. Panel e shows the structure of the samples and the geometry of the measurements.

The decrease of PLQY upon the addition of PMMA differs for different values of P . Moreover, when comparing the slope m of the quasi-CW region in (a) and (b) with that in (c) and (d), it is evident that the slope is strongly influenced by the exact sample stack. To visualize this difference, a line with slope m = 0.5 (i.e. PLQY ~ W^0.5) is shown in each plot. The PLQY( f , P ) map is most affected when the MAPbI3 film is coated with PMMA, while the presence of PMMA at the interface with the glass substrate has only a minor effect. Similar to the PLQY maps, the PL decay kinetics also depend on the pulse fluence and excitation regime (single pulse vs. quasi-CW). Such kinetics should be considered together with the PLQY( f , P ) map to complete the physical picture of charge recombination. Figure 3 shows PL decays measured at f = 100 kHz and pulse fluences P2 (low) and P5 (high). MAPbI3 films deposited on glass (G/MAPI) exhibit the slowest PL decay kinetics of all samples at both low and high pulse fluences. The addition of PMMA to the sample stack accelerates the PL decay, with the shortest decays observed for the G/P/MAPI/P samples.

Fig. 3: PL decays of all samples at 100 kHz repetition rate (10 µs between laser pulses). a Low pulse fluence (P2). b High pulse fluence (P5). Note that all decays in a are in the quasi-CW excitation regime, while all decays in b are in the single-pulse excitation regime. Adding PMMA accelerates the PL decay.

The observation that modification of the sample interfaces by PMMA results in a faster PL decay not only at the low but also at the high (P5) pulse fluence is particularly interesting. While the influence of surface modification on NR recombination at low charge carrier concentrations is expected due to changes in trapping, the same is not expected at high pulse fluences. It is generally considered that at such fluences the decays are determined solely by non-linear processes such as Auger recombination and are thus not influenced by surface treatments. However, the change in decay dynamics in PMMA-interfaced MAPbI3 serves as the first indication that additional non-linear processes involving trap states must be at play. The second interesting observation is that, according to the PLQY( f , P ) map, the repetition frequency of 100 kHz used for the PL decay measurements falls in the quasi-CW excitation regime for the low pulse fluence P2, but in the single-pulse excitation regime for the high pulse fluence P5. It is remarkable, however, that the PL intensity in the quasi-CW regime (Fig. 3a)
decays by almost two orders of magnitude before the next laser pulse for MAPbI3 without PMMA, and by four orders of magnitude for the sample coated with PMMA. This is an excellent example of the inability to correctly assign excitation at the P2 fluence to the quasi-CW regime without the knowledge gained from the PLQY( f , P ) map, considering that the population observed in the PL kinetics decays completely prior to the arrival of the next pulse. The cause of the quasi-CW regime in this case is the presence of a population of trapped carriers that lives much longer than 10 µs and influences the dynamics via photodoping 9 , 13 , 14 . This example illustrates the 'hidden quasi-CW regime' shown schematically in Fig. 1e (see also Supplementary Note 7 ). These effects will be quantitatively explained by the theory detailed in the next section.

Theory and modelling

Kinetic models: from ABC and SRH to SRH+

Figure 4a schematically illustrates the key processes included in the ABC, SRH and extended SRH (SRH+) kinetic models. The SRH+ model contains terms for radiative (second-order, k_r·n·p) and NR (all other terms) recombination of charge carriers. Note that the processes included in the SRH+ model also naturally include photon re-absorption and recycling, as discussed in detail in Supplementary Notes 9.1 and 10 , leading to an effective renormalization of the k_r and k_E rate constants. NR recombination occurs via a trap state or through Auger recombination. The trapping process can be linear or quadratic (Auger trapping). We note that we consider only one type of band-gap state. It is assumed that these states lie above the Fermi level (electron traps) but are deep enough to make thermally activated de-trapping negligible. Similarly, one could instead consider hole traps under the same conditions; the equations are symmetric in this regard. Auger trapping refers to the process by which the trapping of a photoexcited electron provides excess energy to an adjacent photoexcited hole 2 . The possible importance of this process in perovskites has been suggested in a few publications 46 , 50 . The complete set of equations and an additional description are provided in Supplementary Note 9 . We note that in the SRH and SRH+ models the complete set of equations for free and trapped charge carriers is solved, contrary to studies where equations for only one of the charge carriers (e.g. electrons) are used (see ref. 44 ). The latter simplification can work only if the concentration of holes is very large and constant (for example, in the case of chemical doping), which is not applicable to intrinsic MAPbI3 and other perovskites; see also below. Due to the inclusion of Auger trapping in the SRH+ model, setting the parameter k_n to infinity reduces it to the ABC model (see Supplementary Note 9.6 ), where the coefficient B contains both radiative and NR contributions. Finally, the SRH+ model reduces to the SRH model by ignoring all Auger processes.

Fig. 4: CW regimes of the ABC, SRH and SRH+ models and their comparison with the experiment. a The energy level scheme, the processes and the parameters of all models (see the text and Supplementary Note 9 for details). b The experimental dependence (G/MAPI and G/P/MAPI/P samples) of PLQY on the excitation power density W in the quasi-CW excitation regime; m is the exponent in the dependence W^m. c PLQY( W ) in the CW regime for different models and trap filling conditions.
"-A": adding Auger recombination; "-ATr": adding Auger trapping (Supplementary Note 11 ). d Evolution of the PLQY( W ) upon the transformation of the SRH model with Auger recombination into the ABC model with increasing parameter k_n (see Supplementary Note 11 for the model parameters).

In the considered models, the origin of the difference between the concentrations of free electrons and holes is the trapping of one of the charge carriers, i.e. photodoping. We do not assume any unintentional chemical doping 45 , and this assumption is supported by solid experimental evidence. Indeed, in the case of chemical doping and the presence of electron traps, the PLQY( W ) in the quasi-CW regime should change from its square-root dependence on W to either a linear dependence (n-doping) or independence of W (p-doping) upon further decrease of W (see Supplementary Note 8 ). Note also that the situation is symmetric with respect to the type of traps in the material. This behaviour, however, was never observed in our samples, where PLQY ∝ W^m at low excitation power with the slope m being either 0.5 or 0.77, depending on the sample, without changing upon decreasing W (Figs. 1 and 2 ). This means that even if there was unintentional doping in our samples, its level was so low that we do not observe any of its effects in the PLQY( f , P ) maps (Supplementary Note 9.7 ). Photon reabsorption and recycling are considered to be important processes influencing the charge dynamics in MHPs 11 , 46 . In our experimental study, we compare samples of very similar geometries and microstructure, ensuring that the effects of photon reabsorption/recycling remain similar, such that they cannot be the reason for the differences between the PLQY( f , P ) maps and PL decay kinetics of the different samples. As we discuss in detail in Supplementary Note 10 , effects on the charge dynamics related to photon recycling in broad terms (both far-field (photon reabsorption in the perovskite) and near-field (energy transfer) effects) are included in our models via the renormalized radiative rate k_r and the Auger trapping coefficient k_E, respectively. We also do not explicitly include charge diffusion in the model. The rationale is that charge carrier diffusion in MAPbI3 is so fast that an equilibrated, homogeneous distribution of charge carriers over the thickness of the film can be assumed on timescales of 10 ns and longer (Supplementary Note 9.1 ).

Applying the ABC, SRH and SRH+ models to the quasi-CW excitation regime

We first consider the CW excitation regime at low power densities. In this regime, the SRH and SRH+ models are identical since the contribution of Auger processes is largely negligible. Figure 4b shows the experimental dependencies of PLQY on the power density ( W ) for the G/MAPI and G/P/MAPI/P samples, and Fig. 4c and d show the dependence calculated on the basis of the three different models. At low power densities, the concentration n is small and the PLQY is low. In the ABC model, the main contribution to the recombination rate comes from the first-order term, which is equal to the photogeneration rate. Thus, A·n ∝ W and consequently n ∝ W.
Therefore, we can write:

$$\mathrm{PLQY}=\frac{\text{flux of emitted photons}}{\text{flux of absorbed photons}}=\frac{k_{\mathrm{r}}n^{2}}{Bn^{2}+An}\approx\frac{k_{\mathrm{r}}n^{2}}{An}=\frac{k_{\mathrm{r}}n}{A}\propto W$$

In the SRH model, at very low excitation power density the fastest process is the trapping of electrons. With most of the electrons trapped and n_t ≈ p, the trapping rate is equal to the photogeneration rate, k_t·n·N ∝ W, and the remaining electron density n ∝ W. The limiting step in the charge carrier kinetics is the NR recombination of the trapped electrons with free holes. The rate of this process is equal to the generation rate, therefore k_n·n_t·p = k_n·p² ∝ W, and p ∝ √W. Thus

$$\mathrm{PLQY}=\frac{k_{\mathrm{r}}np}{k_{\mathrm{r}}np+k_{\mathrm{n}}n_{\mathrm{t}}p}\approx\frac{k_{\mathrm{r}}np}{k_{\mathrm{n}}n_{\mathrm{t}}p}=\frac{k_{\mathrm{r}}n}{k_{\mathrm{n}}n_{\mathrm{t}}}\approx\frac{k_{\mathrm{r}}n}{k_{\mathrm{n}}p}\propto\sqrt{W}$$

We refer the reader to Supplementary Note 9.6 for the detailed derivation of these equations and their applicability conditions. To summarize, at low power densities, when PLQY ≪ 1, PLQY( W ) is a straight line on the double-logarithmic scale (PLQY ∝ W^m) with slope m = 0.5 for the SRH and SRH+ models with no trap filling effect (see below) and m = 1 for the ABC model 1 . Experimentally, we observe m ≈ 0.45 for those perovskite samples that are coated with PMMA (e.g. G/MAPI/P, shown in Fig. 4b ). This value is in good agreement with the m = 0.5 predicted by the SRH/SRH+ models in the absence of trap filling. However, the other two samples, in which the MAPbI3 surface is not coated with PMMA, exhibit m ≈ 0.77 (e.g. the G/MAPI sample shown in Fig. 4b ), which lies between the values of 1 and 0.5 predicted by the ABC and SRH/SRH+ models, respectively. These slopes are observed over at least four orders of magnitude in excitation power density. Based on these results, we must conclude that MAPbI3 samples with and without PMMA coating behave very differently in the quasi-CW regime. In the framework of the SRH/SRH+ models, there are two possibilities that would lead to an increase in the coefficient m : (i) a transformation toward the ABC model and (ii) a trap filling effect in the SRH model. Figure 4d shows the transformation of the SRH model including Auger recombination into the ABC model upon increasing the parameter k_n. Under the condition k_n ≫ k_r, k_t, one can obtain an intermediate slope m lying between 0.5 and 1 over a limited range of W (Supplementary Notes 9.6 and 11 ). The second possibility is to allow the trap filling effect to occur at excitation power densities below the saturation of the PLQY due to radiative recombination and Auger processes. The trap filling effect arises when the number of available traps starts to decrease with increasing W .
Consequently, the PLQY increases not only because the radiative process becomes faster (quadratic term), but also because the NR recombination (trapping and subsequent recombination) becomes smaller. As a result, the PLQY grows faster than W^0.5 over a certain range of W . The effect is not trivial: it is not the concentration of traps N , as one might think, but rather the relation of k_t to k_r and k_n (the necessary condition is k_t ≫ k_r, k_n) that determines whether the trap-filling effect is observed in PLQY maps ( Supplementary Notes 9.6 , 9.8 and 11 ). The trap filling effect is illustrated in Fig. 4c , in which the parameter k_t is increased whilst all other parameters are kept fixed. Obviously, the resulting dependence is too strong and occurs over too narrow a range of excitation power densities (one order of magnitude) to fit the experimental data directly. Nevertheless, as will be shown below, such processes are present in MAPbI3 samples that are not coated with PMMA, where PLQY( W ) in the quasi-CW regime deviates from a straight line, bending upwards before reaching saturation at high power. At high excitation densities, non-linear recombination processes become particularly important. Since Auger processes are NR, with further increase of W the PLQY cannot reach unity and instead decreases after reaching a certain maximum. The SRH model cannot account for this effect, considering that it does not include any non-linear NR terms and leads to PLQY = 1 at high W . The ABC and SRH+ models can potentially describe this regime since they contain Auger recombination terms (Fig. 4c and d ).

Fitting of the PLQY( f , P ) maps and PL decay kinetics by the ABC, SRH and SRH+ models

To examine the validity of the three theories, we attempted to fit the experimental PLQY( f , P ) plots and PL decays using all models; the results are shown in Fig. 5 . Before we discuss the fitting results, it is important to stress that each simulated value of PLQY( f , P ) in the PLQY maps and each PL decay curve shown in Fig. 5 is obtained from a periodic solution of the kinetic equations of the corresponding model under pulsed excitation with the required pulse fluence P and repetition frequency f . In practice, this means that we excite the system again and again until the solution PL( t ) stabilizes and begins to repeat itself after each pulse. Details of the simulations are provided in Supplementary Note 12 .

Fig. 5: Fitting of the PLQY( f , P ) maps and PL decays by all models. a ABC, b SRH, c SRH+ models applied to the MAPbI3 film and e ABC, f SRH, g SRH+ models applied to the MAPbI3 film with PMMA interfaces. In the PLQY maps the symbols are experimental points and the lines of the same colour are the theoretical curves. d and h show experimental and theoretical (black lines) PL decays according to the SRH+ model for both samples; laser repetition rate 100 kHz. The pulse fluences are indicated throughout the figure according to the colour scheme shown in e . The theoretical CW regime is shown by the yellow lines in all PLQY maps. The model parameters can be found in Supplementary Note 13 .

When fitting experimental data, it is important to minimize the number of fitting parameters and maximize the number of parameters explicitly calculated from the experimental data. We exploit the experimental data to extract several parameters.
First, considering that in all three models the decay of PL at low pulse fluences is determined exclusively by linear trapping and is thus mono-exponential, we can extract the parameter k_t·N of the SRH and SRH+ models. Indeed, such behaviour is observed experimentally for the studied samples (see Fig. 3a ), allowing us to use the decays at low pulse energies (P1–P3) to directly determine the trapping rates k_t·N. We note, however, that to obtain the best fit using the ABC model, the PL decays were not used to fix the parameter A. Secondly, in the single-pulse excitation regime (i.e. the horizontal lines in the PLQY map), the magnitudes of PLQY at pulse fluences P3 and P4 allow us to determine the ratio k_r/(k_t·N) in the SRH/SRH+ models and the ratio k_r/A for the ABC model. Detailed block schemes of the fitting procedures are provided in Supplementary Note 12 . As discussed above, MAPbI3 samples coated with PMMA cannot be described using the ABC model due to the mismatch of the slope within the quasi-CW regime (Fig. 5a ), while both the SRH and SRH+ models are well suited in this case (Fig. 5b, c ). However, in the high excitation regime (i.e. the saturated region of the quasi-CW regime and the single-pulse regime at pulse fluence P5), SRH+ works much better, highlighting the limitations of the SRH model on its own. Consequently, the entire PLQY( f , P ) map of the PMMA-coated films can be fitted using the SRH+ model with excellent agreement between the theoretical and experimental data (Fig. 5c ). The behaviour of MAPbI3 samples whose surface is left bare (where the PLQY( W ) dependence in the quasi-CW regime shows extra up-bending before reaching saturation) can be approximated using the ABC model (Fig. 5e ) and is well fitted by the SRH+ model (Fig. 5g ). The ABC model indeed works quite well, albeit with an obvious discrepancy in the slope of the quasi-CW regime. A very good fit can be obtained with the SRH/SRH+ models by adjusting k_t, k_n and N to allow the trap filling effect to occur in the medium excitation power range and, at the same time, making the dynamics closer to that of the ABC model by a relative increase of the recombination coefficient k_n (see the section above and Fig. 5g ). As mentioned above, the PL decay rate at low power densities (P1–P3) was used to extract the product k_t·N. This is the only occasion on which the PL decays are used in the fitting procedure of the SRH and SRH+ models. In the fitting procedure for the ABC model, the PL decays are not used at all. Upon determining the fit parameters for each of the models, it is possible to calculate the PL decays under each condition and compare them with the decays measured experimentally. Importantly, the PL decay rates calculated using the ABC model significantly underestimate the measured decay dynamics at all fluences (Supplementary Note 13 ). On the other hand, as shown in Fig. 5d and h , the SRH+ model (as well as SRH, Supplementary Note 13 ) fits the low-fluence decay dynamics well, but systematically underestimates the decay rate at high pulse fluences. It is noteworthy that the mismatch of the initial decay rate at the highest pulse fluence reaches a factor of three to five depending on the sample, still significantly outperforming the fit using the ABC model.
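The periodic-solution procedure described above can be sketched numerically as follows. The functional forms of the SRH+-like terms below (linear trapping, recombination via trapped electrons, radiative recombination, Auger recombination and Auger trapping) follow the qualitative description in the text, but their exact mathematical forms and all parameter values are our assumptions for illustration; the actual equations and fitted constants are given in Supplementary Notes 9 and 13 of the paper.

```python
# Minimal sketch of the periodic pulsed-excitation procedure: integrate
# assumed SRH+-like rate equations between pulses and iterate until the
# solution is (approximately) periodic. All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_r = 5e-11   # radiative coefficient, cm^3/s (assumed)
k_t = 5e-9    # linear trapping, cm^3/s (assumed)
k_n = 1e-10   # trapped-electron / free-hole recombination, cm^3/s (assumed)
k_A = 1e-28   # Auger recombination, cm^6/s (assumed)
k_E = 1e-26   # Auger trapping, cm^6/s (assumed)
N   = 1e15    # trap concentration, cm^-3 (order found for MAPbI3 here)

def rhs(t, y):
    n, p, nt = y                               # free e-, free h+, trapped e-
    trap = (k_t + k_E * p) * n * (N - nt)      # linear + Auger trapping
    rad = k_r * n * p                          # radiative recombination
    srh = k_n * nt * p                         # via trapped electrons
    auger = k_A * (n + p) * n * p              # Auger recombination
    return [-rad - trap - auger, -rad - srh - auger, trap - srh]

def plqy_map_point(pulse_density, f, n_periods=200):
    period, y = 1.0 / f, np.zeros(3)
    for _ in range(n_periods):                 # iterate towards periodicity
        y[0] += pulse_density                  # each pulse adds equal n and p
        y[1] += pulse_density
        sol = solve_ivp(rhs, (0, period), y, method="LSODA",
                        rtol=1e-8, atol=1e3, dense_output=True)
        y = sol.y[:, -1].copy()
    # PLQY over the last period = emitted / absorbed photons
    ts = np.linspace(0, period, 2000)
    n, p, _ = sol.sol(ts)
    return np.trapz(k_r * n * p, ts) / pulse_density

print(plqy_map_point(1.3e15, 1e5))  # e.g. a P3-like density at 100 kHz
```

Repeating this over a grid of ( f , P ) values produces a simulated PLQY map of the kind compared against experiment in Fig. 5; a production implementation would additionally test for convergence to the periodic state instead of using a fixed number of periods.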
Insights regarding the applicability of the ABC, SRH and SRH+ models to the PLQY maps and PL decays are summarized in Table 1; see also Supplementary Fig. 13.3 . Table 1: Comparison of the ability of the three models to describe the PLQY( f , P ) maps and PL decays.

Discussion

Scanning the excitation pulse repetition rate as proposed herein represents a novel experimental approach that transforms routine power-dependent PLQY measurements into a universal methodology for elucidating charge carrier dynamics in semiconductors. Adding the second dimension of pulse repetition rate to the standard PLQY( W ) experiment is not just an update; it is a principal, qualitative change in the information content of the experiment. The difference between PLQY( f , P ) mapping and the standard PLQY( W ) experiment in the CW regime or at a fixed pulse repetition rate is analogous to the difference between a standard NMR spectrum and a 2D NMR spectrum. In our method, we monitor not only the concentrations of free charge carriers, but also the concentration of trapped charges, owing to the overall electroneutrality of the system. Therefore, together with the time-resolved PL decays, the PLQY map in the two-dimensional repetition frequency–pulse fluence parameter space comprises an experimental series that contains all the information concerning the charge dynamics in a given sample. We stress the absolute necessity of unambiguously determining the excitation regime of the experiment, which would not be possible without scanning the pulse repetition rate. For example, a PL intensity decay showing the signal decaying by several orders of magnitude prior to the arrival of the next laser pulse (Fig. 3a ) can still be in the quasi-CW regime due to the presence of long-lived trapped charges ('dark' charges). Such trapped charges cause the so-called photodoping effect, which lingers up to the millisecond timescale and thus holds the 'memory' of the previous laser pulse, leading to a stark influence on the PLQY map. While the importance of distinguishing between the single-pulse and quasi-CW regimes has been noted in several publications before 9 , 31 , it has never been accomplished experimentally for MHPs. Indeed, in none of the published works presenting theoretical fits of experimental PLQY( W ) dependencies was this determination possible, simply because either only CW excitation 14 , 44 or pulsed excitation with only one 17 , 31 , 39 or two (20 MHz and 250 kHz) 16 laser pulse repetition rates was employed. Understanding the excitation conditions is also critically important for the interpretation of the classical experiments in which the PL intensity (or PLQY) is measured as a function of excitation power density ( W ) using a CW light source or a pulsed laser with a fixed repetition rate. Traditionally, the intensity of PL is approximated by a W^(m+1) dependence or, in case PLQY is measured, by W^m (because PLQY ∝ PL/ W ), both leading to a straight line on the double-logarithmic scale 1 , 13 , 31 , 33 , 47 . According to the SRH and ABC models, approximations like these can be valid over a large range of W only at low excitation power density, when there is no trap-filling effect, Auger processes can be neglected and the PLQY is far from saturation. In all other cases, the dependence is not linear on the double-logarithmic scale. As discussed above, SRH predicts m = 0.5 in the CW excitation regime while ABC always predicts m = 1.
However, our experiments reveal that when the excitation is pulsed, intermediate values of m can be obtained because, upon increasing the power density, the excitation regime almost certainly switches from quasi-CW to single-pulse. The change of the slope can be seen in Fig. 1a and in Supplementary Note 8. Consequently, the extracted m cannot be used reliably to interpret the photophysics of the sample, since any value of m can be obtained depending on the conditions of the pulsed excitation. The main message of our work is that any model of charge carrier dynamics considered to be correct should be able to fit not only standard one-dimensional PLQY(W) data, but also the full PLQY(f,P) map and the PL decays at different powers and pulse repetition rates. This criterion is strict and universally applicable. With standard one-dimensional PLQY(W) data, even upon inclusion of the PL decay data, one can find several fundamentally different models that are capable of fitting the data. However, when the multi-dimensional data space consisting of the PLQY(f,P) map and the PL decays is available, such ambiguity becomes highly unlikely. As we have shown above, neither the standard ABC nor the SRH model is capable of describing the complete PLQY maps and predicting the PL decays of the investigated MAPbI3 samples. On the other hand, the addition of Auger recombination and Auger trapping processes to the SRH model (the SRH+ model) leads to an excellent fit of the PLQY maps of all the studied samples. We emphasize that the (f,P) space used in this work is very large, with f varying from 100 Hz to 80 MHz (six orders of magnitude) and the pulse fluence P changing over four orders of magnitude, corresponding to charge carrier densities in the single-pulse excitation regime from ca. 10^13 to 10^17 cm^-3. The SRH+ model also agrees well with the PL decay kinetics at low and medium pulse energies (charge carrier concentrations from 10^13 to 10^15 cm^-3). However, at high pulse fluences (10^16–10^17 cm^-3) the model underestimates the initial decay rate, by up to a factor of five at the highest pulse energies. The Auger rates obtained from the fits (2.8 × 10^-29 cm^6 s^-1 for the PMMA-coated MAPbI3 sample and 1.7 × 10^-29 cm^6 s^-1 for the bare MAPbI3) are in reasonable agreement with the theoretical estimate of 7.3 × 10^-29 cm^6 s^-1 for MAPbI3 from ref. 51, which lies at the low end of the 2 × 10^-29 to 1 × 10^-27 cm^6 s^-1 range reported in the literature 52. Note that increasing the Auger rate constant does not help, because a fit of the PL decay would then result in a lower PLQY than experimentally observed. Therefore, we must conclude that the SRH+ model has limitations. One possible explanation for the mismatch of decay rates at high excitation powers might be provided by experimental errors. It is well documented that the PL of perovskite samples is sensitive to both illumination and environmental conditions, which may lead to either photodarkening or photobrightening of the sample 9, 48, 49, 53. To account for these effects, we paid special attention to monitoring the evolution of the sample under light irradiation throughout the entire measurement sequence. As shown in Supplementary Note 4, the maximum change in PL intensity during the entire measurement series is smaller than a factor of two.
Taking this uncertainty together with other errors inherent to absolute PLQY and excitation power density measurements, missing the decay rates by several times at the highest pulse fluence is not impossible. However, there is a strong indication that the discrepancy reflects a problem of the model rather than of the experiment: the deviation between the theoretical and experimental PL decays is systematic. Experimental PL decay rates at high charge carrier concentrations are faster than predicted for all samples, despite the excellent match of the PLQY(f,P) maps. In future work we intend to test several additional concepts that might increase the predicted PL decay rate without strongly affecting the PLQY. One of them is based on the idea that at high charge carrier density the time (a few ns) required to equilibrate the charge carrier concentration over the thickness of the sample (300 nm) becomes comparable to the initial fast PL decay induced by Auger recombination; in other words, the diffusion length becomes smaller in the high-excitation regime 54. In this case diffusion cannot be ignored, and an additional PL decay component should appear, reflecting the decreasing charge carrier concentration as carriers diffuse from the initially excited layer (set by the excitation light penetration depth of 100 nm) towards the opposite surface of the 300-nm-thick film. This process is often discussed in the context of charge carrier dynamics in large single crystals, regardless of the excitation conditions 12. Supporting this notion is the fact that, in order to model a MAPbI3 solar cell under operation 55, a much lower charge carrier mobility (around 10^-2 cm^2 V^-1 s^-1) than that obtained spectroscopically (1–30 cm^2 V^-1 s^-1) 15 has to be assumed, which suggests that the actual diffusion coefficient might be smaller than expected. Another possible contributing factor originates from a local charge carrier distribution inside the sample, caused by, for example, funnelling of charge carriers due to the energy landscape and/or variations of charge mobilities 13. The presence of a small fraction of charge carriers concentrated in local nanoscale regions can lead to an apparent fast PL decay at early times, accompanied by a relatively small effect on the total PLQY. In addition, high charge concentrations may cause carrier degeneracy effects, because the charge carriers then occupy all available states with energies below kT (a degenerate Fermi gas). Considering that the effective density of states in perovskite materials is relatively low 56, such degeneracy effects should be examined seriously. If present, all rate constants would depend on the charge concentration, which may lead to unexpected effects. Further investigations will reveal which of these, or other, mechanisms can help in describing the PLQY(f,P) and PL decay data space. Despite the moderate success in the high charge concentration regime, the results of the SRH+ fitting still significantly outperform all previous attempts to explain charge carrier dynamics in MAPbI3 samples and allow us to gain valuable insights into the photophysics of the samples investigated herein and the role of traps within them. This is supported by the fact that the effect of charge trapping is most crucial in the low and middle power ranges, where the SRH+ model works very well for both the PLQY maps and the PL decays.
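For readers who wish to experiment with such kinetics, the sketch below integrates one plausible form of the SRH+ rate equations for a single excitation pulse. It is a hedged illustration only: the variable names, parameter values and, in particular, the exact form of the Auger-trapping term are our assumptions for demonstration and may differ from the authors' implementation.

import numpy as np
from scipy.integrate import solve_ivp

def srh_plus(t, y, kr, kt, kn, N, kA, kAt):
    # y = [n, nt, p]: free electrons, trapped electrons, free holes (cm^-3)
    n, nt, p = y
    trap    = kt * n * (N - nt)        # linear trapping into empty traps
    rad     = kr * n * p               # radiative band-to-band recombination
    nonrad  = kn * nt * p              # trapped electron + free hole recombination
    auger   = kA * n * p * (n + p)     # Auger recombination
    auger_t = kAt * n**2 * (N - nt)    # Auger-assisted trapping (assumed form)
    return [-(trap + rad + auger + auger_t),
            trap + auger_t - nonrad,
            -(rad + nonrad + auger)]

n0 = 1e15                                             # density per pulse, cm^-3
pars = (2e-10, 2e-8, 1e-9, 1.2e15, 2.8e-29, 1e-31)    # kr, kt, kn, N, kA, kAt
sol = solve_ivp(srh_plus, (0.0, 1e-5), [n0, 0.0, n0], args=pars,
                method="LSODA", rtol=1e-8, atol=1.0, dense_output=True)

t = np.logspace(-12, -5, 2000)
n, nt, p = sol.sol(t)
plqy = np.trapz(pars[0] * n * p, t) / n0              # single-pulse PLQY estimate
print(f"single-pulse PLQY ≈ {plqy:.3f}")

The quasi-CW limit discussed above would follow by re-injecting n0 every 1/f seconds and iterating pulse to pulse until the trapped-charge population converges.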
Note that, at the current experimental accuracy, we have no reason to complicate the SRH+ model by adding another type of trap and/or thermal de-trapping. The analysis of the PLQY maps reveals that the concentration of the dominant traps in high-quality MAPbI3 films (without PMMA coating) is ~1.2 × 10^15 cm^-3. Very recently, practically the same trap concentration was obtained using impedance spectroscopy and deep-level transient spectroscopy for MAPbI3 samples prepared by exactly the same method 57. This concentration is also in excellent agreement with the range of values previously proposed by Stranks et al. 14, where the trap concentration was estimated by assuming that PL decays become non-exponential exclusively because of trap filling. We note, however, that trap filling is not a necessary condition for non-exponentiality of a PL decay: it suffices that the non-linear recombination rate (radiative, Auger, etc.) is faster than the trapping rate, which is determined not only by the trap concentration but also by the capture coefficient. All these and related effects are taken into account when the data are modelled with the SRH+ model developed and employed here, thus allowing the extraction of trap concentrations without any special assumptions. Note that for the bare MAPbI3 sample the estimate of the trap concentration is reliable, because the influence of the trap concentration alone is clearly decoupled from that of the trapping constant in the regime of trap filling and of the conversion from SRH-like to ABC-like behaviour, as observed for the bare MAPbI3 sample at moderate excitation power. Several studies have established the important role that surface defects play in determining the optoelectronic properties of perovskite thin films 58, 59, 60, yet traditional PLQY measurements do not offer a reliable method to extract the defect density in perovskite films or to investigate how surface modifications influence this defect density. Since PLQY(f,P) mapping allowed us to extract the density of defects in bare MAPbI3 films, we applied the same analysis to the PMMA-coated samples. We find that coating the top surface of MAPbI3 with PMMA changes the picture drastically in terms of both the concentration and the nature of the dominant traps. No indication of trap filling is observed in the PLQY(f,P) maps, which allows us to provide only a lower estimate for the trap density in these samples (≈2 × 10^17 cm^-3). The only part of the PLQY(f,P) map where the trap concentration and the trapping rate are decoupled is the region in which the PLQY saturates, so an upper estimate of the trap concentration is not reliable because this region also depends on the parameters of the Auger processes. The strong increase in the trap concentration is accompanied by a decrease of the trapping rate constant k_t and of the non-radiative recombination rate constant k_n by at least one order of magnitude. This can be interpreted by considering the traps in the PMMA-coated sample to be more prevalent, yet "weaker", than those in the bare MAPbI3 sample in terms of the trapping and recombination rates introduced by each individual trap. These results suggest that the addition of PMMA at the top surface leads to the creation of weak traps which, owing to their very large concentration, override the effect of the stronger, yet less common, traps present in MAPbI3 films that did not undergo the surface treatment.
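The statement above, that trap filling is not required for non-exponential decays, can be made quantitative: bimolecular recombination outpaces linear trapping once k_r n exceeds k_t N. A small sketch with illustrative numbers (not the fitted values of any particular sample):

def onset_density(kr, kt, N):
    # Carrier density above which radiative bimolecular recombination
    # (k_r * n * p ≈ k_r * n^2) outpaces linear trapping (k_t * N * n),
    # so the PL decay bends even without trap filling.
    return kt * N / kr

# e.g. kt = 2e-8 cm^3/s, N = 1.2e15 cm^-3, kr = 2e-10 cm^3/s
print(f"{onset_density(2e-10, 2e-8, 1.2e15):.1e} cm^-3")   # ~1.2e17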
We note that coating with polymers (including PMMA), and with organic molecules in general, is a common method employed in the literature to protect MAPbI3 samples from environmental effects during PL studies 61, 62, and also to reduce non-radiative recombination and improve the PLQY of the material 63. Yet our results, using PMMA as an example, reveal that such a treatment fundamentally modifies the photophysics of the perovskite layer. More importantly, the exceptional sensitivity of the PLQY(f,P) mapping method to interfacial modifications illustrates its efficacy for studying charge carrier dynamics not only in films, but also in multilayers and complete photovoltaic devices. Another question that remains under debate in the perovskite community is the role of bulk defects in the charge carrier dynamics of perovskite films. While some reports claim that such bulk defects, found for example at grain boundaries, do not influence charge recombination 64, 65, other reports suggest that such defects do influence the optoelectronic quality of the perovskite layer 66, 67. Considering these contradictory reports, it is clear that traditional PLQY measurements are not capable of identifying the role of bulk defects. We believe that the PLQY(f,P) map is the best possible fingerprint of a sample in the context of its charge recombination pathways and may aid in resolving this and other open questions in the field. We predict that this non-invasive, simple and inexpensive method will find practical applications in controlling and optimizing semiconducting materials and the devices based on them. Conclusions To summarize, we examined the validity of the commonly employed ABC and SRH kinetic models in describing the charge dynamics of the metal halide perovskite semiconductor MAPbI3. For this purpose, we developed a novel experimental methodology based on PL measurements (PLQY and time-resolved decays) performed in the two-dimensional space of the excitation pulse energy and the repetition frequency of the laser pulses. The measured PLQY maps allow for an unmistakable distinction between samples and, more importantly, between the single-pulse and quasi-continuous excitation regimes. We found that neither the ABC nor the SRH model can simultaneously explain the complete PLQY maps of the MAPbI3 samples and predict the PL decays. Each model is valid only in a limited range of parameters, which may vary strongly between different samples. On the other hand, we show that extending the SRH model by adding Auger recombination and Auger trapping (the SRH+ model) results in an excellent fit of the complete PLQY maps for all the studied samples. Nevertheless, even this extended model systematically underestimates the PL decay rates at high pulse fluences, pointing towards the existence of additional processes in MAPbI3 that must be considered to explain the charge carrier dynamics fully. Our study clearly shows that neither PL decay nor PLQY data alone are sufficient to elucidate the photophysical processes in perovskite semiconductors. Instead, combined PLQY mapping and time-resolved PL decays should be used to elucidate the excitation dynamics and energy loss mechanisms in luminescent semiconductors. Our experimental approach provides a strict criterion for testing any theoretical model of charge dynamics: the requirement that it be able to fit the PLQY(f,P) map and the PL decays at different powers and pulse repetition rates.
Methods Thin film preparation All samples were prepared from the same perovskite precursor, which was prepared with a 1:3 molar ratio of lead acetate trihydrate and methylammonium iodide dissolved in dimethylformamide (Supplementary Note 5). For the samples with PMMA between the glass and the perovskite layer, PMMA was spin-coated on the clean substrates at 3000 rpm for 30 s and annealed at 100 °C for 10 min. The perovskite precursor was spin-coated at 2000 rpm for 60 s on glass or glass/PMMA substrates, followed by 25 s of dry-air blowing, 5 min of drying at room temperature and 10 min of annealing at 100 °C. For the samples with PMMA on top of the perovskite layer, no further annealing was applied after the PMMA deposition. PL measurements Photoluminescence microscopy measurements were carried out using a home-built wide-field microscope based on an inverted fluorescence microscope (Olympus IX-71) (Supplementary Note 1). A 485 nm pulsed laser (ca. 150 ps pulse duration) driven by a Sepia PDL 808 controller (PicoQuant) was used for excitation, with the repetition rate tuned from 100 Hz to 80 MHz. The laser irradiated the sample through an objective lens (Olympus ×40, NA = 0.6) with a ~30 µm excitation spot size. The emission of the sample was collected by the same objective and detected by an EMCCD camera (Princeton Instruments, ProEM 512B). Two motorized neutral optical density (OD) filter wheels were used: one in the excitation beam path to vary the excitation fluence over four orders of magnitude, and one in the emission path to avoid saturation of the EMCCD camera. The whole measurement of a PLQY(f,P) map was fully automated and took ~3 h (see Supplementary Note 2 for details). Automation was crucial for avoiding human errors in the measurement of so many data points (about 100 data points per PLQY(f,P) map) and for minimizing the light exposure of the sample. Time-resolved photoluminescence (TRPL) measurements were carried out using the same microscope by adding a beam splitter in front of the EMCCD and redirecting part of the emission light to a single-photon counting detector (PicoQuant PMA Hybrid-42) connected to a TCSPC module (PicoHarp 300). Absolute PLQY measurements were performed using a 150 mm Spectralon integrating sphere (Quanta-φ, Horiba) coupled through an optical fibre to a compact spectrometer (Thorlabs CCS200). The sample PL was excited by the same laser at an 80 MHz repetition rate and 0.01 W/cm^2 excitation power density. This reference point was then used to calculate the absolute PLQY for all combinations of pulse fluence and frequency (Supplementary Notes 2 and 3). It is important to stress that the whole acquisition of a PLQY(f,P) map was fully automated and the sample was exposed to light only during the measurements. This resulted in a rather small total irradiation dose of about 200 J/cm^2 (equivalent to 2000 s at 1 Sun) per PLQY(f,P) map, accumulated over 85 acquisitions during about 4 h. Note that 90% of this dose was accumulated at the maximum power P5, which corresponds to 1600 Suns when the highest repetition rate of 80 MHz is used. This allowed us to keep the effects of light-induced PL enhancement/bleaching on the measurements minimal. Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request. Code availability The codes and algorithms used for data fitting are available from the corresponding authors upon reasonable request.
Metal halide perovskites have been under intense investigation over the last decade due to the remarkable rise in their performance in optoelectronic devices such as solar cells or light-emitting diodes. Despite tremendous progress in this field, many fundamental aspects of the photophysics of perovskite materials remain unknown, such as a detailed understanding of their defect physics and charge recombination mechanisms. These are typically studied by measuring the photoluminescence (i.e., the emission of light upon photoexcitation) of the material in both the steady-state and transient regimes. While such measurements are ubiquitous in the literature, they do not capture the full range of the photophysical processes that occur in metal halide perovskites and thus represent only a partial picture of their charge carrier dynamics. Moreover, while several theories are commonly applied to interpret these results, their validity and limitations have not been explored, raising concerns regarding the insights they offer. To tackle this challenging question, a trinational team of researchers from Lund University (Sweden), the Russian Academy of Sciences (Russia) and the Technical University of Dresden (Germany) has developed a new methodology for the study of lead halide perovskites. The methodology is based on the complete mapping of the photoluminescence quantum yield and decay dynamics in the two-dimensional (2D) space of both the fluence and the frequency of the excitation light pulses. Such 2D maps not only offer a complete representation of the sample's photophysics, but also allow researchers to examine the validity of theories by applying a single set of theoretical equations and parameters to the entire data set. "Mapping a perovskite film using our new method is like taking its fingerprints: it provides us with a great deal of information about each individual sample," says Prof. Ivan Scheblykin, Professor of Chemical Physics at Lund University. "Interestingly, each map resembles the shape of a horse's neck and mane, leading us to fondly refer to them as 'perovskite horses,' which are all unique in their own way." "The wealth of information contained in each 2D map allows us to explore different possible theories that may explain the complex behavior of charge carriers in metal halide perovskites," adds Dr. Pavel Frantsuzov from the Siberian Branch of the Russian Academy of Sciences. Indeed, the researchers discovered that the two most commonly applied theories (the so-called "ABC theory" and the Shockley-Read-Hall theory) cannot explain the 2D maps across the entire range of excitation parameters. They propose a more advanced theory that includes additional nonlinear processes to explain the photophysics of metal halide perovskites. The diagram depicts a typical 2D photoluminescence map that resembles the shape of a horse's neck and mane. Credit: I. Scheblykin / Y. Vaynzof. The researchers show that their method has important implications for the development of more efficient perovskite solar cells. Prof. Dr. Yana Vaynzof, Chair for Emerging Electronic Technologies at the Institute for Applied Physics and Photonic Materials and the Center for Advancing Electronics Dresden (cfaed), explains: "By applying the new methodology to perovskite samples with modified interfaces, we were able to quantify their influence on the charge carrier dynamics in the perovskite layer by changing, for example, the density and efficacy of traps.
This will allow us to develop interfacial modification procedures that will lead to optimal properties and more efficient photovoltaic devices." Importantly, the new method is not limited to the study of metal halide perovskites and can be applied to any semiconducting material. "The versatility of our method and the ease with which we can apply it to new material systems is very exciting! We anticipate many new discoveries of fascinating photophysics in novel semiconductors," adds Prof. Scheblykin. The work has now been published in Nature Communications.
10.1038/s41467-021-23275-w
Physics
Switching of K-Q intervalley trions fine structure and their dynamics in n-doped monolayer WS2
Jiajie Pei et al, Switching of K-Q intervalley trions fine structure and their dynamics in n-doped monolayer WS2, Opto-Electronic Advances (2022). DOI: 10.29026/oea.2023.220034
https://dx.doi.org/10.29026/oea.2023.220034
https://phys.org/news/2022-12-k-q-intervalley-trions-fine-dynamics.html
Pei JJ, Liu X, del Águila AG, Bao D, Liu S et al. Switching of K-Q intervalley trions fine structure and their dynamics in n-doped monolayer WS2. Opto-Electron Adv 6, 220034 (2023). doi: 10.29026/oea.2023.220034 Article Open Access Switching of K-Q intervalley trions fine structure and their dynamics in n-doped monolayer WS2 Jiajie Pei 1,2, Xue Liu 3, Andrés Granados del Águila 3, Di Bao 3, Sheng Liu 3, Mohamed-Raouf Amara 3, Weijie Zhao 3, Feng Zhang 1, Congya You 4, Yongzhe Zhang 4, Kenji Watanabe 5, Takashi Taniguchi 5, Han Zhang 1 and Qihua Xiong 6. Author affiliations: 1. Collaborative Innovation Center for Optoelectronic Science and Technology, International Collaborative Laboratory of 2D Materials for Optoelectronic Science and Technology of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China 2. College of Materials Science and Engineering, Fuzhou University, Fuzhou 350108, China 3. Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore 4. College of Materials Science and Engineering, Beijing University of Technology, Beijing 100124, China 5. Research Center for Functional Materials, International Center for Materials Nanoarchitectonics, National Institute for Materials Science, Tsukuba, Ibaraki 305-0044, Japan 6.
State Key Laboratory of Low Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China. Corresponding authors: H Zhang, E-mail: hzhang@szu.edu.cn; QH Xiong, E-mail: qihua_xiong@tsinghua.edu.cn. Received 12 February 2022; Accepted 11 May 2022; Available online 28 October 2022; Published 28 April 2023. Abstract Monolayer group VI transition metal dichalcogenides (TMDs) have recently emerged as promising candidates for photonic and opto-valleytronic applications. The optoelectronic properties of these atomically thin semiconducting crystals are strongly governed by tightly bound electron-hole pairs such as excitons and trions (charged excitons). The anomalous spin and valley configurations at the conduction band edges in monolayer WS2 give rise to even more fascinating valley many-body complexes. Here we find that the indirect Q valley in the first Brillouin zone of monolayer WS2 plays a critical role in the formation of a new excitonic state that has not been well studied. By employing a high-quality h-BN-encapsulated WS2 field-effect transistor, we are able to switch the electron concentration within the K and Q valleys at the conduction band edges. Consequently, a distinct emission feature can be excited in the high electron doping region. This feature has a population that competes with that of the K valley trion, and it exhibits a nonlinear power-law response and nonlinear lifetime dynamics under doping. Our findings open up a new avenue for the study of valley many-body physics and quantum optics in semiconducting 2D materials, and provide a promising route of valley manipulation for next-generation entangled photonic devices. Keywords: 2D materials / WS2 / charged excitons / trions / indirect Q-valley / valleytronics Introduction Transition metal dichalcogenides (TMDs) have attracted great attention as potential candidates for novel optoelectronic applications 1, 2 in recent years, owing to their unique excitonic properties and strong many-body effects 3. The tightly bound electron-hole quasiparticles (excitons, trions, biexcitons, etc.) that originate from the excitonic effect are crucial for the optoelectronic properties of TMDs and their devices 4-9. The coupling of valleys to excitonic states gives rise to the so-called valley many-body complexes 10-13, which provide the possibility of manipulating the valley index via optical probes. Their unique properties provide an attractive platform for research in fundamental physics, quantum optics, valleytronics, etc. 14-24. For molybdenum compounds (MoS2, MoSe2), normally two main excitonic species are observed in the photoluminescence (PL) spectra, namely the exciton and the trion 7, 8, 25, 26, owing to their aligned spins at the conduction band minimum (CB) and the valence band maximum (VB). In contrast, recent studies have shown that the tungsten compounds (WS2, WSe2) have an opposite spin configuration at the conduction band minimum and the valence band maximum 27-29, which significantly affects their excitonic emissions and underpins even more fascinating valley excitonic states in these compounds. For instance, the brightening of spin-dark excitons 9, 30, 31, the observation of biexcitons 32-34 and even higher-order many-body complexes 6, 35, 36 have been reported.
Normally, only the direct K and K' valleys have been considered in the interpretation of such excitonic states. The indirect Q valley (sometimes also referred to as Λ or Σ), which possesses the same spin and an energy level very close to that of the K valley in the first Brillouin zone of monolayer WS2 37-39, has received less attention. This valley was recently found to significantly influence the excited-state distribution under above-gap excitation 40-42, whereas its impact on the formation of valley many-body complexes has not been well studied so far. To unravel the exact nature of the emissions, it is crucial to probe the transition processes of the many-body species while modulating the carrier densities within the different valleys at the conduction band edges. Here we probed the indirect Q-valley charged states by tuning the Fermi energy with a high-quality h-BN-encapsulated WS2 field-effect transistor. A distinct emission feature, lying ~20 meV lower in energy than the conventional trion of monolayer WS2 and with a population that competes with it, was stimulated when the sample was exposed to high excitation power or high electron doping. We found that the actual doping level of the sample has a significant impact on the power-law response of this emission feature, and that the carrier lifetime of this excitonic state, probed by time-resolved photoluminescence (TRPL) measurements, also shows a strong gate dependence. The nonlinear power and gate responses are due to the variation of the dark exciton population induced by the changing Fermi level. Our study reveals the critical role of the indirect Q valley in the formation of valley many-body complexes and provides an efficient way of manipulating such complexes in transition metal dichalcogenides for future opto-valleytronic applications. Results Upon photoexcitation of the WS2 monolayer, electrons and holes are generated and then bound together at energy-degenerate valleys of the first Brillouin zone (Fig. 1(a)), giving rise to various types of valley excitons and trions 10-13. Previous studies found that the conduction band of monolayer WS2 has ~35 meV of spin splitting at the K valley of the first Brillouin zone, while the spin of the conduction band minimum is opposite to that of the valence band maximum 27, 29, 37. The spin configurations of the conduction band and valence band are illustrated in Fig. 1(a) with arrows and different colors. Such spin-opposite valleys at the K/K' points of the conduction band have been studied extensively. Recent calculations show that the Coulomb interaction of electron-hole pairs in the intermediate Q valley is rather strong (70–100 meV larger than for the K valley exciton) in WSe2/WS2, owing to the much larger effective mass of the Q valley compared to the K valley 27, 29. This has recently been proved experimentally with time-resolved XUV micro-angle-resolved photoemission spectroscopy for WSe2 40 and WS2 41, where the momentum-indirect Q valley excitons were found to be rather significant. Figure 1. Schematic diagram and device characterization. (a) Schematic drawing of the spin configurations for monolayer WS2 in the conduction band and valence band at the K and K' points of the first Brillouin zone. The bands with two different spin configurations are schematically drawn using two different colors (blue and red), annotated with arrows representing different spins. The symbols "e" and "h" represent electrons and holes, respectively.
Scattering pathways of electrons are denoted by the orange arrows. The green wavy arrow represents the excitation photons. (b) The possible valley exciton and trion emissions from the K_V valley in momentum space. Upon linear optical excitation, the landscape of excitons and trions with opposite spin configurations is degenerate; here we display the valley excitons and trions for only one spin configuration. Fermi level changes are represented by orange and green dashed lines. The Coulomb interactions of the exciton and trions are denoted by the areas filled with red, orange and green. (c) Optical micrograph of the h-BN/1L-WS2/h-BN sandwiched sample. The scale bar is 5 μm. (d) Schematic plot of the heterostructure device. (e) The drain-source current as a function of back-gate voltage for various source-drain biases. From this transfer curve, the WS2 monolayer is found to be an n-type semiconductor. Note that the gate-dependent PL measurements in the main text were performed at zero source-drain bias. The photoexcited electrons at the K valley can be scattered either to the K' or the Q valley via phonons (Fig. 1(a)) to form so-called dark excitons 27, 41, 42. The strong exciton binding energies of such excitons make them energetically more favorable than the K valley bright exciton in WS2. However, they are normally not observable in the PL spectra because of their nonzero center-of-mass momentum. When the sample is electrically n-doped, both the K' and Q valleys can be filled with electrons that interact directly with the K valley exciton to form so-called intervalley trions, as illustrated in Fig. 1(b). These configurations give rise to two types of intervalley trions in WS2, X_T and X_T^Q. The former has previously been classified as part of the trion fine structure 11, 13, while the latter, X_T^Q, has not been reported before. We conducted optical spectroscopy measurements of a monolayer WS2 sample, configured as a typical field-effect transistor, to unravel the emission species upon back-gate potential modulation. As shown in Fig. 1(c) and 1(d), the monolayer WS2 sample is encapsulated by two pieces of few-layer, high-quality h-BN to minimize the influence of surface defect states on the PL spectra 6, 30. The WS2 sample is grounded and the back-gate voltage is applied to the degenerately doped n+ Si substrate, with 300-nm-thick SiO2 as the dielectric layer. The as-prepared FET displays n-channel depletion-mode behavior with a turn-on voltage of ~45 V (Fig. 1(e)). The PL spectrum of the sample measured at zero gate voltage at 10 K is shown in Fig. 2(a). Six PL emission features are clearly observed: the peak with the highest energy, at ~2.075 eV, arising from the exciton (X_0); the associated negatively charged exciton (X_T) at ~37 meV below X_0; and three lower-energy peaks labelled L_s, which are normally attributed to localized states 30, 34 or related to the valley phonon replicas of dark trions 43. The fine structure of the trion X_T could not be resolved in our measurements, possibly because of peak broadening due to electron doping or an insufficiently low sample temperature. The most prominent peak (X_T^Q), located at ~2.025 eV, is the focus of the current study. Fig. 2(b) shows the color contour plot of the PL spectra at different doping densities (corresponding to back-gate voltages from –60 V to 60 V).
As the electron doping density is increased, the intensity of the X_0 peak decreases gradually, while the intensities of the X_T and X_T^Q peaks increase slowly from –60 V to 0 V. When the back-gate voltage increases from 0 V to 60 V, the X_0 peak disappears and the intensity of the X_T peak drops rapidly. On the contrary, the intensity of the X_T^Q peak grows dramatically until it dominates the spectrum at 60 V, as a result of the increased Fermi level. On the other hand, the increase of the Fermi level enlarges the magnitude of the Stokes shift 8, resulting in a red-shift of the X_T and X_T^Q trion peaks with increasing electron doping (Fig. 2(b)). Figure 2(c) shows the integrated PL intensities of these emission features, extracted from the detailed fits of all peaks shown in Fig. S1. Interestingly, the intensities of X_T and X_T^Q appear to be in a competitive relationship that can be switched by the gate (Fig. 2(c)) and follows the Boltzmann distribution (calculated in Supplementary information Section 2). Figure 2. Electrical tuning of the PL spectra of excitonic species. (a) A typical PL spectrum of the monolayer WS2 at T = 10 K excited by a 2.33 eV laser. The labels X_0 and X_T represent the bright exciton and trion, respectively. The label X_T^Q represents the Q-valley charged state, and the remaining three peaks, labelled L_s, represent localized states. (b) Color plot of the measured PL spectra for monolayer WS2 as a function of back-gate voltage at 25 μW excitation power. The black dashed lines are a guide to the eye showing the positions of the emission peaks. The red dashed arrow illustrates the transition trend of the XX– peak intensity under doping, which should be opposite to that of the X_T^Q feature. (c) Integrated PL intensities of X_0 (black circles), X_T (blue circles) and X_T^Q (red triangles) as a function of back-gate voltage. The solid lines are fits using the equations in Supplementary information Section 2. Power-law analysis is normally used to further characterize the nature of excitonic complexes 3. However, we found that the power-law trend of X_T^Q varies with the actual doping of the sample (Fig. 3(a, b)), which could be due to the variation of the exciton/trion population induced by the changing Fermi level. The corresponding integrated PL intensity of the X_T^Q peak as a function of excitation power is shown in Fig. 3(c). When the sample is at –60 V back-gate tuning, the power-law slope of this peak is α ~1.42. Meanwhile, the increase of the exciton X_0 as a function of excitation power becomes sublinear with a power-law slope of α ~0.84, while the power-law slope for the trion X_T is α ~1.16 (Fig. S2). A transition from excitons to trions occurs as the excitation power increases. In contrast, when the sample is heavily n-doped (at 60 V back-gate tuning), the power-law slope of X_T^Q becomes α ~0.95 (Fig. 3(c)), which means that the PL intensity of this feature increases nearly linearly with the excitation. The power-law slope of X_T^Q can be tuned continuously from 1.42 to 0.95 by sweeping the back-gate voltage from –60 V to 60 V (Fig. 3(c)).
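As noted above, the gate-switched competition between X_T and X_T^Q follows a Boltzmann distribution (Supplementary information Section 2). A toy Python estimate of this switching is sketched below; the functional form, the 20 meV value of ΔE_QK and the Fermi-level shifts are illustrative assumptions, not the authors' full mass-action model.

import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K

def q_trion_fraction(dE_QK, dE_F, T=10.0):
    # Toy Boltzmann weight for forming a trion with a Q-valley electron:
    # the effective barrier dE_QK is reduced by the Fermi-level shift dE_F.
    w = np.exp(-(dE_QK - dE_F) / (kB * T))
    return w / (1.0 + w)

for dE_F in (0.0, 0.010, 0.020, 0.030):     # hypothetical shifts, eV
    print(f"dE_F = {dE_F*1e3:4.0f} meV -> X_T^Q fraction "
          f"{q_trion_fraction(0.020, dE_F):.3f}")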
Given the variation of the power-law slope with sample doping, it is important to note that the X_T^Q feature is unlikely to be a biexciton emission as previously reported 32, 44. Indeed, it has recently become well accepted that the binding energy of the biexciton (~24 meV) is smaller than that of the trion 36, 45. Figure 3. Influence of electrical doping on the carrier dynamics of excitonic species. (a, b) PL spectra normalized per μW of excitation power for monolayer WS2 as a function of excitation power at –60 V and 60 V back-gate voltages. The dashed lines are a guide to the eye showing the peak positions of X_0 and X_T^Q. (c) Log-log plot of the integrated PL intensity of X_T^Q as a function of excitation power for gate voltages from –60 V to 60 V. The solid lines are power-law fits with I_PL = P^α. The dashed line is a guide to the eye showing a power-law slope of α = 1. The inset illustrates the electron concentration in the direct and indirect valleys at low and high doping and pumping. (d) Measured time-resolved PL traces (dots) for X_T and X_T^Q at a temperature of 10 K under pulsed laser excitation at a photon energy of 3.1 eV and a power of 0.5 μW. The dashed line (IRF) is the instrument response function. The solid lines are double-exponential fits using the equation I = A exp(−t/τ_1) + B exp(−t/τ_2) + C, convolved with the instrument response. The fast decay lifetimes τ_1 for X_T and X_T^Q were extracted to be 20.9±2 ps and 38.5±2 ps, respectively. The slow decay lifetimes τ_2 for X_T and X_T^Q were extracted to be 249±8 ps and 316±5 ps, respectively. (e) Measured time-resolved PL traces (dots) for X_T^Q at different back-gate voltages (from –60 V to 60 V). The solid lines are the corresponding double-exponential fits. (f) The statistical values of the fast decay lifetime τ_1 and the slow decay lifetime τ_2 from the fits for X_T and X_T^Q at different back-gate voltages. The gate-dependent power-law responses can be explained as follows. At –60 V back-gate tuning, the Fermi level is below the lower indirect valley. Upon low-power excitation, most of the electrons are scattered into the lower-lying valley, forming dark excitons. Only a small fraction of the photoexcited electrons participates in the photoluminescence process from the direct valley, as illustrated in the schematic diagram at the bottom of Fig. 3(c). As the excitation power increases, the electron density in the indirect valley continues to increase until it finally reaches saturation, after which most of the electrons remain in the direct valley, as illustrated in the upper right panel of Fig. 3(c). Thus, the total PL emission experiences a super-linear increase from low to high power. At 60 V, in contrast, the indirect valley has already been filled with electrons by doping (upper left panel of Fig. 3(c)). At every power, the photoexcited electrons recombine efficiently through the direct valley rather than scattering to other valleys. This explains why the power-law slope of the PL emission is close to 1 at 60 V back-gate voltage.
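The TRPL traces analysed next are fitted with the double-exponential model quoted in the Fig. 3 caption, I = A exp(−t/τ_1) + B exp(−t/τ_2) + C, convolved with the instrument response. A self-contained sketch of such a fit on synthetic data is given below; the Gaussian IRF, noise level and starting values are our assumptions.

import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 1000.0, 2.0)                  # time axis, ps
irf = np.exp(-0.5 * ((t - 30.0) / 8.0) ** 2)     # synthetic Gaussian IRF
irf /= irf.sum()

def biexp_irf(t, A, tau1, B, tau2, C):
    # I(t) = A exp(-t/tau1) + B exp(-t/tau2) + C, convolved with the IRF
    model = A * np.exp(-t / tau1) + B * np.exp(-t / tau2)
    return np.convolve(model, irf)[: len(t)] + C

truth = (0.85, 20.9, 0.15, 249.0, 1e-4)          # X_T-like weights/lifetimes
rng = np.random.default_rng(0)
data = biexp_irf(t, *truth) + rng.normal(0.0, 1e-3, t.size)
popt, _ = curve_fit(biexp_irf, t, data, p0=(1.0, 30.0, 0.1, 300.0, 0.0))
print(f"tau1 = {popt[1]:.1f} ps, tau2 = {popt[3]:.1f} ps")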
Time-resolved photoluminescence (TRPL) measurements (Methods) were used to further probe the carrier dynamics of these excitonic species. Figure 3(d) shows the normalized TRPL traces for the X_T and X_T^Q emission features. Note that the fast X_0 process is not discussed here since it approaches our instrument limit. The TRPL traces show a similar fast rise at the beginning, but the subsequent decay processes differ significantly. For the trion X_T, the decay contains two distinct timescales, namely a fast decay (τ_1) with ~85% of the weight and a slow decay (τ_2) with ~15% of the weight. From a double-exponential fit, the values of τ_1 and τ_2 for X_T are extracted to be (20.9±2) ps and (249±8) ps, respectively. The fast decay τ_1 of tens of picoseconds has been attributed to nonradiative decay caused by carrier-carrier or carrier-phonon scattering, while the slow decay τ_2 of hundreds of picoseconds is the radiative decay lifetime related to interband electron-hole recombination 46, 47. In contrast, the values of τ_1 (~38.5±2 ps) and τ_2 (~316±5 ps) for the X_T^Q emission are much longer, and the weight of the radiative decay (~60%) is more prominent (Fig. 3(d)). More interestingly, the decay of the X_T^Q emission shows a strong gate dependence (Fig. 3(e–f)), whereas that of the trion X_T changes negligibly with the gate (Fig. 3(f) and Fig. S3). We observed similar results when changing the excitation power, as shown in Fig. S4. The radiative lifetime (τ_2) of X_T does not show a noticeable change as a function of laser power, whereas the radiative lifetime (τ_2) of X_T^Q decreases from 530 ps to 328 ps when the laser power is increased from 0.2 μW to 4 μW. These results indicate that increasing the laser power drives a process of carrier dynamics similar to that of electron doping in monolayer WS2. The nonlinear carrier dynamics of the X_T^Q feature implies a transition relationship between the two types of trions. At every Fermi level, the scattering of electrons to the K'_C valley is always efficient because of its lower energy; thus the radiative lifetime (τ_2) of the X_T trion is always short and responds inconspicuously to doping (Fig. 3(f)). The formation of the X_T^Q trion, in contrast, is suppressed at low Fermi levels but becomes more favorable when the Fermi level reaches the Q_C valley, as a result of the stronger binding of the Q_C-K_V electron-hole pairs 27, 29. Thus, the radiative lifetime (τ_2) of the X_T^Q trion decreases dramatically under doping until it reaches the same timescale (250 ps) as that of X_T (Fig. 3(f)). Finally, we show that the switching between these two types of intervalley trions can also be observed via the thermalization-induced K-Q valley energy variation (Fig. 4(a)). As the temperature decreases from 295 K, the intensity of the X_0 peak decreases gradually, while the intensity of the X_T peak first increases and then decays rapidly below 110 K. Meanwhile, the X_T^Q peak starts to emerge at around 110 K. For quantitative analysis, the peak energies and integrated PL intensities of these emission features are collected in Fig. 4(b, c), extracted from the detailed fits shown in Fig. S5.
All three peaks experience a blue shift when the temperature decreases, and the peak energies can be fitted well (Fig. 4(b)) using a standard semiconductor bandgap dependence equation 7. The integrated PL intensities of these emission features are fitted with the model provided in Supplementary information Section 1. Notice that the total emission of X_T and X_T^Q follows a monotonically increasing trend (Fig. 4(c), violet curve) as a function of temperature. Thus, in the calculation we first estimated the populations of the neutral (X) and charged (X–) states using the mass-action model 7. Then, the populations of the neutral and charged states were further split into two substructures as a result of their energy splitting. The concentration of the energy-split substructures at each temperature can be estimated with the Boltzmann distribution 28 (Supplementary information Section 1). Here, a calibration of the Q-K valley energy difference (ΔE_QK) at each temperature is necessary. According to density functional theory calculations 39, the temperature-variation-induced strain can result in a band renormalization that changes the value of ΔE_QK; that is, the bandgap expands at the K valley and contracts at the Q valley when the sample is subjected to compressive strain, as illustrated in Fig. 4(d). The calculated result matches the experimental data well (Fig. 4(c) and Fig. S6) once this term is added to the Boltzmann distribution equation (Supplementary information Section 1). The Q-K valley energy difference as a function of temperature was fitted as ΔE_QK = 15 + 0.24(k_B T)^2, and the temperature-dependent curve is shown in Fig. 4(e). Since the peak energy of the exciton X_0 is blue-shifted (Fig. 4(b)) and the value of ΔE_QK decreases at low temperature (Fig. 4(e)), the sample appears to experience compressive strain as the temperature decreases. Figure 4. Temperature-dependent PL spectra of the monolayer WS2. (a) PL spectra at different temperatures from 295 K to 10 K. The spectra are vertically shifted for clarity. The dashed lines are a guide to the eye showing the peak positions of X_0, X_T and X_T^Q. (b) Peak energies of the X_0, X_T and X_T^Q emissions as a function of temperature. The solid lines are fits with a standard semiconductor bandgap dependence: E_g(T) = E_g(0) − S·ħω·[coth(ħω/2k_B T) − 1], where E_g(0) is the ground-state transition energy at 0 K, S is a dimensionless coupling constant and ħω is an average phonon energy. The fitting parameters for X_0 are E_g(0) = 2.08 eV, S = 1.807, ħω = 11.91 meV; for X_T, E_g(0) = 2.043 eV, S = 2.038, ħω = 14.69 meV; and for X_T^Q, E_g(0) = 2.022 eV, S = 1.373, ħω = 13.6 meV. (c) Integrated PL intensities of the X_0, X_T and X_T^Q emissions as a function of temperature. The solid lines are fits using the equations in Supplementary information Section 1. The violet dashed line is the sum of the red and blue solid curves. (d) Schematic drawing of the thermalization-induced shrinking and expansion of the lattice and the related bandgap renormalization. (e) The Q-K valley energy difference at each temperature, extracted from the fit of the experimental results in (c) based on the equations in Supplementary information Section 1.
The temperature-dependent ΔE_QK is about twice the energy offset of the X_0 peak, which implies that the offset between the K and Q valleys is in the opposite direction. The populations of X_T and X_T^Q as a function of the doping level can be estimated by further taking into account the gate-dependent Fermi energy change (ΔE_F) (Supplementary information Section 2). The Fermi energy as a function of back-gate voltage was extracted from the reflectance contrast spectra 48 (Fig. S7). In the calculation, the range of the Q-K valley energy difference ΔE_QK was set from –200 meV to 200 meV to cover the configurations of other TMD materials. The calculated results are shown in Fig. S8, in which the Q-K valley energy differences for monolayer MoS2 (~60 meV), MoSe2 (~190 meV), WSe2 (~30 meV), and mono- and bilayer WS2 (~20 and ~–150 meV), marked with dashed lines, were extracted from density functional theory calculations 37. The variation of the electronic band structure with electric field can be ignored here, since the threshold for band-structure tuning is about 2 V/Å (1 Å = 0.1 nm) according to calculations 49, while the maximum field in our experiments is 0.02 V/Å (60 V/300 nm). The calculated gate-dependent PL transition curves between X_T and X_T^Q are shown in Fig. S8 and are consistent with the experimental observations (Fig. 2(c) and Fig. S9). The X_T^Q emission peak in bilayer WS2 dominates the spectra even at –60 V back-gate voltage (Fig. S9) owing to the much lower Q valley energy level (~–150 meV). For monolayer MoS2 and MoSe2, the Q valley is normally not accessible by gate tuning because of its much higher energy level (~60 and ~190 meV); nevertheless, it may still be reached if the samples have high initial doping or compressive strain. Finally, we note that the X_T^Q feature differs from the previously reported trion-exciton complex (XX–) 6, 35, 36, 45. According to previous observations, the XX– appears only in the very low doping region of the sample, and its gate-dependent intensity shows the opposite trend under electron doping (as illustrated by the red dashed arrow in Fig. 2(b)). In some literature, a similar feature has been observed in the high doping region of monolayer WSe2 and labelled as a fine structure of the trion 10, 50 or as its next charging state 35, 45, yet an interpretation of its exact nature was lacking. Recently, it has also been suggested that the trion feature originates from an exciton interacting with short-range intervalley plasmons (for W compounds) 51, or from an exciton interacting with a Fermi sea of excess carriers, termed an exciton polaron (for Mo compounds) 22, 52. Our interpretation is in fact compatible with both pictures once the indirect Q valley is considered: the trion X_T^Q/X_T can be viewed as a K valley exciton interacting with short-range intervalley plasmons at the indirect Q/K' valley in momentum space, or as an exciton-polaron fine structure in real space. In any case, our observations suggest that the indirect Q valley has a significant impact on the relaxation pathways of exciton complexes when its energy level is lower than or close to that of the direct K valley, which provides a new perspective for understanding the material. Nevertheless, more experiments are needed in the future to fully reveal the role of the Q valley.
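Both fitted temperature dependencies quoted above (the bandgap equation in the Fig. 4(b) caption and ΔE_QK = 15 + 0.24(k_B T)^2) are straightforward to evaluate numerically. A short sketch using the published fit parameters follows; the helper names are ours.

import numpy as np

kB_meV = 8.617e-2   # Boltzmann constant, meV/K

def Eg(T, Eg0_eV, S, hw_meV):
    # E_g(T) = E_g(0) - S*hbar*omega*[coth(hbar*omega / 2 kB T) - 1]
    return Eg0_eV - S * hw_meV * 1e-3 * (1.0 / np.tanh(hw_meV / (2 * kB_meV * T)) - 1.0)

def dE_QK(T):
    # fitted Q-K valley energy difference in meV, with kB*T in meV
    return 15.0 + 0.24 * (kB_meV * T) ** 2

for T in (10.0, 110.0, 295.0):
    print(f"T = {T:5.0f} K: X0 at {Eg(T, 2.08, 1.807, 11.91):.3f} eV, "
          f"dE_QK = {dE_QK(T):.0f} meV")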
We also note that the Q valley in the first Brillouin zone results from the strong hybridization of p- and d-orbitals between the chalcogen atom X and the transition metal atom M 37, which makes these valleys highly sensitive to environmental stimuli such as strain, doping, magnetic field, dielectric environment, etc. The strong Q-K valley interaction suggests that such states are good candidates for tuning the spin/valley entanglement in these materials and their heterostructures. Conclusion We have demonstrated that the indirect Q valley in monolayer WS2 significantly affects the transition pathways of exciton complexes at the band edges. By varying the back gate, we are able to switch the electron concentration within the K and Q valleys of the conduction band. As a result, a remarkable PL emission feature located ~20 meV below the conventional trion can be excited and even becomes dominant at high electron doping. With increasing Fermi level, the scattering of electrons to the Q valley becomes more efficient, facilitating the formation of this charged state. Consequently, we are able to tune its power-law response from linear (α ~0.95) to superlinear (α ~1.42) and its radiative lifetime τ_2 from 880 ps to 250 ps efficiently by gate modulation. These findings provide a new perspective for understanding and manipulating the valley dynamics of monolayer TMDs. The Q-valley excitonic states in two-dimensional TMDs are expected to play critical roles in developing next-generation entangled photonics and valleytronics applications. Materials and methods Sample preparation. The heterostructure consisting of bottom h-BN, monolayer WS2 and top h-BN was fabricated by the standard PDMS-stamp dry-transfer technique 53. Few-layer h-BN (10–20 nm) and monolayer WS2 were first exfoliated from bulk crystals using scotch tape onto a PDMS stamp. Each 2D layer was then transferred onto a 300 nm SiO2/Si substrate to form the hetero-stacking region. The alignment was done carefully so as to expose part of the WS2 for the contacts. Following electron-beam lithography patterning, a Cr (5 nm)/Au (50 nm) contact layer was deposited by thermal evaporation. A lift-off process in acetone was used to remove the sacrificial PMMA layer. The as-fabricated sample was annealed at 120 °C for 2 hours under high vacuum (<10^-5 mTorr; 1 Torr = 133 Pa). Optical measurements. Steady-state photoluminescence spectroscopy was conducted using a spectrometer (Horiba HR-Evolution) equipped with a liquid-nitrogen-cooled charge-coupled device (CCD). A 532 nm solid-state laser was used as the excitation source, whose power was varied from a few microwatts to above 400 μW by a continuous neutral density filter. The numerical aperture of the objective used in our experiments is NA = 0.5. For time-resolved photoluminescence spectroscopy measurements, a Ti:sapphire femtosecond pulsed laser (400 nm, frequency-doubled) with 100 fs pulse duration and 80 MHz repetition rate was used for excitation. The lifetime was measured by a silicon avalanche single-photon detector mounted on a Horiba iHR320 spectrometer dispersed by a 300 grooves/mm grating, driven by a time-correlated single-photon counting system (TCSPC, PicoQuant). The collected PL was spectrally filtered by an 1800 grooves/mm grating monochromator with a bandpass of 2 nm.
A back-gate voltage ranging from −60 V to 60 V was applied by a source-measure unit in a semiconductor parameter analyzer (Agilent B1500A), with the WS 2 flake grounded. Acknowledgements Q. H. Xiong gratefully acknowledges the strong support from Singapore Ministry of Education via AcRF Tier 3 Programme “Geometrical Quantum Materials” (MOE2018-T3-1-002) and AcRF Tier 2 grants (MOE2017-T2-1-040). H. Zhang acknowledges financial support from the National Natural Science Foundation of China (Grant No. 61435010). J. J. Pei acknowledges the National Natural Science Foundation of China (Grant No. 61905156), the China Postdoctoral Science Foundation (Grant No. 2017M622764), and the Natural Science Foundation of Fujian Province (Grant No. 2022J01555). Y. Z. Zhang acknowledges the National Natural Science Foundation of China (Grant No. 61575010) and the Beijing Municipal Natural Science Foundation (Grant No. 4162016). A. G. del Águila gratefully acknowledges the financial support of the Presidential Postdoctoral Fellowship program of the Nanyang Technological University. K. Watanabe and T. Taniguchi acknowledge support from the Elemental Strategy Initiative conducted by MEXT, Japan, and CREST (JPMJCR15F3), JST. Author contributions J. J. Pei, H. Zhang and Q. H. Xiong conceived the idea. J. J. Pei, X. Liu and A. G. del Águila designed the experiments. J. J. Pei prepared the samples and performed the experiments. D. Bao and S. Liu helped with the setup for low-temperature measurements. M. R. Amara helped with the setup for lifetime measurements. C. Y. You and X. Liu helped with the device fabrication. K. Watanabe and T. Taniguchi provided the h-BN crystals. J. J. Pei, X. Liu, D. Bao, A. G. del Águila and Q. H. Xiong analyzed the data. F. Zhang, W. J. Zhao, Y. Z. Zhang and H. Zhang helped with the theoretical analysis. J. J. Pei wrote the manuscript with input from all authors. H. Zhang and Q. H. Xiong supervised the whole project. Competing interests The authors declare no competing financial interests. Supplementary information Supplementary information for this paper is available online.
By analogy with the electron spin degree of freedom, the valley degree of freedom can be used as an information carrier in the design of functional devices. Monolayer group-VI transition metal dichalcogenides (TMDs) have recently emerged as promising candidates for photonic and opto-valleytronic applications due to their excellent photoelectric properties and peculiar energy valley structure. At the same time, the optoelectronic properties of these atomically thin semiconducting crystals are strongly governed by tightly bound electron-hole pairs such as excitons and trions (charged excitons), a consequence of the enhanced Coulomb interactions in the 2D limit. The anomalous spin and valley configurations at the conduction band edges in monolayer WS2 give rise to even more fascinating valley many-body complexes. The coupling of charges, spins, energy valleys, and many-body quasiparticles offers opportunities to manipulate quantum information optically. In particular, controlling these many-body complexes by regulating the valley charge with external fields is an effective route toward novel quantum devices, with prospective applications in quantum computing and quantum communication. The authors of an article now published in Opto-Electronic Advances found that the indirect Q valley in the first Brillouin zone of monolayer WS2 plays a critical role in the formation of a new excitonic state. By employing a high-quality h-BN-encapsulated WS2 field-effect transistor, they were able to switch the electron concentration between the K and Q valleys at the conduction band edges. Consequently, a distinct emission feature could be excited in the high electron doping region. This feature competes with the conventional K-valley trion and obeys the Boltzmann distribution law. Further studies proved that this feature arises from the luminescence of indirect Q-valley trions. It differs from the previously reported trion-exciton complex (XX-), which appears only at very low doping levels, even though the two have very similar energies. The actual doping level of the sample has a significant impact on the power-law response of this emission feature. With increasing Fermi level, the scattering of electrons to the Q valley becomes more efficient, facilitating the formation of such a charged state. Consequently, gate modulation can efficiently tune the power-law response from linear (α~0.95) to superlinear (α~1.42) and the radiative lifetime τ2 from 880 ps to 250 ps. The study suggests that the indirect Q valley has a significant impact on the relaxation pathways of exciton complexes when its energy level is lower than or close to that of the direct K valley, providing a new perspective for understanding the material. The authors also note that the Q valley in the first Brillouin zone results from the strong hybridization of p- and d-orbitals between the chalcogen atom X and the transition metal atom M, which makes these valleys highly sensitive to environmental stimuli such as strain, doping, magnetic field, and dielectric environment. The strong Q-K valley interaction suggests that such states are good candidates for tuning the spin/valley entanglement in these materials and their heterostructures.
These findings open up a new avenue for the study of valley many-body physics and quantum optics in semiconducting 2D materials, as well as provide a promising way of valley manipulation for next-generation entangled photonic devices.
10.29026/oea.2023.220034
Biology
Research proves Midwestern fish species lives beyond 100 years
Alec R. Lackmann et al. Bigmouth Buffalo Ictiobus cyprinellus sets freshwater teleost record as improved age analysis reveals centenarian longevity, Communications Biology (2019). DOI: 10.1038/s42003-019-0452-0
http://dx.doi.org/10.1038/s42003-019-0452-0
https://phys.org/news/2019-05-midwestern-fish-species-years.html
Abstract Understanding the age structure and population dynamics of harvested species is crucial for sustainability, especially in fisheries. The Bigmouth Buffalo ( Ictiobus cyprinellus ) is a fish endemic to the Mississippi and Hudson Bay drainages. A valued food-fish for centuries, they are now a prized sportfish as night bowfishing has become a million-dollar industry in the past decade. All harvest is virtually unregulated and unstudied, and Bigmouth Buffalo are declining while little is known about their biology. Using thin-sectioned otoliths and bomb-radiocarbon dating, we find Bigmouth Buffalo can reach 112 years of age, more than quadrupling previous longevity estimates, making this the oldest known freshwater teleost (~12,000 species). We document numerous populations that are comprised largely (85–90%) of individuals over 80 years old, suggesting long-term recruitment failure since dam construction in the 1930s. Our findings indicate Bigmouth Buffalo require urgent attention, while other understudied fishes may be threatened by similar ecological neglect. Introduction The Bigmouth Buffalo ( Ictiobus cyprinellus ) is one of the largest freshwater fishes endemic to North America, reaching lengths exceeding 1.25 m and body masses >36 kg 1 . Indeed, it is the largest of all catostomids (Cypriniformes: Catostomidae). Bigmouth Buffalo are also unique as the only catostomid with a terminal mouth and planktivorous, filter-feeding tendencies. All other catostomids are benthivorous 2 . The life history of I. cyprinellus was described previously as fast-paced 2 , despite apparent conflicting evidence from two studies reporting failure of some mature females to spawn every year 2 , 3 . One study reported a maximum estimated age of 26 years 4 , but previous reports suggested a younger maximum age (10–20 years), and reproductive maturity occurring as early as the first year of growth 2 , 5 , 6 . This exclusively freshwater species inhabits shallow (<4 m) warm-water lakes and pond-like areas of rivers, and is tolerant of eutrophication and high turbidity 1 , 2 . Shallow habitats are not typically associated with a long lifespan 7 , 8 . Bigmouth Buffalo have been important to human cultures in North America. Several lake names in Minnesota use the word niigijiikaag , the Ojibwe (a regional Native American tribe) name for buffalofish (Klimah, C., Minnesota Department of Natural Resources Fisheries Biologist, 2018, personal communication). Other Minnesota lakes and one county were named Kandiyohi by the Dakota tribe, meaning “where the buffalofish come.” In addition, the city of Buffalo, MN is named after this species 9 . In 1804, Lewis and Clark harvested buffalofish in Nebraska 10 and they have been of commercial importance since the 1800s 11 , 12 . This fishery is valued in the 21st Century at over 1 million USD per year in the Upper Mississippi Basin alone 13 . Despite its value, Bigmouth Buffalo have become increasingly misunderstood over the past century as they became commonly categorized as a “rough fish.” This imprecise term is used in much of the USA to lump many endemic, traditionally nongame fishes, along with unwanted invasive fishes, for purposes of harvest regulation 14 . This pejorative designation has led to the misconception by the general public of Bigmouth Buffalo as an “invasive species” or “a carp,” encouraging its persecution as a sacrificial or unimportant species. 
Contrary to this treatment in the USA, Bigmouth Buffalo were given Special Concern status in the Hudson Bay drainage of Canada in 1989 by the Committee on the Status of Endangered Wildlife in Canada, following documented population decline concomitant with increases in invasive Common Carp ( Cyprinus carpio ) 15 . Bigmouth Buffalo serve as a competitor to the invasive Bighead Carp ( Hypophthalmichthys nobilis ) and Silver Carp ( H. molitrix ) 5 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , as well as the invasive Common Carp 2 , 24 , so these three invasive species pose threats in addition to overharvest. Hence there is a basis for considering Bigmouth Buffalo as an ecological asset, and reason for concern about declining populations of Bigmouth Buffalo that have been documented in the northern parts of their range, including Canada, Minnesota, and North Dakota 1 , 15 . Unfortunately, other North American catostomids may also warrant such concern, with 42 out of 76 species already classified as endangered, threatened, vulnerable, or extinct, according to a recent synthesis on the conservation status of this family 25 . Current harvest of Bigmouth Buffalo is largely unregulated. This is partly because Bigmouth Buffalo have long been unpopular with recreational anglers, as these pelagic filter-feeders rarely take a baited hook or lure and thus are seldom caught by hook-and-line. However, legislative changes in the past decade coincide with a sharp increase in the popularity of bowfishing 26 . Across the USA, bowfishing is now permitted at night; archers can shoot “rough fish” with a bow and arrow under powerful lights, despite little to no regulation or study of this new harvest method 27 . While Bigmouth Buffalo and several other endemic taxa have become prized catches for bowfishers 28 , angler harvest of Bigmouth Buffalo in the USA is currently unregulated in 19 of the 22 states to which they are endemic, where recreational anglers can harvest unlimited numbers. Exceptions include Missouri and Louisiana, which have established take limits, and Pennsylvania, where Bigmouth Buffalo are considered endangered and are illegal to possess 14 . Furthermore, commercial anglers face no limits on the total number of Bigmouth Buffalo harvested in any U.S. state except Pennsylvania, and fish size restrictions on commercial harvest exist only in Louisiana and Mississippi 14 . Given the largely unregulated harvest of this ecologically important and historically valued endemic fish, it is crucial to validate its life history characteristics. We use otoliths (earstones) to estimate demographic characteristics of I. cyprinellus collected from 12 populations in two major drainages in Minnesota, and annulus counts on thin-sectioned otoliths to estimate fish age. The validity of these age estimates is tested using bomb radiocarbon ( 14 C) dating, a method that relies on bomb-produced 14 C from atmospheric thermonuclear testing in the 1950s and 1960s as a time-specific marker 29 . These validated age-at-length data are used to describe Bigmouth Buffalo growth characteristics and age-at-maturity, which differ by an order of magnitude from previously published work on this species. We also report on novel age-related external markings that aid individual recognition and mark-recapture, as well as provide a non-lethal means of age estimation. Results Age analysis During the years 2016 to 2018, we estimated the age for 386 Bigmouth Buffalo by counting annuli in thin-sectioned otoliths (Fig.
1 ), currently the most reliable method for age estimation of teleost fishes 30 . We investigated the three pairs of otoliths (sagittae, lapilli, and asterisci) for growth zone structure that could be interpreted as annual. Specimens used in this study came from 12 populations spanning two drainages in Minnesota: the Red River Basin of the North ( n = 257), a part of the Hudson Bay watershed; and the Mississippi River Basin ( n = 129). From the Red River Basin, 224 fish came from eight lakes along a 26-km reach of the Pelican River Basin in Otter Tail County: Prairie ( n = 7), North Lida ( n = 42), Crystal ( n = 59), Rush ( n = 37), Lizzie ( n = 46), Fish ( n = 1), Big Pelican ( n = 19), and Little Pelican ( n = 13). The remaining 33 Red River Basin specimens came from the Otter Tail River below Orwell Dam. From the Mississippi River Basin, we obtained 129 Bigmouth Buffalo from three lakes: Minnetaga (Kandiyohi County; n = 66) along the Crow River; plus Artichoke (Big Stone and Swift County; n = 52) and Ten Mile (Otter Tail County; n = 11) along the Minnesota River. Fig. 1 Thin-sectioned otoliths. Thin-sectioned lapillus and asteriscus otoliths from four Bigmouth Buffalo ( Ictiobus cyprinellus ) with age estimates of 3, 36, 85, and 112 years at the time of collection. White dots indicate annual growth bands and yellow triangles decade counts. All otolith composite images set to the same scale; bar = 1 mm. Note the well-defined annuli. Age estimates from thin-sectioned otoliths (lapilli and asterisci) were validated using bomb 14 C dating (see Methods). Both of these otoliths were validated for age analysis by the strong agreement between the quantified annuli across all ages (Fig. 2 ). Furthermore, many of the otoliths provided for bomb 14 C dating were scored exclusively using an asteriscus ( n = 7; Table 1 ), even though a lapillus (read only to age two for core extraction) was used for bomb radiocarbon analyses, and expected 14 C results were obtained in all cases. All samples analyzed revealed 14 C values that were consistent with the birth years generated from annulus counts (Table 1 , Fig. 3 ), when compared to expected 14 C levels associated with the bomb 14 C reference records that are available for freshwater bodies of North America (Fig. 4 ). One exception was the sole Mississippi River Basin sample; this anomaly was likely a basin effect, as all other samples were from the Red River Basin. Specifically, samples extracted from annuli representing birth years in pre-bomb, rise, and post-peak decline periods resulted in 14 C values that were consistent with freshwater 14 C reference records (Table 1 , Fig. 3 ). All fish estimated to have hatched prior to atmospheric nuclear testing had otolith core radiocarbon values indicative of a minimum age that exceeded 60 years, as expected. These pre-bomb otolith core 14 C values were very consistent through time (1926–1938), with a mean of –39.4‰ ± 3.3 SD (Fig. 3 ). The one available rise-period specimen (ICCY-09) had a diagnostic Δ 14 C value of 134.9‰ at a birth year of 1960, for a validated age of 58 ± 1–2 years. Variation from this rise time should be within 2 years because of the rapid and time-specific increase in 14 C, assuming the regional hydrogeology is similar to the available 14 C references. Younger fish were also consistent with the general expectation for the bomb 14 C peak and subsequent decline period, to the extent that specimens were available.
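As a side note on the validation logic above, mapping an annulus count to a bomb-14C reference period is simple arithmetic on the birth year. The Python sketch below illustrates it; the period cut-offs are approximate conventions assumed here for illustration, not exact boundaries from the study.

```python
def classify_birth_year(collection_year, annulus_age):
    """Assign an otolith core to a bomb-14C reference period.

    Period boundaries follow the broad scheme used in bomb radiocarbon
    dating: pre-bomb (<1955), rapid rise (1955-1965), post-peak decline
    (>1965). Exact cut-offs vary by region and are assumptions here.
    """
    birth_year = collection_year - annulus_age
    if birth_year < 1955:
        period = "pre-bomb"
    elif birth_year <= 1965:
        period = "rise"
    else:
        period = "post-peak decline"
    return birth_year, period

# Example: a fish collected in 2018 with 92 counted annuli
print(classify_birth_year(2018, 92))  # (1926, 'pre-bomb')
```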
No fish with birth years near the expected peak of bomb-produced 14 C (~1965) were available, but specimens with birth years in the 1970s through to the 2000–2010s showed a temporal 14 C decline consistent with the declining trend shown by reference 14 C records. Fig. 2 Annulus counts of lapillus vs. asteriscus otoliths from the same fish. Comparison of age readings made by the primary reader from thin-sectioned lapillus vs. asteriscus otoliths of the same Bigmouth Buffalo ( Ictiobus cyprinellus ) specimens ( n = 72); Pearson Correlation Coefficient = 0.999, p < 0.0001; Paired t -test: t-Ratio = −1.813, p > 0.05, mean difference = −0.306. The linear regression slope estimate (0.997; 95% CI: [0.986, 1.007]) includes 1.000, and thus was statistically isometric. Hence, either the lapillus or asteriscus otolith can be used to consistently age Bigmouth Buffalo with precision and accuracy. Table 1 Specimen data for 28 Ictiobus cyprinellus samples analyzed using bomb 14 C dating. Table 2 Additional sample data for the radiocarbon analyses of Bigmouth Buffalo. Fig. 3 Age validation of Bigmouth Buffalo ( Ictiobus cyprinellus ). Radiocarbon (Δ 14 C) measurements for the estimated year of otolith core formation of 14 Bigmouth Buffalo collected from the Hudson Bay drainage in 2017–2018. These specimens were estimated to be 3 to 92 years old (birth years of 1926–2015) from annulus counts. Reference curves for bomb-produced 14 C were generated from the only thorough records from otoliths of freshwater fishes. Note the rise of bomb-produced 14 C is similar across these regions of North America. The two sets of connected samples represent radial extraction series for two specimens, sampled from the otolith core through a sequential set of estimated formation dates. Fig. 4 Reference freshwater bomb 14 C data. These data provide temporal constraints on the measured values from Bigmouth Buffalo ( Ictiobus cyprinellus ). These records were from a combination of known-age (juvenile fish) and estimated-age (core extractions from age-validated adults) material to provide the best available bomb-produced 14 C reference record for freshwaters of North America (i.e. salmonids of Arctic lakes (filled circles; Salvelinus namaycush and S. alpinus ) 35 and Freshwater Drum of central North America lakes (open circles; Aplodinotus grunniens 46 and U.S. Fish and Wildlife Service unpublished data)). Radial samples that covered multiple years of growth for each of two Bigmouth Buffalo were also consistent with expected time-specific 14 C levels, and supported annulus-count ages of 90 and 92 years (Table 1 , Fig. 3 ). These fish lived their first three decades prior to atmospheric bomb testing, and carbonate samples from otolith locations corresponding to those pre-bomb years were within expected 14 C levels from previous freshwater studies, but also may have set a new baseline for pre-bomb levels. The most comprehensive extraction series ( n = 10), from ICCY-27, provided the most robust 14 C data. Measured 14 C values aligned strongly with 14 C reference data through the last six decades of the 90-year lifespan of this fish. The core and four sequential radial samples were all pre-bomb, with the 14 C rise occurring as predicted (based on annulus counts) in the late 1950s. The subsequent peak and declining 14 C values were also consistent with reference records.
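The between-structure comparison summarized in Fig. 2 (Pearson correlation, paired t-test, and a regression slope tested against 1) can be reproduced with standard tools. The paper used JMP; the sketch below shows equivalent checks in Python with scipy, on hypothetical paired readings rather than the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired age estimates (years) from thin-sectioned lapillus
# and asteriscus otoliths of the same fish.
lapillus   = np.array([3, 18, 36, 49, 62, 75, 80, 85, 90, 101, 112])
asteriscus = np.array([3, 18, 35, 50, 61, 75, 81, 85, 91, 100, 112])

r, p_r = stats.pearsonr(lapillus, asteriscus)   # agreement strength
t, p_t = stats.ttest_rel(lapillus, asteriscus)  # systematic bias between structures?
slope, intercept, *_ = stats.linregress(lapillus, asteriscus)

print(f"Pearson r = {r:.3f} (p = {p_r:.2g})")
print(f"paired t = {t:.2f} (p = {p_t:.2g}), "
      f"mean diff = {np.mean(asteriscus - lapillus):.2f}")
print(f"regression slope = {slope:.3f}")  # ~1 implies the structures read isometrically
```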
While the pre-bomb otolith core 14 C values were very consistent (–39.4‰ ± 3.3 SD) among individuals, pre-bomb radial samples tended to be more depleted, with a mean of –61.2‰ ± 13.5 SD (Fig. 3 ), which is likely associated with a minor ontogenetic shift in habitat to slightly deeper, more 14 C-depleted, waters after the first 1–2 years of growth. Overall, the findings from bomb 14 C dating using both core and radial samples indicate the age reading protocol is valid and that age estimates approaching and exceeding 110 years are well-supported. In all but one case (223 of 224), Bigmouth Buffalo from the Pelican River Basin were older than the previously reported maximum age of 26 years 4 , with 186 of 224 individuals exceeding 75 years of age (1906–1942 year-classes). The remaining 39 Pelican River Basin fish ranged 18–49 years old (1969–2000 year-classes; Fig. 5 ). The five oldest Bigmouth Buffalo all exceeded 100 years, with the oldest estimated at 112 years old (Fig. 1 ). The 33 Red River Basin fish from Orwell Dam on the Otter Tail River ranged 3–80 years (1938–2015 year-classes; Fig. 5 ). In the Mississippi River Basin, two of the sampled lakes were subject to commercial harvest. At Artichoke Lake on the Minnesota River, 52 Bigmouth Buffalo selected from the harvest ranged 2–43 years (Fig. 5 ). At Lake Minnetaga (Crow River), we obtained 66 Bigmouth Buffalo of unmarketable commercial size that ranged 2–14 years, with 98% of the individuals between 2 and 6 years old (Fig. 5 ). Fish from Tenmile Lake (Minnesota River) came from a bowfishing take, and ranged 13–36 years with a dominant 2005 year-class (Fig. 5 ). Fig. 5 Age distribution. Age structure of 2016–2018 sampled Bigmouth Buffalo ( n = 386) by minor drainage (e.g. Crow) and lake (e.g. Minnetaga). Both age and year-class distributions are given for fish from the Pelican River Basin, where 83% of the fish are older than 75 years. Although >60% of the individuals bowfished from Tenmile Lake were between 13–15 years, none of the 193 bowfished individuals from the Pelican River Basin was younger than 18, and 81% (156) were over 75 years. Growth and reproductive maturity Length-at-age estimates from fish used in this study were analyzed for life history parameters using the von Bertalanffy growth function 31 . Six different models were compared, in which the parameters for asymptotic length and growth rate were constant or varied with sex (Fig. 6 ). The global model, in which both parameters varied with sex and t 0 was unconstrained, was the most parsimonious based on the relative Akaike’s Information Criterion adjusted for small sample size 32 . Fixing t 0 = 0 produced models that ranked third at best. All but the Minnetaga individuals ( n = 66) and unsexed Bigmouth Buffalo (skeletons only, n = 2) were used in this analysis ( n = 318). We excluded Lake Minnetaga fish, 98% of which were younger than seven years, because I. cyprinellus grow relatively quickly in their first decade (Fig. 6 ), and these Lake Minnetaga individuals (collected during the fall) had completed an extra growing season compared to all other spring-collected fish in these age classes. Fig. 6 Growth in length. Total length (cm) vs.
age (years) for sexed Bigmouth Buffalo (excluding Minnetaga) modeled by the highest-ranked von Bertalanffy curve (solid black) with different parameters for asymptotic length ( L ∞ ) and growth rate ( k ) for females ( L ∞ = 88.2, 95% CI [87.2, 89.3], k = 0.084 [0.074, 0.096]) compared to males ( L ∞ = 74.5 [73.5, 75.9], k = 0.103 [0.093, 0.114]) (age at 0 length parameter [ t 0 ] = −4.4 [−5.7, −3.4]). Both females and males have reached 95% of the asymptotic length by age 30 according to this growth model. Note that t 0 is negative due to the absence of 0–1 year old fish in the sample. Fixing t 0 = 0 changes the model (gray curves). Bomb 14 C tested Bigmouth Buffalo are labeled with their sample ID (Table 1 ). Bigmouth Buffalo as small as 30 cm total length are taken by bowfishers in Texas 26 . An estimate of reproductive maturity was calculated at the population level from the gonadosomatic index (GSI = gonad mass divided by total fish mass) of individual Bigmouth Buffalo from Artichoke Lake, our only sample in which many Bigmouth Buffalo were taken on a single date in the spring prior to spawning. Totals of 30 females and 14 males were used for this analysis. Sex-specific age at reproductive maturity was determined at the population level following a published method that uses the point at which 50% of the population has mature gonadal tissue 33 . The GSI threshold at which 50% of this population is estimated to reach sexual maturity was approximately the same for males and females (GSI >4%). This corresponds to an age of ~5–6 years for males, and 8–9 years for females. This GSI threshold, although likely appropriate for males, may well be too low for females. The calculation is influenced by the paucity of Bigmouth Buffalo females collected in the range of 6–12 years, and thus our estimate of the age at female reproductive maturity is likely an underestimate for this population. In addition, GSI values are typically 15–25% near asymptotic size for females, while only 5–7% for males, as is true in this case. Pigmentation variation with age Many Bigmouth Buffalo have unique, long-lasting black or orange markings, and the presence and extent of this pigmentation intensifies with age. In a tagged individual recaptured 9 months later, the position and size of both black and orange spots had not changed (Fig. 7a, b ). These color markings are most accentuated in the oldest individuals. Indeed, logistic regression indicated that the presence of black markings increased in likelihood with age ( χ 2 = 471.425, P < 0.0001). Similarly, orange spots also increased in likelihood with age ( χ 2 = 415.546, P < 0.0001). Black markings were never found on fish younger than 32 years, yet were present on all individuals older than 45 years (Fig. 7c ). Orange spots were present on only two individuals younger than 32 years, and were absent on only four individuals older than 45 years of age. Both black and orange markings vary in position among individuals, but black markings usually have a dorsal orientation and orange spots are usually most concentrated on the head. Fig. 7 Age and pigmentation. a An 81.3 cm total length, 9.53 kg female captured in August 2017 had two prominent orange spots (arrows). The fish was tagged with elastomer and released. b When recaptured 9 months later she had not grown in total length. Comparing a and b , these natural orange spots had not changed. Many smaller orange and black spots not obvious in these full-body images also were unchanged.
Orange scale bar = 50 cm for both a and b . c The presence of black and orange spots on Bigmouth Buffalo increases in likelihood with age. Data points (triangles) represent presence (1) or absence (0) of these markings on a given fish ( n = 384). Inflection points of these logistic regression models are marked with 95% CI. Inset photographs show each type of spot. Discussion Taken together, evidence from thin-sectioned otoliths and bomb 14 C dating revealed that Bigmouth Buffalo can live to 112 years, exceeding all other reports of maximum age for freshwater teleost fishes by nearly 40 years. To date, the oldest age estimates were from otoliths of Freshwater Drum ( Aplodinotus grunniens ) obtained from archeological sites (maximum reported age of 73 years) 34 and cold-adapted Arctic Lake Trout ( Salvelinus namaycush ; maximum age of 62 years) 35 . With ~12,000 species of freshwater teleost fishes 36 , 37 , the longevity of Bigmouth Buffalo can be considered exceptional. The Family Catostomidae contains at least six other species, representing five of 13 genera, reported to have long lifespans: Quillback ( Carpiodes cyprinus , 52 years) 38 , Razorback Sucker ( Xyrauchen texanus , 44 years) 39 , Cui-ui ( Chasmistes cujus , 44 years) 40 , 41 , Lost River Sucker ( Deltistes luxatus , 43 years) 42 , June Sucker ( Chasmistes liorus , 41 years) 43 , and Black Buffalo ( Ictiobus niger , 56 years: a single specimen donated to our research team was 32 years older than the previously reported maximum age) 44 . However, the findings for the Cui-ui and the Lost River Sucker may be underestimates because otoliths were not used. Using otoliths, we show that Bigmouth Buffalo and other catostomids (e.g. Black Buffalo) have life histories that challenge current paradigms. To our knowledge, this is the first age-validation work done on the buffalofishes ( Ictiobus spp.), including a first-time application of bomb 14 C dating to catostomids, and a first-time validation of a freshwater fish lifespan using radial otolith sampling to support ages decades before the bomb 14 C rise 45 . Thus, Bigmouth Buffalo are now the oldest age-validated freshwater fish. While bomb 14 C dating has been widely applied throughout the world, its primary application to fishes has been in the marine environment. Few studies exist that have made thorough assays of the bomb-produced 14 C signal in the freshwater environment (Figs. 4 and 8 ), but as with the mixed layer of most of the world oceans, the timing of the rise of 14 C in freshwater habitats is likely similar across various waterbodies (Fig. 9 ). Certainly, there are potential complications based on the hydrogeology of the water body under consideration, as exist in the marine environment. However, the time-specific rise of bomb-produced 14 C remains an invaluable tool in age validation of freshwater fishes 35 , 46 , 47 , 48 . In this study of Bigmouth Buffalo, the finding of a valid age-reading protocol to ~60 years, coupled with the consistency of 14 C in younger fish with expected 14 C levels and the radial otolith series for two specimens that push the birth years into the 1920s, clearly supports the validity of the age estimates in this study and our conclusion that Bigmouth Buffalo can achieve centenarian longevity. Fig. 8 Map of freshwater 14 C chronologies in North America.
Chronologies have been determined from otoliths of Arctic salmonids 35 ; Freshwater Drum of Lake Winnebago (western white circle) 45 and Lake Ontario and Lake Oneida (easternmost white circles; U.S. Fish and Wildlife Service unpublished data); and Bigmouth Buffalo ( Ictiobus cyprinellus ) of Minnesota (diamonds; present study). Bigmouth Buffalo were also taken from the points marked with an “X”, but these were not analyzed for radiocarbon. The dark-gray shaded area within the USA and Canada represents the endemic range of Bigmouth Buffalo 5 , 62 . Scale bar = 400 km. Fig. 9 Various bomb-produced 14 C records. Note the general similarities and differences in each environment of the Northern Hemisphere. The most rapid increase is atmospheric, due to thermonuclear testing in this environment, for which two 14 C data sets were applicable to North America (above 40° north and an average of various records across all northern latitudes 63 ). Arctic and Central lakes of North America also exhibit a rapid 14 C rise due to a close hydrologic connection via precipitation. The marine environment is exemplified by a coral record from the North Pacific Gyre ( Porites sp. of Kure Atoll) 64 and two fish otolith records from the upwelled environment of the northeastern Pacific (Yelloweye Rockfish, Sebastes ruberrimus ) 61 and the mixed layer of the northwestern Atlantic (various species) 35 . Over their long lives, Bigmouth Buffalo accrue black and orange spots that correlate with age (Fig. 7 ). These irregular pigment markings have not been described previously for Bigmouth Buffalo. We hypothesize that black spots accrue from sun exposure over time (melanosis), and that orange spots accrue as a result of diet. Not only do both markings (taken together) provide a consistent, non-lethal means of estimating age (e.g. the likelihood of an individual being over 75 years old), they have also assisted individual recognition and mark-recapture (Fig. 7 ). Nonetheless, the utility of these markings has only just been realized, and their biological function (if any) is unknown. Interestingly, large brown spots were briefly mentioned as a distinguishing feature of old individuals in a different catostomid, the Cui-ui, from Pyramid Lake, Nevada 40 . This revised life history view of Bigmouth Buffalo has implications for management. Dams on rivers are cited as the leading cause of recruitment failure for Bigmouth Buffalo because they restrict access to spawning habitats and can mute the environmental cues thought to initiate spawning behavior 3 , 5 , 15 , 49 . There are four dams along the Pelican River within the eight-lake sampling area (along a 26 km reach of the river), all of which were constructed in 1936–1938 and have been in place for approximately eight decades 50 . Each of these dams restricts upstream movement of fishes. We found the age distribution of the Bigmouth Buffalo populations in the Pelican River Basin lakes to be heavily skewed toward the oldest fish (i.e., 82% of sampled individuals were born prior to 1939, Fig. 5 ). This is a strong indication of a persistent lack of reproductive recruitment since the time of dam construction. A further threat to Bigmouth Buffalo populations in Minnesota waterbodies is increased angling pressure since 2010, when regulatory changes permitted angling by night archery with artificial lights 14 .
In this form of angling, fish are shot with arrows, catch and release is neither legal nor possible, and there are no bag limits on several endemic taxa including Bigmouth Buffalo and Black Buffalo 14 . Thus, a reevaluation of management decisions concerning Bigmouth Buffalo is required. This new life history evidence points to a precautionary approach to the conservation of buffalofishes in general, and potentially other catostomids, which currently have little or no harvest regulation. Protecting spawning habitat and older individuals from harvest may be necessary for sustaining populations of species like Bigmouth Buffalo whose life history includes asymptotic growth, delayed maturity, great longevity, and episodic recruitment 51 . In practice, endemic taxa are often ignored if their societal value is not commonly appreciated or has yet to be realized. Addressing such neglect is important in this human-dominated era 52 when ecosystems, literally the life-support system of humankind 53 , are destabilized and have lost productivity 54 . For many fishes that are endemic to North America, ecological neglect results from a disregard for the intrinsic value of underutilized taxa, an under-appreciation for life history diversity, and an inappropriate classification as “rough fish” that portrays low-value to the public. A telling case in this regard is the Bigmouth Buffalo. For centuries this species has been valued as a North American food-fish 11 , 12 , 13 , and for decades has served as a direct competitor to several invasive fishes notorious for their deleterious effects on aquatic systems 2 , 5 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 . However, almost all populations of Bigmouth Buffalo in the USA remain unprotected 14 even though they are declining 1 . In contrast, declining populations in Canada have been recognized as a problem, and Bigmouth Buffalo gained Special Concern status in the 1980s 15 . In this study we identify Bigmouth Buffalo as the oldest freshwater teleost, and suggest that urgent conservation measures may be necessary for recovery of old populations with evidence of recruitment failure. Indeed, recent efforts to develop sustainable marine fisheries have emphasized the need to validate lifespans 55 , given the threat of longevity overfishing 56 . As was observed for Pallid Sturgeon ( Scaphirhynchus albus ) 47 , it is likely that reproductive and recruitment characteristics associated with a long lifespan may be crucial for population persistence across times of unfavorable environmental conditions common to freshwater habitats. The Bigmouth Buffalo is capable of living and reproducing to ages that more than quadruple all previous estimates. This finding serves as a prime example of discoveries overlooked and management dilemmas that can arise as a consequence of the ecological neglect of under-appreciated species. Methods Fish collection We have treated all animals in accordance with NDSU guidelines on animal care (IACUC protocol A17007). In the Red River Basin, Bigmouth Buffalo were collected from Otter Tail County, Minnesota, along two tributaries of the main stem of the Red River of the North (henceforth, Red River): the Pelican River and the Otter Tail River. The Pelican River Basin sites included eight lakes: Crystal, Lizzie, Rush, Fish, Pelican, Little Pelican, North Lida, and Prairie. These lakes are along a 26 km reach of the river from which specimens ( n = 224) were taken during 2016–2018 via Fyke net, gill net, hook and line, and bowfishing. 
Otter Tail River Basin individuals ( n = 33) came from a single site below Orwell Dam in April of 2018 via hook and line. Specimens were immediately measured to obtain wet mass (±1 g) and total length (±0.1 cm), photographed laterally with a scale bar, and then dissected to obtain gonadal tissue for sex determination and mass (±0.1 g). In the Mississippi River Basin, Bigmouth Buffalo were collected along two tributaries of the Mississippi River: the Crow River and the Minnesota River. Fish were obtained from a commercial harvest on 4 May 2017 from Artichoke Lake ( n = 52) in Big Stone and Swift County, Minnesota (on the Minnesota River); and on 22 Sept 2017 from Lake Minnetaga ( n = 66) in Kandiyohi County, Minnesota (on the Crow River). An additional 11 specimens were obtained from a bowfishing take on Tenmile Lake, Otter Tail County, Minnesota (on the Minnesota River) in May of 2018. For Artichoke Lake fish, measurements, photographs, and sex determination were obtained after fish had been frozen and thawed. For Lake Minnetaga and Tenmile Lake, fish data were obtained as previously described for the Red River Basin specimens, except that Lake Minnetaga specimens were dissected for sex determination after being frozen and thawed. Otolith preparation and age analysis Otoliths were removed from fish by first exposing the ventral surface of the cranium, through the otic bullae under the operculum. At least one otolith was obtained from every fish dissected ( n = 386) and in most cases (77%) the complete set of six otoliths (asterisci, sagittae, and lapilli) was collected. Following extraction, the otoliths were gently removed from the labyrinth organ with forceps and placed in 1.5 ml plastic microvials pre-filled with water to prevent any residual tissue or fluid from drying to the surfaces. All collected otoliths were rinsed and submersed in distilled water to photograph the whole otolith set at 10X with an Olympus® SZH10 dissecting microscope using transmitted light in dark-field mode. The orientation of the nuclear transect to be thin-sectioned from the whole otolith was determined from these images. Otoliths were then dried for 30 min at 55 °C and lapilli were weighed (±0.001 g) using a CAHN Electrobalance®. Only the lapilli were weighed because they produce the most reliable weight measures. Of all Bigmouth Buffalo otoliths, lapilli are the largest, least fragile, and least likely to hold residual endolymph. Sagittae are the smallest otoliths in Bigmouth Buffalo and fracture easily, while asterisci have grooves that are difficult to thoroughly clean of non-otolith material (both factors that led to unreliable weight measures). Weighed otoliths were embedded in ACE® quick-setting epoxy within 1.5 cm 3 compartments (lined with petroleum jelly) in a plastic tray. After the epoxy hardened, the epoxy block was placed in a Buehler IsoMet™ 1000 low-speed saw equipped with diamond-embedded thin-sectioning blades to obtain 300–500 μm sections via the wafer method. A total of 557 otoliths (315 asterisci, 241 lapilli, and 1 sagittal) were thin-sectioned to obtain age estimates for these 386 Bigmouth Buffalo. Sagittae are the most difficult otoliths to section in Bigmouth Buffalo, and thus were rarely used. Sections of asterisci and lapilli from the same individual produce essentially the same age estimate for the entire range of ages (Figs. 1 and 2 ), proving that either structure can be used. For 165 individuals, only asterisci were thin sectioned, and for 114 specimens only lapilli. 
The remaining specimens had both asterisci and lapilli ( n = 106), or lapilli, asterisci, and sagittae ( n = 1), thin-sectioned to provide comparison opportunities within individual fish. In addition, both asterisci from a single specimen were sectioned for 25 individuals, and both lapilli for 11 individuals. Although many sections were taken, a small portion (~15%) were too fractured or structurally polymorphic to be readable. Nonetheless, at least one readable section from every Bigmouth Buffalo in this study was obtained. Thin sections of the otoliths were mounted on a glass microscope slide, immersed in mineral oil to enhance visibility, and photographed at 75× under a compound microscope using transmitted light. Multiple images per thin section were required to provide a composite image of the whole otolith section at this magnification. Images were stitched together using Adobe Photoshop software to create the high-resolution composite image of the whole thin section. The composite images were then examined for annuli that could be quantified and were digitally marked (Fig. 1 ). The best otolith sections were assigned ages by multiple readers, with consensus readings used to determine the final age assigned to each specimen. First, a primary and a secondary reader independently marked annuli on duplicate images of the thin section. Discrepant annulus counts between the primary and secondary reader were identified using a minimum criterion of 1 year per decade of age. For example, reader counts for individuals scored 0–9 years of age were deemed discrepant if the primary and secondary reader scores differed by more than ±1 annulus count. This approach was used up through individuals scored 110–119 years (deemed discrepant if the primary and secondary reader scores differed by more than ±12 annulus counts). If reader scores fell into separate decades, the younger age group criterion was used. Images of otoliths identified as discrepant based on these criteria were then either independently analyzed by a third reader ( n = 29), or another otolith section already available from the same fish was aged by both the primary and secondary readers. If consensus scores were still not obtained between readers, then yet another otolith was thin-sectioned from that specimen and again scored independently by the primary and secondary readers, at which point all age estimates were resolved. For otoliths whose annulus counts were not identical between readers but were not identified as discrepant (e.g., scored 12 by the primary and 13 by the secondary reader), a final determination was made by the primary reader. The overall between-reader precision (primary and secondary) was a coefficient of variation (CV) of ~5.6%. This precision varied with age, and the CV was highest (i.e., precision lowest) in the youngest group of fish, as expected. For individuals across each of the 12 decadal age groups in this study (from 0–9 to 110–119 years), the precision was CV ~10.4, 5.7, 4.0, 4.5, 4.5, 3.6, NA, 3.3, 2.9, 3.4, 2.7, and 3.9%, respectively. Bomb radiocarbon dating We selected for radiocarbon analysis 15 lapillus otoliths from Bigmouth Buffalo previously aged via thin-sectioned asteriscus or lapillus (or both) annulus counts. These fish spanned the range of chronological dates required for this type of age validation work (Table 1 ).
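The between-reader precision quoted above is an average coefficient of variation across fish, in the spirit of Chang's (1982) ageing-precision index. A minimal sketch, with hypothetical reader counts standing in for the study's data:

```python
import numpy as np

def between_reader_cv(reads):
    """Mean coefficient of variation across fish for replicate age reads.

    `reads` is an (n_fish, n_readers) array of annulus counts. For each
    fish, CV = SD of its reads / mean of its reads; the index is the
    average CV across fish (Chang 1982), expressed as a percentage.
    """
    reads = np.asarray(reads, dtype=float)
    per_fish_cv = reads.std(axis=1, ddof=1) / reads.mean(axis=1)
    return 100.0 * per_fish_cv.mean()

# Hypothetical primary/secondary reader counts for six fish
reads = [[5, 6], [22, 22], [41, 43], [78, 75], [90, 92], [110, 108]]
print(f"between-reader CV = {between_reader_cv(reads):.1f}%")
```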
Typically, a selection of birth years ranging from the pre-bomb period (earlier than ~1955) to the post-peak decline period (more recent than the 1970s) is used to trace the bomb-produced 14 C signal through the lifespan of the species, and to potentially provide diagnostic ages from birth years associated with the rapid rise of 14 C in the 1950s and 1960s. For dating, we chose a lapillus matching the thin-sectioned asteriscus or lapillus (or both) used for age determination (Table 1 ), because lapilli are the largest otoliths by mass for Bigmouth Buffalo and thus were most likely to provide a sufficient amount of calcium carbonate for 14 C analyses. The 15 lapillus otoliths selected for bomb 14 C dating were sectioned in a similar manner to the previously described thin sectioning, except that they were serially sectioned using a single blade to a thickness of ~1 mm. We selected a section that contained the desired core (the first 1–2 years of growth), with a planar orientation normal to the growth layer structure, such that the growth layers were not tilted and the micromilled material would include only the targeted growth years. A section thickness of 1 mm was necessary to provide greater material depth for micromilling and sufficient mass for 14 C analysis. Otoliths were micromilled using a New Wave Research micromilling machine to a depth of ~600–800 μm, providing ~0.5–1.3 mg of carbonate per sample (Table 1 ). A total of 15 specimens spanning the bomb 14 C chronology were milled for the core region of the otolith, representing the first 1–2 years of growth, and for 13 of these, only the core was extracted. From the two additional individuals (both estimated to have hatched prior to atmospheric nuclear testing in the 1950s and 1960s), multiple samples were extracted per otolith in a radial pattern that began after the core extraction and led into more recent years of formation (Table 1 ). The goal was to detect the location in the otolith section (year or years of formation) where the time-specific rise of bomb-produced 14 C occurred (~1955). This approach can validate age estimates exceeding the minimum age indicated by pre-bomb radiocarbon levels in the otolith core. The radial extractions were assigned a mean year of formation by overlaying the annulus structure (an image from the aged lapillus section) on an image of the path extracted by the micromill. We submitted 28 extracted otolith samples as carbonate to the National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS), Woods Hole Oceanographic Institution in Woods Hole, Massachusetts, for 14 C analysis. Radiocarbon measurements were reported by NOSAMS as Fraction Modern (the measured deviation of the 14 C/ 12 C ratio from Modern). Modern is defined as 95% of the 14 C concentration of the National Bureau of Standards Oxalic Acid I standard (SRM 4990B) normalized to δ 13 C VPDB (–19‰) in 1950 AD (VPDB = Vienna Pee Dee Belemnite standard) 57 . Radiocarbon results were corrected for isotopic fractionation using a value measured concurrently during the accelerator mass spectrometry analysis, and these data are reported here as F 14 C. These values were date-corrected based on the estimated year of formation and are reported 58 as Δ 14 C. Stable isotope δ 13 C measurements were made on a split of the CO 2 generated during acid hydrolysis. These values are robust and can be used to infer carbon sources in the formation of the otolith carbonate.
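The date correction described above follows the standard Stuiver and Polach (1977) convention, in which Fraction Modern is decay-corrected from 1950 to the sample's year of formation using the 14 C mean-life of 8267 years. A minimal sketch, with a hypothetical F 14 C value chosen so the output lands near the pre-bomb means reported above:

```python
import math

def delta14C(f14c, year_of_formation):
    """Convert Fraction Modern (F14C) to age-corrected Delta-14C (per mil).

    Uses the Stuiver & Polach (1977) convention: sample activity is
    decay-corrected from 1950 to the year of formation with the 14C
    mean-life of 8267 years.
    """
    decay = math.exp((1950.0 - year_of_formation) / 8267.0)
    return (f14c * decay - 1.0) * 1000.0

# Example: a hypothetical otolith core with F14C = 0.96 formed in 1935
print(f"{delta14C(0.96, 1935):.1f} per mil")  # ~-38, near the pre-bomb values above
```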
Measured Δ 14 C values were used to determine the validity of age estimates by comparing the purported year of formation (birth year), calculated from the collection year and estimated age, against regional Δ 14 C references (Figs. 4 , 8 , and 9 ). Temporal alignment of the measured Δ 14 C values from otolith material with regional Δ 14 C reference records from otoliths of other freshwater fishes provided an independent basis for determining fish age, and for evaluating our age reading protocol for Bigmouth Buffalo based on otolith annulus counts. The only thorough 14 C reference records available for the freshwater environment of North America were from Arctic lakes and mid-continent lakes near the Great Lakes, because very little work has been done in this regard (Fig. 8 ). Freshwater radiocarbon references for North America Overall, bomb radiocarbon dating is considered one of the best methods for age-validating long-lived fishes 30 . The radiocarbon ( 14 C) data used as reference material to validate the age and longevity of Bigmouth Buffalo ( Ictiobus cyprinellus ) in this study were from a series of rare freshwater sources that used otoliths of two fish species from widely separated regions of North America (Fig. 8 ). These bomb 14 C records were from otoliths of either known-age (juvenile fish) or aged adults (otolith cores) from salmonids of Arctic lakes ( Salvelinus namaycush and S. alpinus ) 35 and Freshwater Drum of central North America lakes ( Aplodinotus grunniens 45 and U.S. Fish and Wildlife Service - Northeast Fishery Center, Lamar, Pennsylvania, unpublished data). These data sets were fitted with a Loess curve (spline interpolation smoothing parameter = 0.4, two-parameter polynomial; SigmaPlot v.11.2) to describe the central tendency of each time series (Fig. 4 ). A caveat of the curve fitting is that one Freshwater Drum specimen from Lake Ontario (2012) was elevated relative to all others from Oneida Lake in 2012–2014 and was considered more likely to be similar to the Arctic references due to the hydrography of the Laurentian Basin. This 14 C value may have been elevated due to increased atmospheric exposure from greater water mass residence times in the Great Lakes (relative to Oneida Lake), along with other catchment factors associated with the delivery of terrestrial carbon sources that can be 14 C-enriched 59 . The bomb-produced changes in freshwater 14 C for North America begin with what appear to be variable 14 C levels in the pre-bomb period (Δ 14 C ranged from approximately –80‰ to –125‰ before 1955) and become coincident as the sharp bomb-produced 14 C rise begins near 1955 (Fig. 4 ). At mid-rise, near 1960, the regional records (Arctic vs. Central North America lakes) start to diverge and then exhibit differences in peak amplitude and subsequent decline. A separation of the records is maintained through the decline period of the 1970s to the 2010s, but the signals appear to dovetail toward the most recent years, provided the elevated specimen from Lake Ontario is an accurate reflection of regional variability. Regardless of the potential for minor variability in 14 C levels, the 14 C rise due to atmospheric testing provides a reliable marker for testing the validity of age estimates, with further support from the generally consistent pattern of the overall rise and fall of bomb-produced 14 C.
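The Loess summaries described above were produced in SigmaPlot; a rough Python analogue uses the lowess smoother from statsmodels. Note that this smoother fits local straight lines rather than the two-parameter polynomial used in the paper, so it is only an approximation, and the data points below are invented for illustration.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical reference record: otolith birth years and Delta-14C (per mil)
years = np.array([1945, 1950, 1955, 1958, 1961, 1964, 1968, 1975, 1985, 2000, 2014])
d14c  = np.array([-100,  -95,  -80,   20,  180,  310,  280,  220,  150,   60,   20])

# frac plays the role of the Loess smoothing parameter (fraction of points
# used in each local regression); 0.4 mirrors the value used in the paper.
smoothed = lowess(d14c, years, frac=0.4)  # returns sorted (x, fitted y) pairs
for x, y in smoothed:
    print(f"{x:.0f}: {y:6.1f}")
```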
Very little work has been done on determining the full bomb-produced 14 C signal in freshwater environments; most has been done in the marine environment (usually in the mixed layer, using various forms of biogenic carbonate). There are differences between the bomb-produced 14 C signals in these environments, primarily because of the way 14 CO 2 from nuclear testing enters the hydrologic system. While input of the bomb 14 C signal to the ocean system relies mostly on air-sea diffusion at the sea surface, the freshwater environment has a more direct advection of bomb 14 C from the atmosphere to rivers and lakes via precipitation. Hence, the hydrology of the freshwater environment leads to a more synchronous link to 14 C changes in the atmosphere and exhibits a more rapid 14 C rise than the marine environment (Fig. 9 ). The 14 C peaks expressed for the Arctic and central North America lakes may be artificially muted because the actual peak dates may not have been sampled 35 . Nonetheless, the marine bomb-produced 14 C signal is usually attenuated and phase-lagged relative to both freshwater and atmospheric 14 C records (Fig. 9 ). The exceptions are either close-in fallout that generated a strong regional 14 C signal in the marine environment 60 , or places where there are 14 C-depleted sources, from either unique hydrogeology (karst topography; AH Andrews, pers. observation) or upwelled waters of the deep sea 61 . For the existing freshwater 14 C records, it is the temporal similarities, despite differences in amplitude, that support tracing the bomb-produced 14 C signal into other freshwater environments of North America (e.g. the river basins of Minnesota inhabited by Bigmouth Buffalo). These temporal constraints on otolith 14 C measurements can be used to validate age estimates. In some cases, otolith core material cannot be used as a strong record of support for determining the age of other organisms because of circular reasoning: a fish of unknown age that was age-validated from a reference 14 C record should not in turn be used as a reference to validate the ages of other otolith samples of unknown age. However, this is avoidable when otolith annuli are very well defined and there is little or nothing else to refer to as a regional 14 C reference. If the temporal nature of the nearest regional 14 C signal matches the otolith material's signal (its position in time based on annulus counts from the otolith), then an assumption can be made that adults of the species can provide a bomb-produced 14 C timeline where none existed. This is the case for both the Arctic salmonids and the Freshwater Drum used as references in the current study on Bigmouth Buffalo. Known-age juvenile fish and cored adults with well-defined otolith annuli produced strong evidence of the regional 14 C signal of freshwater environments in North America. These data provide a strong basis for validating other freshwater fishes in this region (e.g. Bigmouth Buffalo), and they are the most complete records for this environment. The only other records for freshwater environments of North America come from Lake Sturgeon ( Acipenser fulvescens ) 46 and Pallid Sturgeon ( Scaphirhynchus albus ) 47 , but these 14 C records are not as complete. Mark recapture Bigmouth Buffalo from Big Pelican and Little Pelican Lakes (Otter Tail County, Minnesota) were captured during 2011–2018 by hook-and-line.
Captured individuals were photographed, measured (total length and wet mass, as described previously), sexed (based on visual examination of the urogenital opening, presence of tubercles, or expression of gametes (or combination thereof)), tagged at initial capture using either safety pins or Visible Implant Elastomer tags (Northwest Marine Technology, Inc.), and released in good condition. Statistics and reproducibility We used JMP Pro Statistical Discovery™ Software (Version 13.0, SAS Institute, Inc. 2014) for statistical analysis and graphical output. SigmaPlot (Version 11.2) was used to render smoothed curve fits (Loess function, spline interpolation smoothing parameter = 0.4, two-parameter polynomial) to the regional 14 C reference data (Figs. 3 , 4 ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
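For the pigmentation analysis reported in the Results (logistic regressions of spot presence on age, with inflection points marking the age at 50% probability), an equivalent fit can be sketched with statsmodels. The data below are hypothetical values mimicking the reported pattern (black markings absent below ~32 years, ubiquitous above ~45 years), not the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: fish age (years) and presence (1) / absence (0)
# of black markings.
age = np.array([5, 12, 20, 28, 33, 36, 40, 44, 47, 60, 75, 90, 105])
black = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1])

X = sm.add_constant(age)            # design matrix: intercept + age
fit = sm.Logit(black, X).fit(disp=0)
b0, b1 = fit.params
print(f"inflection age = {-b0 / b1:.1f} years")  # age at 50% probability
```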
Research recently completed at North Dakota State University has proven that the Bigmouth Buffalo (Ictiobus cyprinellus), a fish native to North America, lives more than eight decades longer than previously thought. The study published in Communications Biology documents several individuals more than 100 years of age, with one at 112 years, which more than quadruples all previous age estimates for this species. In addition, many populations were documented to be comprised largely (85-90%) of individuals more than 80 years old, suggesting unsuccessful reproduction since the 1930s. The Bigmouth Buffalo is now known as the longest-lived freshwater teleost (a group of approximately 12,000 species) and the oldest age-validated freshwater fish (a group of about 14,000 species). A research team led by Alec R. Lackmann, Ph.D., Department of Biological Sciences at NDSU, recently published their work about aging the fish (Lackmann, Andrews, Butler, Bielak-Lackmann, & Clark 2019. "Bigmouth Buffalo Ictiobus cyprinellus sets freshwater teleost record as improved age analysis reveals centenarian longevity." Communications Biology). Methodology To determine the age of the fish, the team employed a relatively new approach utilizing thin-sectioned otoliths (stones from the inner ears of the fish) and carbon dating of a sequential set of microsamples from individual otoliths. Otoliths are present in the ear anatomy of most fish species, and they grow continuously as a fish ages. Traditional methods of aging fish by counting rings on scales and fin rays (similar to the process of aging trees) placed a typical age for Bigmouth Buffalo at 5 to 20 years. One very small study of Bigmouth Buffalo in Oklahoma published in 1999 found a maximum age of 26 years via thin-sectioned otoliths. Lackmann et al. 2019 utilized bomb radiocarbon dating to validate their otolith readings of Bigmouth Buffalo from Minnesota. This radiocarbon dating method relies on time-specific markers in living creatures that are a result of the bomb-produced radiocarbon from atmospheric thermonuclear testing in the 1950s and 1960s. Using this method, the team conclusively validated their otolith age readings of Bigmouth Buffalo. Nearly 400 fish were aged in the study, with five exceeding 100 years and the oldest at 112 years of age. Nearly 200 fish were aged in their 80s or 90s. [Image: 90-year-old male Bigmouth Buffalo showing orange pigmentation spots. Credit: Alec Lackmann] Cultural impact Found in 22 states across the upper Midwest and into Canada, Bigmouth Buffalo have historically been a part of human culture. The Minnesota county and city name of Kandiyohi means "where the buffalofish come;" the city of Buffalo, MN is named after the fish; and Lewis and Clark harvested them during their famous journey. The fish has been a commercially valued product since the 1800s, and the fishery in the upper Mississippi Basin is valued at more than $1 million annually. Despite their historic commercial food importance, today Bigmouth Buffalo are often incorrectly grouped with invasive "rough fish" such as the Bighead Carp, Silver Carp, and Common Carp. While superficially similar in appearance to those species, Bigmouth Buffalo actually serve an important ecological role by displacing and keeping these invasive species in check, and they belong to an entirely different family. In addition, the filter-feeding Bigmouth Buffalo consume other invasive creatures such as larval-stage Zebra Mussels (Lackmann et al. in preparation).
Virtually unregulated harvest While Bigmouth Buffalo were given special concern status in Canada in the 1980s, where their harvest remains regulated to this day, harvest in the United States is almost entirely unregulated, with no limits in most states. Traditionally unpopular with anglers, the fish has rapidly become a premier (and easy) target for bowfishers, given legal changes in the past 5–7 years that have allowed night bowfishing and an extended season that includes their vulnerable spawning period. "We need to start recognizing Bigmouth Buffalo for the native, ecological asset that they are," said Lackmann. "Our neglect of under-appreciated, native species needs to be addressed immediately. For example, the term 'rough fish' should be eliminated from harvest regulation terminology because it promotes an inappropriate, negative image for the native species lumped together with real invasives. Our research has shown that the Bigmouth Buffalo is one of the longest-lived vertebrates. That alone is something worth preserving and understanding. Among freshwater fish, the Bigmouth Buffalo is quite exceptional, and they deserve some protection like many other native species in North America have already achieved. The Bigmouth Buffalo could be treasured one day."
10.1038/s42003-019-0452-0
Medicine
Scientists identify sensor underlying mechanical itch stimulus
Rose Z. Hill et al., PIEZO1 transduces mechanical itch in mice, Nature (2022). DOI: 10.1038/s41586-022-04860-5 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-04860-5
https://medicalxpress.com/news/2022-06-scientists-sensor-underlying-mechanical-stimulus.html
Abstract Itch triggers scratching, a behavioural defence mechanism that aids in the removal of harmful irritants and parasites 1 . Chemical itch is triggered by many endogenous and exogenous cues, such as pro-inflammatory histamine, which is released during an allergic reaction 1 . Mechanical itch can be triggered by light sensations such as wool fibres or a crawling insect 2 . In contrast to chemical itch pathways, which have been extensively studied, the mechanisms that underlie the transduction of mechanical itch are largely unknown. Here we show that the mechanically activated ion channel PIEZO1 (ref. 3 ) is selectively expressed by itch-specific sensory neurons and is required for their mechanically activated currents. Loss of PIEZO1 function in peripheral neurons greatly reduces mechanically evoked scratching behaviours and both acute and chronic itch-evoked sensitization. Finally, mice expressing a gain-of-function Piezo1 allele 4 exhibit enhanced mechanical itch behaviours. Our studies reveal the polymodal nature of itch sensory neurons and identify a role for PIEZO1 in the sensation of itch. Main Discrete subsets of somatosensory neurons within the dorsal root ganglion (DRG) and trigeminal ganglion selectively drive itch behaviours in mice 5 , 6 . These pruriceptors are thought to be chemosensory rather than mechanosensory and, as such, are selectively implicated in chemical itch 1 . Chemical pruriceptors are marked by the expression of the neuropeptide genes somatostatin ( Sst ) and natriuretic polypeptide precursor B ( Nppb ), or the MAS-related G-protein coupled receptor A3 ( Mrgpra3 ), along with other marker genes identified from single-cell RNA sequencing (RNA-seq) studies 7 , 8 , 9 . Previous work elucidated chemical itch transduction pathways in these neurons 5 , 10 , 11 , 12 , 13 . By contrast, the molecular and cellular basis of mechanical itch in the periphery is relatively unknown, although its circuitry is well described in the spinal cord. For example, mechanical itch can be triggered by the activation of Toll-like receptor 5-positive Aβ-low threshold mechanoreceptors (LTMRs) and the engagement of urocortin 3- and neuropeptide Y receptor 1-positive excitatory spinal pathways, and/or by the inhibition of spinal neuropeptide Y-positive inhibitory interneurons that receive LTMR input 14 , 15 , 16 . Furthermore, age-dependent loss of Merkel cell–neurite complexes enhances mechanical itch, presumably through the loss of LTMR-dependent inhibition of itch spinal circuitry 17 . The contributions of other somatosensory neurons, including chemical pruriceptors, to mechanical itch and itch sensitization are unknown. Itch neurons express functional PIEZO1 Early work on the expression of Piezo genes unequivocally demonstrated that Piezo2 transcript is present at high levels in somatosensory neurons and suggested that Piezo1 was expressed at only background levels 3 , 18 . In response to the publication of several single-cell RNA-seq datasets reporting low but detectable expression of Piezo1 transcript in mouse DRG neurons 7 , 8 , 19 (Extended Data Fig. 1 ), we revisited our previous experiments using single-molecule fluorescence in situ hybridization (smFISH) to more thoroughly characterize the expression of Piezo1 in sensory neurons. Notably, we observed that Piezo1 is expressed in 92% of Nppb + DRG neurons (Fig. 1a–f and Extended Data Fig. 2a–d ), and in non-neuronal cells that are likely to comprise vascular endothelium 20 . 
This specific expression pattern suggested that PIEZO1 has a role in itch. We also observed the expression of Piezo1 in a subset of MAS-related G-protein coupled receptor D ( Mrgprd + ) neurons (Extended Data Fig. 2a ), which have previously been implicated in mechanical pain and in chemical itch evoked by the compound β-alanine 21 , 22 ; however, we did not observe Piezo1 in the Mrgpra3 + chloroquine-sensitive chemical itch neurons (Fig. 1d–f and Extended Data Fig. 3a ). By contrast, Piezo2 was expressed in a smaller percentage of Nppb + neurons (Fig. 1d–f and Extended Data Fig. 3b ). Substantial differences have been identified between human and mouse DRG neurons 23 , 24 and the role—if any—of NPPB in human itch transmission is unknown. smFISH of sections from a human DRG revealed PIEZO1 transcript in 83.6% of NPPB + neurons (Fig. 1h and Extended Data Fig. 4a,b ), suggestive of a conserved pattern of expression. Using a mouse line that expresses a PIEZO1 tdTomato C-terminal fusion protein 25 , we observed robust expression of PIEZO1 tdTomato in platelet endothelial cell adhesion molecule 1 (PECAM1 + ) vascular endothelial cells of the DRG and trigeminal ganglion capillaries (as expected) 20 , as well as within a subset of neuronal cell bodies and nerve fibres, consistent with the expression of PIEZO1 protein in mouse somatosensory neurons (Fig. 1i–l and Extended Data Fig. 5a–c ). Fig. 1: PIEZO1 is expressed in mouse and human putative itch receptors. a – c , Representative images (five images from two mice) of sectioned mouse DRG smFISH for Piezo1 ( a ), Nppb ( b ) and merged with DAPI ( c ). White arrowheads indicate Piezo1 + or Nppb + cells. d , Quantification of mouse DRG smFISH images showing the percentage of cells expressing a given marker (bar labels) that were co-labelled with Piezo1 transcript. The number of analysed neurons is indicated above the bar (four to six images per marker from two mice). e , Data from d presented as the percentage of Piezo1 + neurons that co-express a given marker. f , Percentage of Nppb + neurons expressing Piezo1 or Piezo2 . g , Comparison of Piezo2 expression in Nppb + versus Mrgpra3 + neurons. h , Quantification of human smFISH images (seven to eight images per marker from one donor; see Extended Data Fig. 4a,b ). i – l , Representative images of sectioned mouse DRGs labelled with antibodies against tdTomato (images show PIEZO1 ( i ), PECAM1 ( j ), neurofilament H (NEFH; k ) and the merged image ( l )). Asterisks indicate blood vessels and arrowheads indicate a PIEZO1 + neuron and nerve fibre. The experiment was repeated one additional time. All images are presented as maximum intensity z -projections of confocal images. Scale bars, 100 µm. We next sought to establish whether PIEZO1 is functional in somatosensory neurons. Unlike PIEZO2, which to date has no known chemical activators, PIEZO1 can be activated in vitro and in vivo using the small molecule Yoda1, which sensitizes PIEZO1 currents elicited by mechanical stimuli and triggers calcium influx on its own 26 . To assess the effects of Yoda1 on sensory neuron physiology, we turned to ratiometric calcium imaging in dissociated cultured mouse DRG neurons. We observed that Yoda1 triggered calcium transients in approximately 20% of cultured DRG neurons (Fig. 2a ) and most of these cells were responsive to the itch compounds β-alanine (a MRGPRD agonist) and/or histamine, which activates a broad subset of TRPV1 + somatosensory neurons 27 including Nppb + neurons 5 , 11 (Fig. 2b,c ).
Yoda1-dependent calcium transients were lost in neurons from Piezo1 fl/fl ;Pirt-Cre +/− mice, in which the Cre driver targets the vast majority of peripheral sensory neurons 28 (Fig. 2d ). Conversely, we tested whether neurons from mice expressing a gain-of-function (GOF) Piezo1 allele (PIEZO1 GOF ) equivalent to a human hereditary xerocytosis mutation 4 might exhibit enhanced Yoda1 responses. We observed modest increases in the area under the curve and the peak amplitude of Yoda1 calcium transients in PIEZO1 GOF DRG neurons (Fig. 2e,f ). Fig. 2: PIEZO1 is functionally expressed in a subset of putative itch neurons. a , Percentage of wild-type neurons responding to compounds (2,682 neurons from 3 mice). AITC, allyl isothiocyanate. b , Venn diagrams of response overlap from combined wild-type data in a and d , indicating the percentage of total neurons. c , Traces of representative calcium signals from b . Arrowheads indicate compound addition. d , Percentage of neurons responding in Piezo1 fl/fl ;Pirt Cre−/− (wild type; WT) versus Piezo1 fl/fl ;Pirt Cre+/− (KO) neurons (1,883 WT and 2,218 KO neurons from 2 mice per genotype). e , Area under the curve of Piezo1 +/+ versus Piezo1 GOF/GOF responses to 20 µM Yoda1 (Mann–Whitney: **** P < 0.0001, U = 102,979; n = 347 +/+ and 711 GOF/GOF neurons from 2 mice per genotype). f , Peak normalized F 340 / F 380 ratio of Piezo1 +/+ versus Piezo1 GOF/GOF mice from data in e (Mann–Whitney: * P = 0.0142, U = 121,046; n = 347 +/+ and 768 GOF/GOF neurons from 2 mice). In e , f , the centre line denotes the median, the boxes are the 25th and 75th percentiles and the whiskers indicate 1.5 times the interquartile range. g , Representative immunohistochemistry (IHC) section (of three sections from two mice) of Ai9 fl/fl ;Sst Cre+/ − DRG neurons showing native tdTomato (expressed in SST + cells) with the indicated markers (scale bar, 100 µm). h , Dissociated Ai9 fl/fl ;Sst Cre+/ − DRG neurons. i – k , Summary of mechanically activated (MA) current inactivation kinetics in whole-cell poke experiments after nucleofection of Ai9 fl/fl ;Sst Cre+/ − DRG neurons with the indicated siRNA mix against non-targeting control siRNA or against Piezo1 ( i ; **** P < 0.0001, χ 2 = 23.92, degrees of freedom (df) = 3; n = 28 control and 32 Piezo1 siRNA cells), Piezo2 ( j ; * P = 0.0130, χ 2 = 10.78, df = 3; n = 32 control and 31 Piezo2 siRNA cells) and Piezo1 + Piezo2 ( k ; **** P < 0.0001, χ 2 = 36.21, df = 3; n = 30 control and 33 Piezo1 + Piezo2 siRNA cells). Chi-squared ( χ 2 ) tests were performed. IA, intermediately adapting; NR, non-responsive; RA, rapidly adapting; SA, slowly adapting. l – n , Representative 150-ms indentation traces (top) and MA currents (bottom) from i – j with nucleofection of indicated siRNA (from two mice each). All statistical tests are two-tailed where applicable. n indicates biological replicates (cells). SST + neurons have PIEZO1-dependent currents We then investigated whether PIEZO1-expressing somatosensory neurons are inherently mechanosensitive. Although chemical pruriceptors are generally believed to be mechanically insensitive, a handful of studies suggest that they may exhibit mechanosensitivity under certain conditions 29 . As Piezo1 is expressed in only a subset of Mrgprd + sensory neurons, we reasoned that a phenotype driven by the perturbation of Piezo1 expression in these cells might be missed.
By contrast, Piezo1 was expressed in the vast majority of Nppb + neurons, and these cells are amenable to genetic targeting with somatostatin Cre ( Sst Cre ; Fig. 2g ), as Sst is expressed in virtually all mouse Nppb + neurons 5 , 19 . We validated the expression of Piezo1 within Ai9 fl/fl ;Sst Cre+/− tdTomato + cells by RNAscope and found that 73.9% of tdTomato + cells co-expressed Piezo1 (Extended Data Fig. 6a ). We performed whole-cell electrophysiology to examine mechanically activated currents in tdTomato + cultured DRG neurons from Ai9 fl/fl ;Sst Cre+/− mice (Fig. 2h ). To perturb the function of mechanically activated channels, we nucleofected pooled small interfering RNAs (siRNAs) against Piezo1 and/or Piezo2 or non-targeting control siRNA 3 , 18 , 30 . In all experiments, we co-nucleofected a plasmid to drive the cytosolic expression of green fluorescent protein (GFP) and recorded from GFP + tdTomato + cells. In 45 out of 74 mechanosensitive cells nucleofected with the non-targeting siRNA ( n = 90 recordings), we observed intermediately adapting currents (10 ms < τ inactivation (the time constant of current inactivation) < 30 ms) in response to controlled indentation with a blunt glass probe, with 24 out of 74 mechanosensitive cells exhibiting rapidly adapting currents ( τ inactivation ≤ 10 ms; Figs. 2i–k,n and Extended Data Fig. 6b–d ). With Piezo1 siRNA nucleofection, intermediately adapting currents were largely lost, and rapidly adapting currents were retained (Fig. 2i,l,n and Extended Data Fig. 6c ). This is consistent with published data that show that PIEZO1-dependent currents have a larger τ inactivation than PIEZO2-dependent currents in endogenous and heterologous expression systems 3 , 31 . By contrast, after nucleofection of Piezo2 siRNA, the rapidly adapting responses were lost and the intermediately adapting responses remained (Fig. 2j,m,n and Extended Data Fig. 6b–d ). This is consistent with the role of PIEZO2 in mediating nearly all rapidly adapting currents in DRG neurons 18 . Of note, some slowly adapting currents ( τ inactivation ≥ 30 ms) were observed after the knockdown of Piezo2 , suggestive of the unmasking of slowly adapting responses after loss of the rapidly adapting channel, as slowly adapting currents were infrequently observed in control cells. With simultaneous knockdown of both Piezo1 and Piezo2 , 32 out of 33 cells were unresponsive to mechanical stimuli (Fig. 2k ). Thus, PIEZO1 is the predominant mediator of mechanically activated currents in SST + DRG neurons, with PIEZO2 contributing to rapidly adapting currents. Hypersensitivity to mechanical itch (alloknesis) arises after the injection of histamine into the skin of mice and humans 32 , 33 . Histamine-responsive DRG neurons are primarily composed of dedicated TRPV1 + itch receptors, including Nppb + neurons as well as Mrgpra3 + neurons 11 . We investigated whether histamine could directly sensitize PIEZO1-dependent mechanically activated currents 26 in Ai9 fl/fl ;Sst Cre+/− DRG neurons. A five-minute exposure to 100 µM histamine caused a small increase in the τ inactivation of presumptive PIEZO1-dependent mechanically activated currents that were also sensitized by 10 µM Yoda1 (Extended Data Fig. 6e,f,i ). Neither Yoda1 nor histamine significantly altered the maximal current, I max (Extended Data Fig. 6g,h ). 
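As an aside for readers reproducing this analysis, the adaptation-kinetics categories used above follow fixed τ inactivation cut-offs (rapidly adapting ≤ 10 ms, intermediately adapting between 10 and 30 ms, slowly adapting ≥ 30 ms), so the binning reduces to a few lines of Python. This is a minimal sketch with placeholder values, not code from this study.

    def classify_ma_current(tau_inactivation_ms, responded=True):
        """Bin a mechanically activated (MA) current by its inactivation
        time constant, following the cut-offs used above: rapidly
        adapting (RA) <= 10 ms, intermediately adapting (IA) 10-30 ms,
        slowly adapting (SA) >= 30 ms, non-responsive (NR)."""
        if not responded:
            return "NR"
        if tau_inactivation_ms <= 10:
            return "RA"
        if tau_inactivation_ms < 30:
            return "IA"
        return "SA"

    # Tally categories across a set of recorded cells (placeholder taus).
    taus = [4.2, 12.5, 8.1, 45.0, 22.3]
    counts = {}
    for tau in taus:
        label = classify_ma_current(tau)
        counts[label] = counts.get(label, 0) + 1
    print(counts)  # {'RA': 2, 'IA': 2, 'SA': 1}

Defining the boundaries in one place avoids ambiguity at the cut-offs: here τ = 10 ms falls into RA and τ = 30 ms into SA, matching the inequalities given above.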
Incubation in a sub-threshold concentration of histamine 34 (1 µM), which is insufficient to drive calcium influx, similarly enhanced Yoda1-dependent calcium influx in tdTomato + DRG neurons (Extended Data Fig. 6j–l ). Aside from direct effects on PIEZO1-dependent mechanically activated currents, histamine and other pruritogens may drive itch sensitization through cell-autonomous or -non-autonomous mechanisms, the latter analogous to the sensitization of LTMR circuitry that is observed in allodynia 35 . These findings suggest a potential role for PIEZO1-dependent mechanotransduction in alloknesis. PIEZO1 mediates mechanical itch behaviours To ascertain the role of sensory neuronal PIEZO1 in mechanical itch and alloknesis, we depleted PIEZO1 from peripheral sensory neurons using the Pirt Cre driver in a Piezo1 flox mouse ( Piezo1 fl/fl ;Pirt Cre+/− ) (ref. 28 ) and tested the resulting mice in models of acute chemical and mechanical itch. We chose this pan-sensory neuronal Cre driver to delete PIEZO1 from all peripheral neurons. In the nape model of mechanical itch 14 , 15 , 32 (Fig. 3a ), we observed a profound decrease in mechanically evoked scratching in conditional knockout (KO) mice, but not in heterozygous or wild-type littermate controls (Fig. 3b,c ). Moreover, although KO mice were capable of scratching in response to histamine injection into the nape, they did not exhibit robust alloknesis to stimulation with the 0.04 g von Frey filament after the cessation of histamine-evoked scratching 14 , 32 (Fig. 3d and Extended Data Fig. 7a–d ). We chose the 0.04 g filament for alloknesis assays because it did not evoke substantial scratching in naive wild-type mice (Fig. 3b ). Although KO mice did scratch in response to acute histamine injection, we observed a small but significant reduction that was most apparent in a reduced number of scratching bouts per scratching episode 36 (Extended Data Fig. 7b–d ). To further investigate this finding, we also tested the ability of KO mice to scratch in response to the injection of chloroquine, which activates MRGPRA3 + itch receptors 12 (that do not express Piezo1 ), or of interleukin-31 (IL-31), which activates Nppb + neurons that co-express Il31ra ( ref. 7 ). Chloroquine-evoked itch (Extended Data Fig. 7e,f ) was normal in KO mice, whereas IL-31-evoked itch and alloknesis were decreased similarly to histamine (Extended Data Fig. 7g–j ). These minor deficits in histamine- and IL-31-evoked (but not chloroquine-evoked) chemical itch may reflect a role for PIEZO1-dependent mechanical itch in amplifying scratching behaviours that depend on Sst + Nppb + neurons. To relate our findings to previous studies of mechanical itch that also tested scratching responses to ear stimulation 14 , 15 , we also probed the shaved area behind the ears of mice and observed that KO mice had profoundly reduced mechanical itch responses (Extended Data Fig. 7k ). Fig. 3: Neuronal PIEZO1 is required for mechanically evoked scratching and histamine alloknesis in mice. a , Illustration of the nape model of mechanical itch. b , Mechanical itch model in Piezo1 fl/fl or Piezo1 fl /+ ; Pirt Cre−/ − (WT; n = 11), Piezo1 fl/+ ;Pirt Cre+/ − (heterozygous (HET); n = 6) and Piezo1 fl/fl ;Pirt Cre+/ − (KO; n = 8) mice (two-way ANOVA: **** P genotype < 0.0001, F (2, 22) = 17.88; Sidak’s P adjusted : ** P 0.07g = 0.0017, *** P 0.16g = 0.0003, *** P 0.4g = 0.0005).
c , Cumulative per cent scratch responses from b (Kruskal–Wallis: *** P = 0.0007, χ 2 = 14.52; Dunn’s *** P adjusted = 0.0008). d , Histamine alloknesis (Kruskal–Wallis: *** P = 0.0003, χ 2 = 16.43; Dunn’s: *** P adjusted = 0.0001) from mice in b . Data in b – d are from three experiments. e , Mechanical itch model in Piezo1 fl/fl ; Sst Cre −/− (WT; n = 8) and Piezo1 fl/fl ;Sst Cre+/ − (KO; n = 10) mice (two-way ANOVA: **** P genotype < 0.0001, F (1, 17) = 44.87; Sidak’s P adjusted : ** P 0.07g = 0.0086, *** P 0.16g = 0.0002, ** P 0.4g = 0.0027). f , Cumulative per cent scratch responses from e (Mann–Whitney: **** P < 0.0001, U = 0). g , Histamine alloknesis (Mann–Whitney: **** P < 0.0001, U = 0) from mice in e . Data in e – g are from two experiments. h , Cheek model of Yoda1-evoked itch (Kruskal–Wallis: **** P < 0.0001, χ 2 = 12.88; Dunn’s: ** P adjusted = 0.0014; n = 4 mice from 1 experiment). No wiping was observed. i , Nape model of Yoda1-evoked itch (Mann–Whitney: *** P = 0.0003, U = 1; n = 8 mice from 1 experiment). j , Nape model of itch (50 µM Yoda1) in Piezo1 fl/fl or Piezo1 fl/+ ;Pirt Cre−/− (WT) and Piezo1 fl/fl ; Pirt Cre+/− (KO) mice (Mann–Whitney: *** P = 0.0004, U = 0; n = 9 WT and 6 KO mice from 2 experiments). k , Mechanical itch model in Piezo1 +/+ (PIEZO1 WT ; n = 9) and Piezo1 GOF/GOF or Piezo1 GOF/+ (PIEZO1 GOF ; n = 17) mice (two-way ANOVA: **** P genotype < 0.0001, F (1, 25) = 24.16; Sidak’s P adjusted : * P 0.16g = 0.0493, * P 0.4g = 0.0441, *** P 0.6g = 0.0008). l , Cumulative per cent scratch responses from k (Mann–Whitney: ** P = 0.0013, U = 14). Error bars represent mean ± s.e.m. of n biological replicates (mice) and statistical tests are two-tailed where applicable. Data in k – l are from three experiments. When we examined other mechanosensory behaviours in the KO mice, we observed a small but significant increase in paw withdrawal threshold as measured using the von Frey assay (Extended Data Fig. 7l–m ); however, mechanonociceptive reflex behaviours to punctate and blunt stimuli were normal (Extended Data Fig. 7n–q ). Proprioceptive behaviours were unaffected (Extended Data Fig. 7r ), unlike with the sensory-specific loss of PIEZO2 (ref. 37 ). We postulate that the small effect on mechanical threshold could be due to a potential role for Piezo1 -expressing Sst + Nppb + and/or MRGPRD + neurons in baseline mechanosensitivity—a function suggested previously for these neuronal subpopulations 5 , 22 . We hypothesized that PIEZO1 acts primarily through the Sst + Nppb + subpopulation of itch neurons to promote mechanical itch and itch sensitization, as Piezo1 was more strongly expressed within this subpopulation than within the Mrgprd + cells (Fig. 1 ), whereas the majority of mechanically activated currents in MRGPRD + cells are dependent on PIEZO2 (ref. 30 ). To this end, we generated Piezo1 fl/fl ;Sst Cre+/− mice and observed that they largely phenocopied the Piezo1 fl/fl ;Pirt Cre+/− mice with respect to mechanical itch and histamine-evoked alloknesis (Fig. 3e–g ). Thus, we conclude that PIEZO1 transduces mechanical itch and alloknesis primarily through Sst + Nppb + dedicated itch receptors. We subsequently investigated whether the activation of PIEZO1 can trigger acute itch. We found that injection of Yoda1 selectively induced scratching behaviours in the cheek model, which allows for discrimination between itch-evoked hind limb scratching and pain-evoked forepaw wiping 38 (Fig.
3h ), and induced robust scratching in the nape model of itch (Fig. 3i ). No mechanical allodynia was observed after intraplantar injection of Yoda1 (Extended Data Fig. 8a ). Notably, Piezo1 fl/fl ;Pirt Cre+/ − mice did not exhibit scratching in response to Yoda1, suggestive of a sensory-neuron-specific mechanism by which Yoda1 and PIEZO1 selectively trigger itch and not pain (Fig. 3j ). In addition, we observed no overt signs of inflammation 30 min after the injection of Yoda1 into the nape skin (Extended Data Fig. 8b,c ), supporting our knockout studies that showed that Yoda1 evokes acute itch primarily through somatosensory neurons rather than through indirect effects on PIEZO1 + immune cells or keratinocytes. To answer the question of whether enhanced PIEZO1 activity modulates mechanical itch in vivo, we took advantage of an existing PIEZO1 gain-of-function mouse model 4 , 39 . Constitutive PIEZO1 GOF mice showed enhanced mechanically evoked scratching behaviours in response to von Frey stimulation of the shaved nape compared to controls (Fig. 3k–l ). In addition, PIEZO1 GOF mice exhibited increased histamine-evoked alloknesis (Extended Data Fig. 8f ). We also observed a slight increase in histamine-evoked itch in PIEZO1 GOF mice (Extended Data Fig. 8d,e ), implying overall increases in itch-evoked scratching in this mouse line, and consistent with the opposite effect that was observed in Piezo1 fl/fl ;Pirt Cre+/− mice. Furthermore, PIEZO1 GOF mice exhibited enhanced itch-evoked scratching after the injection of Yoda1 (Extended Data Fig. 8g ), and unlike wild-type littermate controls, developed alloknesis after Yoda1 injection (Extended Data Fig. 8h ). We speculate that the alloknesis phenotype could be due to enhanced activation of GOF itch receptors by Yoda1, which is administered at a dose limited by the solubility of the molecule 26 . Consistent with the minor increase in hind paw mechanical threshold in the PIEZO1 KO mice, we observed no evidence of constitutive allodynia (Extended Data Fig. 8i ) and only a slight decrease in the 50% withdrawal threshold of PIEZO1 GOF mice (Extended Data Fig. 8j ). There was no enhancement of acute mechanonociceptive reflexes to punctate or blunt stimuli (Extended Data Fig. 8k–m ). These results demonstrate that enhanced PIEZO1 activity selectively exacerbates mechanical itch and alloknesis in vivo. We wondered whether PIEZO1-dependent mechanical itch is relevant to chronic itch, a global health issue with a lifetime prevalence exceeding 10% in humans 40 . We tested the relevance of PIEZO1-dependent mechanical itch in a widely used mouse model of chronic itch that has been shown to mimic specific aspects of human atopic dermatitis, the most common chronic itch disorder 41 . Daily application of the vitamin D analogue MC903 (calcipotriol) induces profound erythema, xerosis, excoriation, itch-evoked scratching and itch hypersensitivity in the mouse nape, ear or cheek 42 . We observed the development of mature lesions in wild-type and Piezo1 fl/fl ;Pirt Cre+/− mice that were treated with MC903, which suggests that skin inflammation develops normally in the absence of neuronal PIEZO1 (Fig. 4a ). Of note, knockout mice showed significant deficits in mechanical itch hypersensitivity (Fig. 4b ), with mildly but significantly decreased spontaneous scratching behaviours compared to control littermates treated with MC903 (Fig. 4c ), which were largely explained by a reduced number of scratch bouts per episode (Extended Data Fig. 9a ). 
This suggests that partially independent mechanisms underlie itch hypersensitivity versus spontaneous scratching in the setting of chronic itch, and aligns with previous work showing that diverse endogenous chemical pruritogens that are released from immune and skin cells contribute to itch in this model 42 . Fig. 4: Neuronal PIEZO1 mediates itch hypersensitivity in a mouse model of chronic itch. a , Representative images ( n = 12 WT and n = 9 KO mice from two experiments) of nape skin of Piezo1 fl/fl ; Pirt Cre+/− (KO; top) and Piezo1 fl/fl ; Pirt Cre −/− (WT; bottom) littermates on day 8 of the MC903 model. b , MC903-evoked mechanical itch hypersensitivity in Piezo1 fl/fl or Piezo1 fl/+ ;Pirt Cre−/− (WT), Piezo1 fl/+ ;Pirt Cre+/− (HET) and Piezo1 fl/fl ;Pirt Cre+/− (KO) mice (Kruskal–Wallis: *** P = 0.0002, χ 2 = 17.36; Dunn’s: *** P adjusted = 0.0004). c , MC903 spontaneous scratching (Kruskal–Wallis: * P = 0.0444, χ 2 = 6.231; Dunn’s: * P adjusted = 0.0319). Data in b , c are from n = 12 WT, n = 5 HET and n = 9 KO mice from two experiments. d , Mechanically evoked scratching after injection of phosphate-buffered saline (PBS) or GsMTx4 in wild-type mice, normalized to baseline; see also Extended Data Fig. 9b (three-way ANOVA: **** P treatment < 0.0001, F (1, 104) = 51.38; Tukey’s P adjusted : *** P 0.16g = 0.0002, * P 0.4g = 0.0260; n = 14 mice). e , Histamine alloknesis (Mann–Whitney: **** P < 0.0001, U = 0; n = 14 mice). f , Histamine-evoked scratching (Mann–Whitney: P = 0.0709, U = 58.50; n = 14 mice). Data in d – f are from two experiments. g , Schematic of MC903 chronic itch model experiments with acute GsMTx4. h , MC903 itch hypersensitivity before and after injection of PBS or GsMTx4 (Kruskal–Wallis: ** P = 0.0013, χ 2 = 15.66; Dunn’s (left to right): ** P adjusted = 0.0063, ** P adjusted = 0.0066, * P adjusted = 0.01; n = 8 mice). i , MC903 spontaneous scratching after injection of PBS or GsMTx4 (Mann–Whitney: P = 0.1848, U = 19; n = 8 mice). Data in h – i are from two experiments. Error bars represent mean ± s.e.m. of n biological replicates (mice) and statistical tests are two-tailed where applicable. Finally, we asked whether acute inhibition of PIEZO1 could alleviate mechanical itch and itch sensitization, and whether inhibition of PIEZO1 with the toxin GsMTx4 could phenocopy the genetic loss of PIEZO1 (refs. 39 , 43 ). Indeed, pretreatment with intraperitoneal injection of GsMTx4 to achieve systemic blockade 39 reduced mechanically evoked scratching in naive mice (Fig. 4d and Extended Data Fig. 9b ) and alloknesis after histamine injection (Fig. 4e ), but did not significantly affect histamine-evoked itch (Fig. 4f ). Furthermore, MC903-dependent itch hypersensitivity in the nape was largely attenuated in GsMTx4-treated mice (Fig. 4g,h ), whereas spontaneous scratching behaviours persisted, albeit at slightly reduced levels (Fig. 4i ). Although GsMTx4 is not a selective PIEZO1 antagonist and inhibits other mechanically activated channels in vitro 44 , when taken together with our chronic itch data from PIEZO1 conditional KO mice, this finding indicates that PIEZO1 antagonists have a potential use in the treatment of itch. Discussion In summary, our work shows that a subset of itch-sensing neurons are polymodal, responding to chemical and mechanical pruritogens. The PIEZO1 + pruriceptors described here may act either in parallel with or independently of previously identified LTMR-dependent mechanical itch circuitry 2 .
The question remains as to why PIEZO1 transduces mechanical itch when PIEZO2 is expressed in somatosensory neurons and is exquisitely sensitive to mechanical stimuli. We speculate that PIEZO1-dependent mechanical itch in slow-conducting C fibres may fuel the persistent sensation of a burrowing parasite and drive the desire to scratch until the organism is expelled from the skin, whereas Aβ-LTMR-dependent mechanical itch may have a key role in coordinating and triggering a rapid reflexive response, much like how PIEZO2 + LTMRs coordinate nociceptive reflexive behaviours (such as response to pinprick) that are largely independent of PIEZO2 (ref. 45 ). Moreover, previous in vitro work has shown that PIEZO1 is more sensitive to membrane stretch (through suction stimulation of the membrane patch in cell-attached mode) than PIEZO2 (ref. 46 ). Although we did not test the membrane stretch responsiveness of SST + neurons, one hypothesis is that the distinct mechanical activation properties of PIEZO1 may favour the mechanosensitivity of free nerve endings like those of pruriceptors, whereas the properties of PIEZO2 may specifically favour specialized touch-sensitive end organs. In addition, although we show that PIEZO1 has an essential role in SST + neurons in mechanical itch, we cannot rule out the possibility that MRGPRD + neurons contribute to mechanical itch—despite their controversial role as itch mediators 47 . With regard to this point, we did observe itch-evoked scratching in mice with chemogenetic activation of mature MRGPRD + neurons expressing the hM3Dq DREADD (designer receptor exclusively activated by a designer drug) after the injection of DREADD agonist 21 into the cheek (Extended Data Fig. 10 ), supporting previous studies that implicated MRGPRD + neurons in itch 21 . The contribution of PIEZO2 to mechanical itch remains unclear given the opposing effects of different LTMR subpopulations on mechanical itch (that is, Merkel cell afferents versus TLR5 + LTMRs) 14 , 17 . Highly selective genetic strategies will need to be developed to investigate how PIEZO1- and PIEZO2-dependent itch pathways intersect, as the present methods of accomplishing the knockout of PIEZO2 in LTMRs also target proprioceptors that coordinate reflexive behaviours 37 , which may confound the study of itch-evoked scratching. On a final note, the genetics of itch are largely attributed to variants in a handful of genes that are mainly found in white European populations, which do not contribute substantially to itch disorders in Black or African populations 48 . Given the significance of PIEZO1 variants to human health, particularly in underserved and understudied Black and African populations 4 , 39 , it will be important to examine whether and how variation in PIEZO1 contributes to itch. Such studies will require large-scale genomics databases complemented with extensive clinical phenotyping of mechanical itch and chronic itch severity, which has so far been performed only on small cohorts of individuals owing to the complexity of such experiments. Methods Statistics All statistical analyses were performed in Prism 9.3.0 (GraphPad). Error bars are defined as the mean ± s.e.m unless otherwise indicated, and wherever feasible, individual data points or total counts are plotted. For Fig. 3b,e,k , and Extended Data Figs. 7k,l , 8i and 9b , only mean ± s.e.m is plotted owing to the large number of columns, and individual values are provided in the Source Data. All tested covariates are reported in the legends. 
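For readers working outside Prism, the Mann–Whitney and Kruskal–Wallis comparisons reported throughout the figure legends map directly onto standard open-source calls. A minimal SciPy sketch with placeholder data (these are not study values):

    from scipy.stats import mannwhitneyu, kruskal

    # Placeholder per-animal measurements for illustration only.
    wt = [12.0, 15.5, 9.8, 14.2, 11.1]
    het = [13.0, 10.9, 12.4, 14.8]
    ko = [4.1, 6.3, 3.8, 5.0, 4.9]

    # Two-tailed Mann-Whitney U test for two-group comparisons.
    u_stat, p_two_group = mannwhitneyu(wt, ko, alternative="two-sided")
    print(f"U = {u_stat}, P = {p_two_group:.4f}")

    # Kruskal-Wallis for three or more groups; the paper follows this
    # with Dunn's post-hoc test (available in scikit-posthocs).
    h_stat, p_kw = kruskal(wt, het, ko)
    print(f"H = {h_stat:.2f}, P = {p_kw:.4f}")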
Two-tailed tests were performed wherever applicable. n numbers, test statistics, exact P values and degrees of freedom are indicated in the figure legends. Aside from electrophysiology data, which were analysed using previously described methods 3 , 18 , normality and/or equal variance were not assumed and so nonparametric tests were used throughout, with the appropriate post-hoc test indicated for multiple comparisons. When two or more independent variables were examined, a two- or three-way analysis of variance (ANOVA) was used, and sphericity was not assumed. Study design No analyses were performed in advance to pre-determine sample size. Sample sizes were based on similar studies in the literature 14 , 18 . All attempts at replication were successful. All experiments were repeated more than once as indicated in the figure legends except for Extended Data Figs. 7r , 8a and 10 , and n numbers (biological replicates) are indicated for those experiments in the figure legends. For those experiments that were repeated only once, it is stated as such in the figure legend. No randomization was used. Mice were arbitrarily assigned to treatment and vehicle groups for the GsMTx4 and Yoda1 experiments, as they were of identical age, genotype and sex, so no randomization was possible. For all other behaviour experiments, entire cohorts or litters of mice were tested at once by a blinded experimenter, so no allocation or randomization was needed or possible. Mice were arbitrarily assigned behavioural chamber numbers by the blinded experimenter. For all behavioural experiments, the experimenter and scorer (analyser) was blinded whenever possible to both treatment (when two or more treatments were applied) and/or genotype (when two or more genotypes were tested). For electrophysiology and calcium imaging, a single coverslip or chamber of cells from each genotype or condition was tested in alternating order with the opposing genotype or condition (for example, siRNA or drug treatment) so that genotypes and conditions were assessed in parallel. For all other experiments, no randomization was needed or possible as there were no conditions to compare between. For calcium imaging, data were analysed offline using automated routines and so blinding was not necessary. For electrophysiology, experiments were conducted as previously published without blinding 3 , 18 , 37 . For all other experiments, there were no comparisons so blinding was unnecessary. Mice All experiments were performed under the policies and recommendations of the International Association for the Study of Pain and approved by the Scripps Research Animal Care and Use Committee. Mice were kept in standard housing with a 12-h light–dark cycle set with lights on from 6 am to 6 pm, with the room temperature kept around 22 °C, and humidity between 30% and 80% (not controlled). Mice were kept on pelleted paper bedding and provided with paper square nestlets and polyvinyl chloride pipe enrichment with ad libitum access to food and water. Age-matched littermate mice were used for all in vivo experiments. For all in vivo experiments except for Figs. 3h,i and 4d–i , which used only male mice, male and female mice were used and pooled. Mouse ages ranged from 2 to 6 months for behavioural studies, and 1.5 to 4 months for electrophysiology, calcium imaging, IHC and smFISH. The homozygous Piezo1 tdTomato mice were previously described 25 and were maintained in the laboratory ( B6;129-Piezo1 tm1.1Apat/J ; Jackson Laboratories 029214). 
The HM3dGq fl/fl ;Mrgprd CreERT2+/ − mice were generated by crossing commercially available HM3dGq fl/fl mice ( B6N;129-Tg(CAG-CHRM3*,-mCitrine)1Ute/J ; Jackson Laboratories 026220) with Mrgprd CreERT2+/− mice ( Mrgprd tm1.1(cre/ERT2)Wql ; Jackson Laboratories 031286), and intercrossing the progeny to obtain the desired genotypes. Recombination was achieved with once-daily intraperitoneal injection of 75 mg per kg body weight tamoxifen (Sigma) dissolved in 0.22-µm sterile-filtered corn oil and delivered to both experimental and control mice over five consecutive days. The Ai9 fl/fl ;Sst Cre+/− mice were generated by crossing commercially available Ai9 fl/fl female mice ( B6.Cg-Gt(ROSA)26Sor tm9(CAG-tdTomato)Hze/J ; Jackson Laboratories 007909) with Ai9 fl/fl ; Sst Cre+/+ males ( B6J.Cg-Sst tm2.1(cre)Zjh/MwarJ ; Jackson Laboratories 028864). Visibly pink or red mice were not used for experiments, as some germline recombination was observed. Piezo1 fl/fl ; Sst Cre+/− mice were also generated from this line. Piezo1 fl/fl ;Pirt Cre+/− mice were generated by crossing Piezo1 fl/fl female mice ( Piezo1 tm2.1Apat/J ; Jackson Laboratories 029213) with Pirt Cre+/− males ( Pirt tm3.1(cre)Xzd , gift from X. Dong, Johns Hopkins University), and then crossing the Piezo1 fl/+ ;Pirt Cre+/− male offspring with Piezo1 fl/fl or Piezo1 fl/+ ;Ai9 fl/+ or Ai9 +/+ female mice to generate homozygous knockouts, heterozygous mice and Pirt Cre−/− control mice, some of which carried the Ai9 fl/+ allele. The PIEZO1 GOF mouse line ubiquitously carries the nucleotide change c.GG 7742-7743 AC and has been previously described 4 . Experimental PIEZO1 GOF mice were generated from heterozygous matings. The above strains were maintained on a C57BL6/J background when not intercrossed to generate desired genotypes, except for Piezo1 tdTomato and Mrgprd CreERT2+/− , which were maintained as inbred stocks. C57BL6/J wild-type male mice used in Figs. 3h,i and 4d–i were purchased from the Scripps Research Department of Animal Resources rodent breeding colony. PCR genotyping from tail snip DNA samples was performed in-house using guidelines from Jackson Laboratory. All mice except for purchased C57BL6/J mice received metal identification tags (National Band & Tag, 1005-1) on the right ear when they were between 18 and 30 days old. After weaning between 21 and 30 days of age, mice were co-housed in groups of 2–5 littermates of the same sex. smFISH For mouse experiments, DRG tissues were removed immediately, embedded in optimal cutting temperature compound (OCT, Sakura), and flash-frozen in liquid nitrogen. For human tissue, a flash-frozen T1 (thoracic)-level DRG was obtained from Anabios from one female donor aged 45 with no history of neurological disease. The human DRG was embedded into pre-chilled OCT over dry ice such that it remained frozen, and 20-µm cryosections were used for all experiments. The protocol for RNAscope Multiplex Fluorescent Reagent Kit V2 (ACDBio: 323100) was followed exactly according to the instructions for fresh-frozen tissue. Protease IV was applied for 22 min for mouse tissue and 30 min for human tissue after pilot experiments to optimize protease conditions.
Probes (all from ACDBio) for mouse Piezo1 (C1; 400181), mouse Piezo2 (C1; 400191, C2; 400191-C2), mouse Mrgprd (C3; 417921-C3), mouse Nppb (C3; 425021-C3), mouse Mrgpra3 (C2; 548161-C2), mouse Calca (C2; 417961-C2), mouse Scn10a (C2; 426011-C2), human PIEZO1 (C1; 485101), human PIEZO2 (C1 or C2; 449951, 449951-C2), human NPPB (C2; 448511-C2) and tdTomato (C2; 317041-C2) were applied to detect transcript. Quantification of images was performed manually in ImageJ (Fiji, 2.3.0/1.53f) using regions of interest (ROIs) to define the quantification area. In mouse tissues, positive ROIs were counted as those with more than five puncta per ROI, on the basis of experiments with the 3-plex Dapb negative control probe (320871). Cell borders were drawn around highly expressed marker transcript signals to define individual cells. In humans, cells with more than three puncta per ROI were counted as positive cells, based on the Dapb negative control probe. Cell borders were drawn around highly expressed marker transcript signals to define individual cells, and cells needed to have a clearly defined satellite glial border. Lipofuscin was present in human DRGs, and those areas were identified by identical fluorescence signals across multiple detection channels using published criteria 49 , and so puncta were not counted in those regions. Displayed images were uniformly cropped from the original 20× images on which quantification was performed. Immunohistochemistry For PIEZO1 tdTomato experiments, tissues were processed using a modified protocol to preserve signal 50 . In brief, fresh-frozen DRGs and trigeminal ganglia were embedded in OCT and sectioned at 20 µm. Sections were post-fixed on slides in cold 4% paraformaldehyde (PFA) in PBS for 10 min at room temperature and quenched using 20 mM glycine and 75 mM ammonium chloride with 0.1% v/v Triton X-100 in PBS for 10 min. Slides were washed in PBS and then incubated in blocking buffer (0.6% w/v fish skin gelatin with 0.05% w/v saponin in PBS with 5% v/v normal goat or donkey serum) for 1 h at room temperature. Slides were incubated in primary antibodies overnight at 4 °C in blocking buffer without serum: 1:200 rabbit anti-RFP (Rockland 600-401-379), 1:200 rat anti-PECAM1 (Sigma CBL1337-I) and 1:1,000 chicken anti-NefH (Abcam ab4680). Slides were washed in PBS, and then incubated in secondary antibodies in blocking buffer 1 h at room temperature (all 1:1,000): goat anti-rabbit AlexaFluor 594 (Life Technologies A11037), donkey anti-rat AlexaFluor 488 (Jackson 712-546-153) and donkey anti-chicken AlexaFluor 647 (Jackson 703-605-155). Samples were mounted in SlowFade Diamond and sealed with nail polish. For conventional IHC, mice were transcardially perfused with 15–30 ml ice-cold PBS followed by 30 ml 4% PFA in PBS. DRGs were dissected into PBS and post-fixed for 20 min on ice in 4% PFA in PBS. Tissues were cryoprotected overnight in 30% sucrose-PBS (w/v) at 4 °C before embedding in OCT and sectioning at 20 µm on a cryostat. Sections were briefly rinsed in PBS, washed for 10 min in 0.3% Triton X-100 in PBS (PBST), then blocked for 1 h in 5% normal goat serum in 0.3% PBST. Sections were incubated for 2 h at room temperature in rabbit anti-CGRP (Immunostar 24112) diluted 1:1,000 in 0.3% PBST. Sections were washed in PBS and incubated in 1:1,000 goat anti-rabbit AlexaFluor 488 (Thermo Fisher Scientific A32731) and 25 µg ml −1 isolectin B4 AlexaFluor 647 conjugate (Life Technologies I32450) for 1 h at room temperature. 
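Returning briefly to the smFISH quantification above, the per-ROI puncta thresholds reduce to a simple rule. The following sketch is illustrative only; the function name and counts are assumptions, not the study's ImageJ workflow.

    def classify_rois(puncta_counts, species="mouse"):
        """Label smFISH regions of interest (ROIs) as transcript-positive
        using the thresholds described above: more than five puncta per
        ROI for mouse tissue and more than three for human tissue, both
        set against the Dapb negative control probe."""
        threshold = 5 if species == "mouse" else 3
        return [count > threshold for count in puncta_counts]

    # Example with manually counted puncta for six hypothetical ROIs.
    mouse_counts = [0, 2, 7, 12, 4, 9]
    positives = classify_rois(mouse_counts, species="mouse")
    print(f"{100 * sum(positives) / len(positives):.1f}% positive")  # 50.0%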
Tissues were rinsed in PBS, mounted in Fluoromount G + DAPI (4’,6-diamidino-2-phenylindole) and sealed with nail polish. For both smFISH and IHC, all samples were imaged on either a Nikon A1 or a Nikon C2 confocal microscope and the imaging settings (laser power, gain, 1,024 × 1,024 original resolution, pixel dwell, objective and use of Nyquist zoom) were kept consistent within experiments. For all images, brightness and contrast adjustments were uniformly applied to the entire image. Cell culture Cell culture was carried out as previously described 18 . In brief, DRGs were dissected and incubated for 60 min at 37 °C in 6.25 mg ml −1 collagenase IV (Life Technologies 17104-019) in serum-free medium, followed by incubation in 1 U ml −1 papain (Fisher NC9199962) for 30 min at 37 °C. Cells were triturated and transferred into medium with 10% fetal bovine serum supplemented with the following growth factors (from Gibco): GDNF 50 ng ml −1 , NGF 100 ng ml −1 , NT-4 50 ng ml −1 , NT-3 50 ng ml −1 and BDNF 50 ng ml −1 . For calcium imaging, 10 µM cytosine arabinoside (Sigma) was added to the medium. Cells were plated onto poly- d -Lysine and laminin-coated glass coverslips (Corning, for electrophysiology) or eight-well chambered coverslips (Ibidi, for calcium imaging). Media: HyClone DMEM/F12 1:1 with l -glutamine and HEPES (Cytiva or Gibco) supplemented with 1:100 penicillin–streptomycin (Gibco). For calcium imaging, cells were used within one to three days. For nucleofection experiments, cells were used three to five days after plating. DRG cultures and transfection of siRNA were performed exactly as described 3 , 37 using the Amaxa P3 Primary Cell 4D-Nucleofector X Kit S (Lonza, V4XP-3032); 120 pmol of siRNA and 400 ng of pIRES2-eGFP (Clontech 6029-1) plasmid were nucleofected per reaction. Reagents: mouse Piezo1 siRNA (ON-TARGETplus mouse Piezo1 (234839) siRNA SMARTpool; L-061455-00-0005), mouse Piezo2 siRNA (ON-TARGETplus mouse Piezo2 (667742) siRNA SMARTpool; L-163012-00-0005) and non-targeting siRNA (ON-TARGETplus non-targeting siRNA; D-001810-10-05). Calcium imaging Cells were loaded for 60 min at room temperature with 10 µM Fura-2AM (Life Technologies F1201) supplemented with 0.01% Pluronic F-127 (w/v; Life Technologies) in a physiological Ringer’s solution containing 127 mM NaCl, 3 mM KCl, 10 mM HEPES, 2.5 mM CaCl 2 , 1 mM MgCl 2 and 10 mM D-(+)-glucose, pH 7.3. All chemicals were purchased from Sigma. Neurons were presented with 20 µM Yoda1 (Tocris) in 1% DMSO-Ringer’s vehicle, 100 µM histamine dihydrochloride (Tocris) in Ringer’s, 1 mM β-alanine (Tocris) in Ringer’s and/or 100 µM allyl isothiocyanate (AITC, Sigma) in 0.1% DMSO-Ringer’s. Images were acquired using MetaFluor software (v.7.8.2.0) and displayed as the ratio of 340 nm/380 nm. Cells were identified as neurons by eliciting depolarization with high-potassium Ringer’s solution (71.5 mM) at the end of each experiment. Responding neurons were defined as those having an increase of more than 15% from the baseline ratio. Analysis was performed using previously established methods in Igor Pro 6.3.7 (WaveMetrics) 51 , 52 . Fifty-seven individual neurons with compound addition artefacts (large spikes in the calcium imaging trace) were excluded from the area under the curve analysis but were still used for the peak normalized ratio analysis. In separate experiments, cells were incubated for 5 min in a sub-threshold 1 µM histamine (which did not elicit calcium transients), before stimulation with 20 µM Yoda1. 
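The responder criterion used in these calcium imaging experiments (an increase of more than 15% over the baseline ratio) can likewise be expressed compactly. A sketch under the assumption that each cell's trace is an array of F 340 / F 380 ratios with a defined baseline window; the window length is an illustrative assumption, not a published parameter.

    import numpy as np

    def is_responder(ratio_trace, baseline_frames=10, threshold=0.15):
        """Flag a neuron as responding if its F340/F380 ratio rises more
        than 15% above the mean baseline ratio (criterion given above).
        The 10-frame baseline window is an illustrative assumption."""
        baseline = np.mean(ratio_trace[:baseline_frames])
        peak = np.max(ratio_trace[baseline_frames:])
        return (peak - baseline) / baseline > threshold

    # Example: a flat baseline followed by a Yoda1-like transient.
    trace = np.concatenate([np.full(10, 1.0), [1.05, 1.40, 1.60, 1.30, 1.10]])
    print(is_responder(trace))  # True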
Fura-2 ratios were normalized to the baseline ratio: normalized F 340 / F 380 = Ratio/Ratio t=0 , where Ratio t=0 is the F 340 / F 380 value at the start of the recording. Electrophysiology siRNA knockdown of Piezo genes Whole-cell patch clamp recordings were performed using an Axopatch 200B amplifier as described using standard methods to achieve an access resistance of 6.6 ± 0.2 MΩ ( n = 186) 3 , 18 . During recording, cells were maintained at 21–23 °C in physiological Ringer’s solution and clamped at −80 mV. Electrodes had resistances of 3.4 ± 0.1 MΩ ( n = 186) when filled with gluconate-based low-chloride intracellular solution: 100 mM K-gluconate, 25 mM KCl, 0.483 mM CaCl 2 , 3 mM MgCl 2 , 10 mM HEPES, 1 mM BAPTA tetrapotassium salt, 4 mM Mg-ATP and 0.4 mM Na-GTP (pH 7.3 with KOH). Neuronal somata were tested for mechanosensitivity using a fire-polished glass probe. The probe displacement was advanced in increments of 1 μm using a computer-controlled piezoelectric stimulator 3 , 18 . All data were analysed as previously described 3 , 18 , 37 using pClamp 10 and Prism 9.3.0. Histamine sensitization of mechanically activated currents Whole-cell patch clamp recordings were performed in parallel by two experimenters using either an Axopatch 200B amplifier or a Multiclamp 700A amplifier. Baseline mechanically activated currents were measured as described above using increasing 0.5-µm displacement increments until the stimulus-intensity–response relationship approached I max . Histamine dihydrochloride (100 µM; Tocris) was delivered by gravity perfusion at a rate of 2–3 ml min −1 . Mechanically activated currents were assessed again in the presence of histamine, which elicited an inward current. Washout of histamine was performed over several minutes. Cells were finally perfused with 10 µM Yoda1 to assess whether the inactivation kinetics of the mechanically activated currents were slowed as has been previously shown for heterologous PIEZO1 currents 26 . Behavioural studies All behavioural experiments were performed between 12:00 and 18:00 in the same room. When multiple tests were performed on a cohort of mice (as in Fig. 3 ), tests were performed in the following order over the course of seven days: von Frey series, up–down von Frey, pinprick, Randall–Selitto, tail clip, mechanical itch and acute histamine itch followed by measurement of alloknesis. With the exception of the MC903 model, for which mice were singly housed because cagemates could ingest the topically applied chemical, mice were co-housed with one to four littermates of the same sex. Mice with noticeable lesions, wounds, ear chondritis or poor physical condition were not used for behavioural studies. The experimenter and scorer(s) (for itch) were blind to the genotype and the compound injected (where relevant). Itch-evoked scratching behaviour Itch and acute pain behavioural measurements were performed as previously described 51 , 53 , 54 . Mice were shaved on the nape of the neck, the fluffy hairs of the back of the left ear, or the right cheek five to seven days before the experiment with surgical clippers under 1–2% isoflurane. Unless indicated otherwise, all itch behaviours were performed in the nape. For all itch behaviour experiments, mice were acclimated in the behaviour chambers for one hour on each of the two days before behavioural measurements.
For chemical itch experiments, compounds (histamine dihydrochloride: 50 µg in PBS, Tocris; Yoda1: 14.2 ng, 142 ng or 355 ng in 1% DMSO-PBS, Tocris; chloroquine diphosphate: 200 µg in PBS, Tocris; recombinant mouse IL-31: 60 pmol in 0.9% NaCl, Peprotech; DREADD agonist 21: 25 µg in PBS, Tocris) were injected via the intradermal route using a 31 G insulin syringe in a total volume of 20 µl. Mice were individually placed into covered four-part plexiglass chambers with opaque dividers (Ugo Basile) on a plexiglass platform (Fab Glass and Mirror) with a small square of paper bedding to absorb excess urine. Bout and episode quantity, and episode length, were manually scored from videos recorded with either a GoPro Hero 8 camera or a Nikon D3200 camera. All behaviour videos were recorded from below using a mirror and scored for 30 min, except for Fig. 4f,i , which were recorded and scored for 25 min. Behavioural scoring was performed using QuickTime 10.4. A scratch episode was defined as a period of one or more scratching bouts from the moment the paw was lifted from the plexiglass floor to when it was returned, or paw grooming persisting for three or more seconds. A bout was defined as a series of one or more scratches within an episode in which the paw was lifted towards and then away from the site of scratching. Wipes were defined as unilateral forepaw motions on the cheek that did not occur during a period of facial grooming (in which the face is wiped with both paws). Mechanical itch Mechanical itch experiments were performed similarly to previously described methods 14 , 32 . Mice were placed into four-part plexiglass chambers on a plexiglass platform and acclimated for 30 min. From above, the mice were probed on the shaved nape or the shaved back of the left ear (Extended Data Fig. 7k ) with a descending force-series of five trials per force using von Frey monofilaments (Touch Test) ranging from 1 g to 0.008 g. A positive response was scored as one or more instances of site-directed scratching with the hind paw. Mice that were spontaneously scratching were not probed until 1 min after scratch cessation. Cumulative scratch responses report the total number of scratch responses divided by the total number of trials as a percentage, regardless of filament force (a sketch of this calculation appears below). Alloknesis models In the histamine-, IL-31- and Yoda1-evoked alloknesis models, itch-evoked scratching was recorded for 30 min immediately after injection of 50 µg histamine, 60 pmol IL-31 or 355 ng Yoda1, before assessment. The shaved nape was probed for mechanical itch responses (see above) with the 0.04 g filament three times every 5 min for a total of 30 min (21 tests in total) 14 , 32 . MC903-induced chronic itch The MC903 model of chronic itch was performed as previously described 42 , 51 . In brief, mice were shaved on the nape and singly housed five days before the start of the model. MC903 (0.2 mM; Calcipotriol, Tocris) was prepared fresh in absolute ethanol, and 20 µl was applied using a micropipette to the skin each morning between 07:00 and 09:00. On day 8, spontaneous scratching was recorded for 30 min before the assessment of itch hypersensitivity using the alloknesis method described above. GsMTx4 experiments GsMTx4 was acquired from Abcam, prepared fresh in sterile PBS and injected intraperitoneally at 540 µg per kg body weight 39 . Baseline mechanical itch was assessed the day before injection using an attenuated filament series from 0.4 g to 0.04 g.
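The force-series scoring used in these mechanical itch assays reduces to a simple tally. A minimal sketch of the cumulative per cent scratch response calculation defined above (the trial outcomes are placeholders):

    def cumulative_scratch_percent(responses_by_force):
        """Total positive trials divided by total trials, as a
        percentage, regardless of filament force. Input maps filament
        force (g) to per-trial outcomes (True = one or more instances
        of site-directed scratching with the hind paw)."""
        total = sum(len(trials) for trials in responses_by_force.values())
        positives = sum(sum(trials) for trials in responses_by_force.values())
        return 100 * positives / total

    # Five trials per force in a descending series (placeholder data).
    responses = {
        1.0:  [True, True, True, False, True],
        0.4:  [True, False, True, True, False],
        0.16: [False, True, False, False, True],
        0.04: [False, False, False, False, False],
    }
    print(f"{cumulative_scratch_percent(responses):.1f}%")  # 45.0%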
The following day, GsMTx4 or PBS vehicle was injected, and histamine-evoked scratching was recorded for 25 min 1 h after injection. Histamine alloknesis was assessed immediately afterwards. For MC903-induced itch behaviours, baseline mechanical itch hypersensitivity was assessed on day 7 of the model. On day 8, GsMTx4 was administered 1 h before the itch-evoked scratching measurements (recorded for 25 min) and the itch hypersensitivity assay described above. von Frey assays The mechanical threshold was measured using calibrated von Frey monofilaments (Touch Test) on a metal mesh platform (Ugo Basile). von Frey experiments were performed as previously described using the up–down method starting with 1 g, or a descending force-series of four trials per force from 4 g to 0.008 g (refs. 18 , 37 ); a sketch of the conventional 50% threshold conversion for up–down data appears below. Valid responses included fast paw withdrawal; licking, biting or shaking of the affected paw; or flinching. Mice were allowed to acclimate on the platform for 1 h before measurements. For von Frey mechanical allodynia behaviour, 355 ng Yoda1 was injected into the plantar surface of the hind paw and the mechanical threshold was quantified using the up–down method just before injection, and 5 min, 15 min and 30 min after injection. Pinprick The pinprick assay was conducted on the von Frey testing platform. The mouse hind paw was poked with a 27 G syringe needle without breaking the skin to induce fast acute mechanical pain 18 , 37 . Each paw was stimulated 10 times with the needle, with a 5-min rest in between trials, and the per cent withdrawal (fast withdrawal; licking, biting or shaking of paw; jumping; and/or flinching) was calculated from the total number of trials. For latency measurements, the assay was performed just as above, except that the needle was soldered to a braided copper wire that was connected by a BNC cable to a standard digital oscilloscope (Tektronix). Using the 'trigger' mode, the duration of the voltage trace indicated how long the paw was in contact with the needle, giving the latency to withdrawal. Tail clip The tail clip assay was performed as previously described 18 , 37 . Mice were acclimated on a metal benchtop for 15 min in clear circular plexiglass chambers before assessment. The alligator clip was placed near the base of the mouse tail. A response was scored when the mice showed awareness of the clip by biting, vocalization, grasping of the tail or a jumping response. Latency was measured with a stopwatch, with a minimum recordable time of 1 s. Randall–Selitto The Randall–Selitto assay was performed as previously described 18 . In brief, mice were gently restrained in the hand of the experimenter and a pinching force was applied to the hind paw using a Randall–Selitto device (IITC Life Sciences). A 300-g cut-off was used. A response was scored by any visible flinching of the hind limb or audible vocalization. Proprioception assay In brief, naive adult mice were restrained by the tail and held over the countertop or home cage and the hind limbs were photographed. A 0–2 scoring system was developed, in which images of a Piezo2 fl/fl ;Hoxb8 Cre+/− mouse 37 represented '0', or severe proprioceptive deficit; images of a C57BL6/J mouse represented '2', or normal; and any intermediate or uncertain images were scored a '1', which could have been indicative of transient limb positioning from a proprioceptively normal mouse or a mild compromise in proprioception.
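As context for the von Frey assays above: up-down data of this kind are conventionally converted to a 50% withdrawal threshold with the Dixon formula, 50% threshold (g) = 10^(Xf + kδ)/10,000, where Xf is the log-unit value of the final filament used, k is a constant tabulated from the response pattern and δ is the mean difference between filaments in log units. The sketch below implements only this standard published formula, not code from this study, and the example values are illustrative.

    def dixon_threshold_g(xf, k, delta):
        """Dixon/Chaplan 50% withdrawal threshold in grams.
        xf: log10 unit value of the final von Frey filament used,
        k: tabulated constant for the observed response pattern,
        delta: mean difference (log units) between filaments."""
        return (10 ** (xf + k * delta)) / 10_000

    # A 1 g filament has a log-unit value of about 4.08; k and delta
    # here are placeholders (k must be looked up in Dixon's table).
    print(f"{dixon_threshold_g(4.08, k=-0.5, delta=0.224):.3f} g")  # ~0.929 g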
Images were scored by five independent, blinded scorers and the results of each experimenter were averaged for each mouse. Histology Yoda1 (355 ng) or vehicle was injected intradermally into the shaved nape skin of C57BL6/J mice. Mice were euthanized 30 min after injection, the skin was de-haired with depilatory cream (Nair) and then rinsed with water, and the section of back skin immediately around the site of injection was dissected and fixed in 10% formalin for paraffin embedding, sectioning and haematoxylin and eosin (H&E) staining. Skin sections were imaged at 20× using a Keyence microscope. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Raw data are available from the authors upon reasonable request. The previously published single-cell RNA-seq data shown in Extended Data Fig. 1 are available at: . Source data are provided with this paper.
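To make the scoring arithmetic explicit, the short sketch below computes the cumulative scratch response defined in the Methods above (total positive responses divided by total trials, expressed as a percentage, regardless of filament force). It is an illustrative example rather than the authors' code: scoring in the study was performed manually from video, and the function name and example counts below are hypothetical.

# Minimal sketch of the cumulative scratch-response arithmetic described
# in the Methods. All names and example values are hypothetical; the
# study scored behaviour manually from video.

def cumulative_scratch_response(responses_per_force, trials_per_force=5):
    """Percentage of positive trials pooled across all von Frey forces,
    regardless of filament force."""
    total_responses = sum(responses_per_force.values())
    total_trials = trials_per_force * len(responses_per_force)
    return 100.0 * total_responses / total_trials

# Hypothetical descending force series (g), five trials per force.
example = {1.0: 5, 0.6: 4, 0.16: 3, 0.04: 2, 0.008: 0}
print(f"{cumulative_scratch_response(example):.1f}% cumulative response")

With the hypothetical counts above (14 positive responses in 25 trials), the sketch reports a cumulative response of 56.0%.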
Scientists at Scripps Research Institute have identified a protein in sensory nerves that works as a key detector of itch—specifically the "mechanical" itch stimulus of crawling insects, wool fibers, or other irritating objects that touch the skin. The discovery, published June 22, 2022, in Nature, is the first identification of a sensor for mechanical itch rather than chemically triggered itch. It could lead to better treatments for itch conditions such as eczema and psoriasis. "These findings help us untangle the complexity of itch sensation, and suggest that PIEZO1 inhibitors could be very useful clinically," says study senior author Ardem Patapoutian, Ph.D., a professor in the Department of Neuroscience at Scripps Research, and a Howard Hughes Medical Institute investigator. Itch is a distinct sensation with its own nerve circuitry and evolutionary purpose—likely to alert organisms to potentially harmful chemicals, insects and parasites. Researchers over the past decade or so have identified itch-specific subsets of spinal neurons that extend nerve fibers into the skin and are sensitive to chemical triggers of itch such as the allergy mediator histamine. But so far, relatively little has been discovered about the circuitry of mechanical itch. PIEZO1's role in mechanical itch was unexpected. Patapoutian won a share of last year's Nobel Prize in Physiology or Medicine for his lab's pioneering research on PIEZO1 and its sister protein PIEZO2. These unique, propeller-shaped "mechanosensor" ion channels are embedded in the outer membranes of many cell types. They become activated when mechanically distorted, opening their ion channels and triggering various downstream events. Since 2010, Patapoutian and colleagues have shown that PIEZO2 is a key mechanosensor for light touch, the feeling of the positioning of the body and limbs, and the urge to urinate—all via nerves in various tissues and organs. By contrast, the researchers have found that PIEZO1 has a variety of non-sensory roles throughout the body, for example in blood vessels and red blood cells. While their initial studies suggested that PIEZO1 was not expressed in sensory neurons, other recent investigations have suggested that it is expressed at low levels in some subsets of these neurons. In the new study, Patapoutian and his team, including study first author Rose Hill, Ph.D., a postdoctoral research associate, followed up on this surprising lead. In experiments in mice, they confirmed that PIEZO1 is expressed, and appears to be a functional, mechanical pressure-sensitive ion channel protein, in two different types of sensory neuron that were already implicated in chemical itch. Mice with an overactive form of PIEZO1 were markedly more sensitive to itch sensations. By contrast, mice lacking PIEZO1 in their sensory neurons scratched themselves far less when stimulated on the skin with filaments that normally would trigger strong itch sensations. The researchers also showed that a PIEZO1-blocking compound alleviates scratching behaviors in mice with the equivalent of eczema. "We did see a dramatic effect on itch with this compound, and though it wasn't specific enough against PIEZO1 to develop into a drug, we hope eventually to develop a much more PIEZO1-specific compound for treating itch conditions," Hill says. 
Curiously, the absence or enhancement of PIEZO1 activity in mice also caused at least a small reduction or increase, respectively, in scratching evoked by chemical itch triggers such as histamine—implying that mechanical and chemical itch signals are, in some cases, transmitted by the same sensory neurons. The researchers are now investigating whether variants of the PIEZO1 gene in the human population are related to itch sensitivity. The Patapoutian lab published a paper in 2018 showing that a mildly overactive form of PIEZO1, which also has the effect of making red blood cells relatively resistant to malaria parasites, is present in about a third of people of African ancestry.
10.1038/s41586-022-04860-5
Medicine
Genetic 'fingerprints' implicate gut bacterium in bowel cancer
Pleguezuelos-Manzano C., Puschhof J. et al. 2020. A mutational signature in human colorectal cancer induced by genotoxic pks+ E. coli. Nature. DOI: 10.1038/s41586-020-2080-8 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2080-8
https://medicalxpress.com/news/2020-02-genetic-fingerprints-implicate-gut-bacterium.html
Abstract Various species of the intestinal microbiota have been associated with the development of colorectal cancer 1 , 2 , but it has not been demonstrated that bacteria have a direct role in the occurrence of oncogenic mutations. Escherichia coli can carry the pathogenicity island pks , which encodes a set of enzymes that synthesize colibactin 3 . This compound is believed to alkylate DNA on adenine residues 4 , 5 and to induce double-strand breaks in cultured cells 3 . Here we expose human intestinal organoids to genotoxic pks + E. coli by repeated luminal injection over five months. Whole-genome sequencing of clonal organoids before and after this exposure revealed a distinct mutational signature that was absent from organoids injected with isogenic pks -mutant bacteria. The same mutational signature was detected in a subset of 5,876 human cancer genomes from two independent cohorts, predominantly in colorectal cancer. Our study describes a distinct mutational signature in colorectal cancer and implies that the underlying mutational process results directly from past exposure to bacteria carrying the colibactin-producing pks pathogenicity island. Main The intestinal microbiome has long been suggested to be involved in colorectal cancer (CRC) tumorigenesis 1 , 2 . Various bacterial species have been reported to be enriched in stool and biopsy samples from patients with CRC 6 , 7 , 8 , 9 , including genotoxic strains of E. coli 3 , 6 , 10 , 11 . The genome of these genotoxic E. coli harbours a 50-kb hybrid polyketide-nonribosomal peptide synthase operon ( pks , also referred to as clb ) that is responsible for the production of the genotoxin colibactin. pks + E. coli are present in a substantial fraction of individuals (about 20% of healthy individuals, about 40% of patients with inflammatory bowel disease, and about 60% of patients with familial adenomatous polyposis or CRC) 6 , 10 , 11 . pks + E. coli induce—among other things—interstrand crosslinks (ICLs) and double-strand breaks (DSBs) in epithelial cell lines 3 , 10 , 11 , 12 and in gnotobiotic mouse models of CRC, in which they can also contribute to tumorigenesis 6 , 11 . Recently, two studies have reported that colibactin–adenine adducts are formed in mammalian cells exposed to pks + E. coli 4 , 5 . Whereas the chemistry of the interaction between colibactin and DNA is thus well established, the outcome of this process in terms of recognizable mutations remains to be determined. Recent advances in sequencing technologies and the application of novel mathematical approaches allow somatic mutational patterns to be classified. More than 50 mutational signatures have been defined using a mutational signature analysis that includes the bases immediately 5′ and 3′ to a single-base substitution (SBS), and a number of different contexts that characterize insertions and deletions (indels) 13 , 14 . For some of these mutational signatures, the underlying causes (for example, tobacco smoke, UV light, specific genetic DNA repair defects) are known 13 , 15 , 16 . However, for many the underlying aetiology remains unclear. Human intestinal organoids, which are established from primary crypt stem cells 17 , have been useful for identifying the underlying causes of mutational signatures 18 : after being exposed to a specific mutational agent in culture, the organoids can be subcloned and analysed by whole-genome sequencing (WGS) to identify the resulting mutational signature 16 , 19 , 20 . To define the mutagenic characteristics of pks + E. 
coli , we developed a co-culture protocol in which a pks + E. coli strain (originally derived from a CRC biopsy 21 ) was microinjected into the lumen of clonal human intestinal organoids 22 (Fig. 1a, b ). An isogenic clbQ knockout strain that cannot produce active colibactin 21 , 23 served as a negative control. Both bacterial strains were viable for at least three days in co-culture and followed similar growth dynamics (Fig. 1c ). DSBs and ICLs, visualized by γH2AX and FANCD2 immunofluorescence, respectively, were induced specifically in epithelial cells exposed to pks + E. coli (Fig. 1d, e , Extended Data Fig. 1a ), confirming that pks + E. coli induced DNA damage in our model. There was no substantial difference in viability between organoids exposed to pks + E. coli and those exposed to pks Δ clbQ E. coli , although there was a modest decrease for both when compared to the organoids injected with dye only (Extended Data Fig. 1b, c ). We then performed repeated injections (with pks + E. coli , pks Δ clbQ E. coli or dye only) into single cell-derived organoids, in order to achieve long-term exposure over a period of five months. Subsequently, we established subclonal organoids from individual cells extracted from the exposed organoids. For each condition, three subclones were subjected to WGS (Fig. 2a ). We also subjected the original clonal cultures to WGS to subtract somatic mutations that were already present before co-culture. Organoids exposed to pks + E. coli presented increased numbers of SBSs compared to those exposed to pks Δ clbQ E. coli , with a bias towards T > N substitutions (Fig. 2b ). These T > N substitutions occurred preferentially in A T A, A T T and T T T (with the middle base mutated). From this, we defined a pks -specific single-base substitution signature (SBS- pks ; Fig. 2c ). This mutational signature was not observed in organoids exposed to pks Δ clbQ E. coli or dye only (Fig. 2b, c , Extended Data Fig. 2a–c ), proving that it is a direct consequence of exposure to pks + E. coli . Furthermore, exposure to pks + E. coli induced a characteristic small indel signature (ID- pks ), which was characterized by single T deletions at T homopolymers (Fig. 2d, e , Extended Data Fig. 2d–f ). SBS- pks and ID- pks were replicated in an independent human intestinal organoid line (Extended Data Fig. 3a–d ; SBS cosine similarity, 0.77; ID cosine similarity, 0.93) and with a clbQ -knockout E. coli strain recomplemented with the clbQ locus ( pks Δ clbQ:clbQ ) (Extended Data Fig. 3e–h ; SBS cosine similarity, 0.95; ID cosine similarity, 0.95). Fig. 1: Co-culture of healthy human intestinal organoids with genotoxic E. coli induces DNA damage. a , Schematic representation of microinjection of genotoxic E. coli into the lumen of human intestinal organoids. b , Scanning electron microscopy image illustrating direct contact between organoid apical side and pks + E. coli after 24 h co-culture. Scale bar, 10 μm. c , Mean ± s.d. bacterial load of pks + or pks Δ clbQ at 0, 1, 2 and 3 days after co-culture establishment ( n = 8 co-cultures per condition and time point, except pks + day 2 ( n = 7) and pks Δ clbQ day 3 ( n = 6)). CFU, colony-forming units. d , Representative images of DNA damage induction after 1 day of co-culture, measured by γH2AX immunofluorescence. One organoid is shown per image with one nucleus in the inset (expansion of boxed area). Scale bars, 10 μm (main image); 2 μm (inset). MMC (mitomycin C), positive control for double-strand break induction. 
e , Quantification of data from d : mean ± s.d. percentage of nuclei positive for γH2AX foci in organoids injected with pks + E. coli ( n = 9 organoids), pks Δ clbQ E. coli ( n = 7 organoids), dye ( n = 7 organoids) or mitomycin C (MMC) ( n = 7 organoids) after 1 day of co-culture. Full size image Fig. 2: Long-term co-culture with pks + E. coli induces SBS- pks and ID- pks mutational signatures. a , Schematic representation of the experimental setup. b , Bar segment height indicates the mean ± s.d. number of SBSs that accumulated in organoids co-cultured with either pks + or pks Δ clbQ E. coli ( n = 3 clones). Dot position above the bottom of the corresponding bar segment (T > N, black; C > N, grey) indicates the number of mutations for each clone. c , SBS 96-trinucleotide mutational spectra in organoids exposed to either pks + (top) or pks Δ clbQ (middle) E. coli . The bottom panel depicts the SBS- pks signature, which was defined by subtracting SBS mutations under the pks Δ clbQ condition from those under the pks + condition. SBSs are indicated above the plot. Most mutated trinucleotide sequences are highlighted below the bottom axis as ‘5′ base.3′ base’, with the dot indicating the position of the substituted nucleotide. d , Bar segment height indicates the mean ± s.d. number of indels that accumulated in organoids co-cultured with either pks + or pks Δ clbQ E. coli ( n = 3 clones). Dot position above the bottom of the corresponding bar segment (T deletion in T-homopolymer, black; other indels, grey) indicates the number of mutations for each clone. e , Indel mutational spectra observed in organoids exposed to either pks + (top) or pks Δ clbQ (middle) E. coli . The bottom panel depicts the ID- pks signature, which was defined by subtracting indel mutations under the pks Δ clbQ condition from those under the pks + condition. Full size image Next, we investigated whether the SBS- pks and ID- pks mutations were characterized by other recurrent patterns. First, the assessed DNA stretch was extended beyond the nucleotide triplet. This uncovered the preferred presence of an adenine residue 3 bp upstream of the mutated SBS- pks T > N site (Fig. 3a ). Similarly, mutations that contributed to the ID- pks signature in poly-T stretches showed enrichment of adenines immediately upstream of the affected poly-T stretch (Fig. 3b ). Notably, the lengths of the adenine stretch and the T-homopolymer were inversely correlated, consistently resulting in a combined length of five or more A/T nucleotides (Extended Data Fig. 4a ). While SBS- pks and ID- pks are the predominant mutational outcomes of colibactin exposure, we also observed longer deletions at sites containing the ID- pks motif in organoids treated with pks + E. coli (Fig. 3c ). Additionally, the SBS- pks signature exhibited a striking transcriptional strand bias (Fig. 3d, e ). We speculate that these observations reflect preferential repair of alkylated adenosines on the transcribed strand by transcription-coupled nucleotide excision repair. These features clearly distinguish the pks signature from published signatures of alkylating agents or other factors 19 . Fig. 3: Consensus motifs and extended features of SBS- pks and ID- pks mutational signatures. a , Two-bit representation of the extended sequence context of T > N mutations observed in organoids exposed to pks + E. coli . Green, highlighted T > N trinucleotide sequence; blue, highlighted A-enriched position characteristic of the SBS- pks mutations. 
b , Two-bit representation of the extended sequence context of single T deletions in T homopolymers observed in organoids exposed to pks + E. coli . Green, highlighted T homopolymer with deleted T; blue, highlighted characteristic poly-A stretch. c , Bar segment height indicates the mean ± s.d. occurrence of deletions comprising more than 1 bp in organoids exposed to pks + or pks Δ clbQ E. coli ( n = 3 clones). Dot position above the bottom of the corresponding bar segment (matching ID- pks motif, black; lacking ID- pks motif, grey) indicates the number of mutations for each clone. d , Transcriptional strand bias of T > N and C > N mutations in organoids exposed to pks + E. coli or pks Δ clbQ E. coli . Pink, C > N; blue, T > N; dark colour, transcribed strand; bright colour, untranscribed strand; mean ± s.d. number of events ( n = 3 clones). e , Transcriptional strand bias of the 96-trinucleotide SBS- pks mutational signature. Colour, transcribed strand; white, untranscribed strand. Full size image We then investigated whether the experimentally deduced SBS- pks and ID- pks signatures occur in human tumours, by interrogating WGS data from a Dutch collection of 3,668 solid cancer metastases 24 . The mutations acquired by a cancer cell at its primary site will be preserved even in metastases, so that these provide a view of the entire mutational history of a tumour. We first performed non-negative matrix factorization (NMF) on genome-wide mutation data obtained from 496 CRC metastases in this collection. Encouragingly, this unbiased approach identified an SBS signature that highly resembled SBS- pks (cosine similarity, 0.95; Extended Data Fig. 5a, b ). We then determined the contributions of SBS- pks and ID- pks to the mutations of each sample in the cohort. This analysis revealed that the two pks signatures were strongly enriched in CRC-derived metastases when compared to all other cancer types (Fisher’s exact test, P < 0.0001; Fig. 4a, b , Extended Data Table 1 ). With a cut-off contribution value of 0.05, 7.5% of CRC samples were enriched for SBS- pks , 8.8% for ID- pks and 6.25% for both SBS- pks and ID- pks ( Fig. 4c , Extended Data Table 1 ). As expected, the SBS- pks and ID- pks signatures were positively correlated in this metastasis data set ( R 2 = 0.46 (all samples); R 2 = 0.70 (CRC-only); Fig. 4c ), in line with their co-occurrence in our in vitro data set. The longer deletions at ID- pks sites were also found to co-occur with SBS- pks and ID- pks (Fig. 4e, f ). In addition, we evaluated the levels of the SBS- pks or ID- pks mutational signatures in an independent cohort, generated in the framework of the Genomics England 100,000 Genomes Project. This data set comprises WGS data from 2,208 CRC tumours, predominantly of primary origin. SBS- pks and ID- pks were enriched in 5.0% and 4.4% of patients, respectively, while 44 samples (2.0%) were high in both SBS- pks and ID- pks (Fig. 4d ). The relative contribution of both pks signatures correlated with an R 2 of 0.35 (Fig. 4d ). Fig. 4: SBS- pks and ID- pks mutational signatures are present in a subset of CRC samples from two independent cohorts. a , Top 20 out of 3,668 metastases from the HMF cohort, ranked by the fraction of SBSs attributed to SBS- pks . CRC metastases (orange) are enriched. b , Top 20 out of 3,668 metastases from the HMF cohort. Samples are ranked by the fraction of indels attributed to ID- pks . CRC metastases (in orange) are also enriched here. NET, neuroendocrine tumour. 
c , Scatterplot of the fraction of SBSs and indels attributed to SBS- pks and ID- pks in 3,668 metastases from the HMF cohort. Each dot represents one metastasis. Samples high for both SBS- pks and ID- pks (more than 5% contribution, dashed lines) are enriched in CRC (orange). SBS- pks and ID- pks are correlated ( R 2 = 0.46; only CRC, R 2 = 0.7). d , Scatterplot of SBS- pks and ID- pks contributions in 2,208 CRC tumour samples, predominantly of primary origin, from the Genomics England cohort. SBS- pks and ID- pks are correlated ( R 2 = 0.35). Each dot represents one primary tumour sample. Dashed lines delimit samples with high SBS- pks or ID- pks contribution (more than 5%). e , Scatterplot of SBS- pks and deletions longer than 1 bp with ID- pks pattern in the HMF cohort. f , Scatterplot of ID- pks and deletions longer than 1 bp with ID- pks pattern in the HMF cohort. a – f , Colours indicate tissue of origin. g , Exonic APC driver mutations found in the IntOGen collection matching the colibactin target SBS- pks or ID- pks motifs. h , Schematic representation of a driver mutation in APC causing a premature stop codon matching the SBS- pks motif, found in the IntOGen collection and in two independent patients from the HMF cohort with high SBS- pks and high ID- pks . Full size image Finally, we investigated to what extent the pks signatures can cause oncogenic mutations. To this end, we investigated the most common driver mutations found in seven cohorts of patients with CRC 25 for hits matching the extended SBS- pks or ID- pks target motifs (Fig. 3a, b ). This analysis revealed that 112 out of 4,712 CRC driver mutations (2.4%) matched the colibactin target motif (Supplementary Table 1 ). APC , the most commonly mutated gene in CRC, contained the highest number of mutations that matched the SBS- pks or ID- pks target sites, with 52 out of 983 driver mutations (5.3%) matching the motifs (Fig. 4g ). We then explored the mutations in the 31 SBS/ID- pks -high CRC metastases from the HMF cohort for putative driver mutations that matched the extended motif. In total, this approach detected 209 changes in protein-coding sequences (Supplementary Table 2 ). Remarkably, an identical APC driver mutation matching the SBS- pks motif was found in two independent donors (Fig. 4h ). A recent publication 26 identified mutational signatures occurring in healthy human colon crypts. The authors of that study note the co-occurrence of two mutational signatures in subsets of crypts from some of the subjects. These signatures were termed SBS-A and ID-A. The authors derived hierarchical lineages of the sequenced crypts, which allowed them to conclude that the unknown mutagenic agent was active only during early childhood. Notably, SBS-A and ID-A closely match SBS- pks and ID- pks , respectively. Our data imply that pks + E. coli is the mutagenic agent that causes the SBS-A and ID-A signatures observed in healthy crypts. We assessed whether the SBS- pks and ID- pks mutational signatures contributed early to the mutational load of metastatic samples from the Dutch cohort by evaluating their levels separately in clonal (pre-metastasis) or non-clonal (post-metastasis) mutations. The accumulation of SBS- pks and ID- pks at the primary tumour site or even earlier was substantiated by the abundant presence of SBS- pks in clonal mutations in the cohort (Extended Data Fig. 5c ). 
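The signature comparisons above (for example, the 0.95 cosine similarity between the NMF-extracted signature and SBS- pks , or the match between SBS-A/ID-A and the pks signatures) all rest on the cosine similarity between two mutational spectra. The following is a minimal sketch of that computation, not the authors' code: the study used the R package MutationalPatterns (function cos_sim_matrix), and the vectors below are hypothetical 96-channel SBS counts.

# Illustrative sketch of the cosine-similarity comparison used to match
# mutational signatures. The study performed this in R with
# MutationalPatterns; the 96-channel vectors here are hypothetical.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two mutational spectra;
    1.0 means identical relative profiles."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
sbs_pks = rng.random(96)                      # stand-in for SBS-pks
replicate = sbs_pks + 0.05 * rng.random(96)   # a noisy replicate profile
print(f"cosine similarity: {cosine_similarity(sbs_pks, replicate):.2f}")

Because the measure depends only on the direction of the spectra, not their magnitude, it compares the relative shape of two signatures even when the underlying mutation counts differ.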
In addition to CRCs, one head and neck-derived tumour and three urinary tract-derived tumours from this cohort also displayed a clear SBS- pks and ID- pks signature (Fig. 4c ). Both tissues have been described as sites of E. coli infection 27 , 28 , 29 . This rare occurrence of the pks signatures in non-CRC tumours was substantiated by a preprint report 30 of signatures that closely resembled SBS- pks and ID- pks in a patient with oral squamous cell carcinoma. The distinct motifs at sites of colibactin-induced mutations may serve as a starting point for deeper investigations into the underlying processes. There is increasing evidence that colibactin forms interstrand crosslinks between two adenosines 4 , 5 , 12 , and our data suggest that there is a distance of 3–4 bases between these adenosines. These crosslinks formed by a bulky DNA adduct could be resolved in different ways, including induction of DSBs, nucleotide excision repair or translesion synthesis, which in turn could result in various mutational outcomes. While our study identifies single-base substitutions and deletions as a mutational consequence, the underlying mechanisms will need to be elucidated in more detailed DNA-repair studies. In summary, we find that prolonged exposure of wild-type human organoids to genotoxic E. coli allows the extraction of a unique SBS and indel signature. As organoids do not model immune or inflammation effects or other microenvironmental factors, this provides evidence that colibactin directly causes mutations in host epithelial cells. The adenine-enriched target motif is consistent with the proposed mode of action of colibactin’s ‘double-warhead’ in attacking closely spaced adenine residues 4 , 5 , 12 . The pronounced sequence specificity reported here may inspire more detailed investigations into the interaction of colibactin with specific DNA contexts. As stated above, Stratton and colleagues 26 are likely to have described SBS- pks and ID- pks mutational signatures of the same aetiology in primary human colon crypts. This agrees with the notion that pks + E. coli- induced mutagenesis occurs in the healthy colon of individuals that harbour genotoxic E. coli strains 31 and that such individuals may be at an increased risk of developing CRC. The small number of pks signature-positive cases of urogenital and head-and-neck cancer suggests that pks + bacteria act beyond the colon. Notably, the presence of the pks island in another strain of E. coli , Nissle 1917, is closely linked to its probiotic effect 32 . This strain has been investigated for decades in relation to various disease indications 33 . Our data suggest that E. coli Nissle 1917 may induce the characteristic SBS/ID- pks mutational patterns described here. Future research should clarify whether this is the case in vitro, and in patients treated with pks + bacterial strains. This study implies that detection and removal of pks + E. coli , as well as re-evaluation of probiotic strains harbouring the pks island, could decrease the risk of cancer in a large group of individuals. Methods Human material and organoid cultures Ethical approval was obtained from the ethics committees of the University Medical Center Utrecht, Hartwig Medical Foundation and Genomics England. Written informed consent was obtained from patients. All experiments and analyses were performed in compliance with relevant ethical regulations. Organoid culture Clonal organoid lines were derived and cultured as described previously 16 , 17 . 
In brief, wild-type human intestinal organoids (clonal lines ASC-5a and ASC-6a, previously described 34 ) were cultured in domes of Cultrex Pathclear Reduced Growth Factor Basement Membrane Extract (BME) (3533-001, Amsbio) covered by medium containing Advanced DMEM/F12 (Gibco), 1× B27, 1× glutamax, 10 mmol/l HEPES, 100 U/ml penicillin-streptomycin (all Thermo-Fisher), 1.25 mM N -acetylcysteine, 10 μM nicotinamide, 10 μM p38 inhibitor SB202190 (all Sigma-Aldrich) and the following growth factors: 0.5 nM Wnt surrogate-Fc fusion protein, 2% noggin conditioned medium (both U-Protein Express), 20% Rspo1 conditioned medium (in-house), 50 ng/ml EGF (Peprotech), 0.5 μM A83-01, and 1 μM PGE2 (both Tocris). For derivation of clonal lines, cells were sorted by fluorescence-activated cell sorting (FACS) and grown at a density of 50 cells per μl in BME. The ROCK inhibitor Y-27632 (10 μM; Abmole, M1817) was added for the first week of growth. Upon reaching a size of >100 μm diameter, organoids were picked and transferred to one well per organoid. All organoid lines were regularly tested to rule out mycoplasma infection and authenticated using SNP profiling. No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Organoid bacteria co-culture The genotoxic pks + E. coli strain was previously isolated from a patient with CRC and isogenic pks Δ clbQ knock-out and pks Δ clbQ:clbQ recomplemented strains were generated from this strain 21 . Bacteria were initially cultured in Advanced DMEM (Gibco) supplemented with glutamax and HEPES to an optical density (OD) of 0.4. They were then microinjected into the lumens of organoids as previously described 22 , 35 . Bacteria were injected at a multiplicity of infection of 1 together with 0.05% (w/v) FastGreen dye (Sigma) to allow tracking of injected organoids. At this point, 5 μg/ml of the non-permeant antibiotic gentamicin was added to the medium to prevent overgrowth of bacteria outside the organoid lumen. Cell viability was assessed as follows: organoids were harvested after 1, 3 or 5 days (bacteria were removed by primocin treatment at day 3) of co-culture in cold DMEM (Gibco) and incubated in TrypLE Express (Gibco) at 37 °C for 5 min with repeated mechanical shearing. Single cells were resuspended in DMEM with added DAPI, incubated on ice for at least 15 min and assessed for viability on a BD FACS Canto. Cells positive for DAPI were considered dead, while cells maintaining DAPI exclusion were counted as viable. Bacterial growth kinetics were assessed by harvesting, organoid dissociation with 0.5% saponin for 10 min and re-plating of serial dilutions on LB plates. CFUs were quantified after overnight culture at 37 °C. E. coli were killed with 1× Primocin (InvivoGen) after 3 days of co-culture, after which organoids were left to recover for 4 days before being passaged. When the organoids reached a cystic stage again (typically after 2–3 weeks), the injection cycle was repeated. This procedure was repeated five times (three times for ASC clone 6-a and the clbQ recomplementation experiment in ASC clone 5-a) to reduce injection heterogeneity and ensure accumulation of enough mutations for reliable signature detection. Whole-mount organoid immunofluorescence, DNA damage quantification and scanning electron microscopy Organoids co-cultured with pks + or pks Δ clbQ E. 
coli 21 were collected in cell recovery solution (Corning) and incubated at 4 °C for 30 min with regular shaking in order to free them from BME. For FANCD2 staining, organoids were pre-permeabilized with 0.2% Triton-X (Sigma) for 10 min at room temperature. Then, organoids were fixed in 4% formalin overnight at 4 °C. Subsequently, organoids were permeabilized with 0.5% Triton-X (Sigma), 2% donkey serum (BioRad) in PBS for 30 min at 4 °C and blocked with 0.1% Tween-20 (Sigma) and 2% donkey serum in PBS for 15 min at room temperature. Organoids were incubated with mouse anti-γH2AX (Millipore; clone JBW301; 1:1,000 dilution) or rabbit anti-FANCD2 (affinity purified as described 36 ; 1 mg/ml) primary antibody overnight at 4 °C. Then, organoids were washed four times with PBS and incubated with either secondary goat anti-mouse AF-647 (Thermo Fisher, catalogue number A-21235, 1:500 dilution) or goat anti-rabbit AF-488 (Life Technologies, catalogue number A21206, 1:500 dilution) antibodies, respectively, for 3 h at room temperature in the dark and washed again with PBS. Organoids were imaged using an SP8 confocal microscope (Leica). Fluorescent microscopic images of γH2AX foci were quantified as follows: nuclei were classified as containing either no foci or one or more foci. The fraction of nuclei containing foci over all nuclei is displayed as one datapoint per organoid. Organoids co-cultured with bacteria for 24 h were harvested as described above and processed for scanning electron microscopy as previously described 35 . WGS and read alignment For WGS, clonal and subclonal cultures were generated for each condition. DNA was isolated from these clonal cultures using the DNeasy Blood and Tissue Kit (Qiagen) according to the manufacturer’s instructions. Illumina DNA libraries were prepared using 50 ng of genomic DNA isolated from the (sub-)clonal cultures isolated using a TruSeq DNA Nano kit. The parental ASC 5a clone was sequenced on a HiSeq XTEN instrument at 30× base coverage. All other samples were sequenced using an Illumina Novaseq 6000 with 30× base coverage. Reads were mapped against the human reference genome version GRCh37 using Burrows–Wheeler Aligner 37 (BWA) version v0.7.5 with settings bwa mem -c 100 -M. Sequences were marked for duplicates using Sambamba (v0.4.732) and realigned using GATK IndelRealigner (GATK version 3.4-46). The full description and source code of the pipeline is available at . Mutation calling and filtration Mutations were called using GATK Haplotypecaller (GATK version 3.4-46) and GATK Queue to produce a multi-sample Vcf file 20 . 
The quality of the variants was evaluated using GATK VariantFiltration v3.4-46 using the following settings: -snpFilterName SNP_LowQualityDepth -snpFilterExpression “QD < 2.0” -snpFilterName SNP_MappingQuality -snpFilterExpression “MQ < 40.0” -snpFilterName SNP_StrandBias -snpFilterExpression “FS > 60.0” -snpFilterName SNP_HaplotypeScoreHigh -snpFilterExpression “HaplotypeScore > 13.0” -snpFilterName SNP_MQRankSumLow -snpFilterExpression “MQRankSum < -12.5” -snpFilterName SNP_ReadPosRankSumLow -snpFilterExpression “ReadPosRankSum < -8.0” -snpFilterName SNP_HardToValidate -snpFilterExpression “MQ0 >= 4 && ((MQ0 / (1.0 * DP)) > 0.1)” -snpFilterName SNP_LowCoverage -snpFilterExpression “DP < 5” -snpFilterName SNP_VeryLowQual -snpFilterExpression “QUAL < 30” -snpFilterName SNP_LowQual -snpFilterExpression “QUAL >= 30.0 && QUAL < 50.0 ” -snpFilterName SNP_SOR -snpFilterExpression “SOR > 4.0” -cluster 3 -window 10 -indelType INDEL -indelType MIXED -indelFilterName INDEL_LowQualityDepth -indelFilterExpression “QD < 2.0” -indelFilterName INDEL_StrandBias -indelFilterExpression “FS > 200.0” -indelFilterName INDEL_ReadPosRankSumLow -indelFilterExpression “ReadPosRankSum < -20.0” -indelFilterName INDEL_HardToValidate -indelFilterExpression “MQ0 >= 4 && ((MQ0 / (1.0 * DP)) > 0.1)” -indelFilterName INDEL_LowCoverage -indelFilterExpression “DP < 5” -indelFilterName INDEL_VeryLowQual -indelFilterExpression “QUAL < 30.0” -indelFilterName INDEL_LowQual -indelFilterExpression “QUAL >= 30.0 && QUAL < 50.0” -indelFilterName INDEL_SOR -indelFilterExpression “SOR > 10.0. Somatic SBS and indel filtering To obtain high-confidence catalogues of mutations induced during culture, we applied extensive filtering steps as previously described 20 . First, only variants obtained by GATK VariantFiltration with a GATK phred-scaled quality score of ≥100 for SBSs and ≥250 for indels were selected. Subsequently, we considered only variants with at least 20× read coverage in control and sample. We additionally filtered base substitutions with a GATK genotype score (GQ) lower than 99 or 10 in WGS( t n ) or WGS( t 0 ), respectively (Fig. 2a ). Indels were filtered when GQ scores were higher than 60 WGS( t n ) or 10 in WGS( t 0 ). All variants were filtered against the Single Nucleotide Polymorphism Database v137.b3730, from which SNPs present in the COSMICv76 database were excluded. To exclude recurrent sequencing artefacts, we excluded all variants that were variable in at least three individuals in a panel of bulk-sequenced mesenchymal stromal cells 38 . Next, all variants present at the start of co-culture (WGS( t 0 ) in Fig. 2a ) were filtered from those detected in the clonal pks + E. coli , pks Δ clbQ E. coli co-cultures (WGS( t n ) in Fig. 2a ) or dye culture. Indels were selected only when no called variants in WGS( t 0 ) were present within 100 bp of the indel and if not shared in WGS( t 0 ). In addition, both indels and SNVs were filtered for the additional parameters: mapping quality (MQ) of at least 60 and a variant allele frequency (VAF) of 0.3 or higher to exclude variants obtained during the clonal step. Finally, all multi-allelic variants were removed. Scripts used for filtering SBSs (SNVFIv1.2) and indels (INDELFIv1.5) can be found at . Mutational profile analysis To extract mutational signatures from the high-quality mutational catalogues after filtering, we used the R package MutationalPatterns to obtain 96-trinucleotide SBS and indel subcategory counts for each clonally cultured sample 39 (Extended Data Fig. 
1a, d ). To identify the additional mutational effects induced by pks + E. coli (SBS and ID), we pooled mutation numbers for each culture condition ( pks Δ clbQ and pks + ), and subtracted the mutational counts of pks Δ clbQ from pks + (Fig. 2c, e , Extended Data Fig. 2b, d ). For the clones exposed to pks Δ clbQ:clbQ E. coli , we subtracted relative levels of the pks Δ clbQ mutations in the same organoid line. This enabled us to correct for the background of mutations induced by pks Δ clbQ E. coli and the injection dye. To determine the transcriptional strand bias of mutations induced during pks + E. coli exposure, we selected all SBSs within gene bodies and checked whether the mutated C or T was located on the transcribed or non-transcribed strand. We defined the transcribed area of the genome as all protein-coding genes based on Ensembl v75 (GCRh37) 40 and included introns and untranslated regions. The extended sequence context around mutation sites was analysed and displayed using an in-house script (‘4_extended_sequence_context.R’). Two-bit sequence motifs were generated using the R package ggseqlogo. Cosine similarities between indel and SBS profiles were calculated using the function ‘cos_sim_matrix’ from the MutationalPatterns package. Analysis of clonal mutations in the SBS/ID- pks -high CRC tumours From the 31 SBS/ID- pks -high CRC tumours, clonal and subclonal SBSs were defined to contain a purity- or ploidy-adjusted allele-fraction (PURPLE_AF) of >0.4 or <0.2, respectively 41 . Signature re-fitting on both fractions was performed with the same signatures as described above for the initial re-fitting of the HMF cohort. Analysis of >1-bp deletions matching pks -motif For each >1-bp T-deletion observed in organoid clones or the HMF cohort, the sequence of the deleted bases and 5-bp flanking regions was retrieved using the R function getSeq from the package BSgenome. Retrieved sequences were examined for the presence of a 5-bp motif matching the pks -motifs identified (Extended Data Fig. 4a ): AAAAT, AAATT, AATTT or ATTTT. Sequences containing one or more matches with the motifs were marked as positive for containing the motif. NMF extraction of signatures from the HMF CRC cohort To identify SBS- pks in an unbiased manner, signature extraction was performed on all 496 samples from colorectal primary tumours present in the HMF metastatic cancer database 24 . All variants containing the ‘PASS’ flag were used for analysis. Signature extraction was performed using non-negative matrix factorization (NMF), using the R package MutationalPatterns, function ‘extract_signatures’ with the following settings: rank = 17, nrun = 200. The cosine similarity between the extracted signature and SBS- pks , as well as the COSMIC SigProfiler signatures, was determined as described above (Extended Data Fig. 5a, b ). Signature re-fitting on HMF cohort Mutation catalogues containing somatic variants processed as described 24 were obtained from the HMF. All variants containing the ‘PASS’ flag in the HMF data set were selected. Single-base trinucleotide and indel subcategory counts were extracted using the R package MutationalPatterns and in-house-written R scripts, respectively. To determine the contribution of SBS- pks and ID- pks to these mutational catalogues, we re-fitted the COSMIC SigProfiler mutational SBS and ID signatures v3 ( ), in combination with SBS- pks and ID- pks , to the mutational catalogues using the MutationalPatterns function ‘fit_to_signatures’. 
Signatures marked as possible sequencing artefacts were excluded from the re-fitting. Cut-off values for high SBS- pks and ID- pks levels were manually set at 5% each. The numbers of SBS/ID- pks -positive samples were compared between CRC and other cancer types by Fisher’s exact test (two-tailed). Mutation calling and filtration (Genomics England cohort) As part of the Genomics England 100,000 Genomes Project (main programme version 7) 42 standard pipeline, 2,208 CRC genomes were sequenced on the Illumina HiSeq X platform. Reads were aligned to the human genome (GRCh38) using the Illumina iSAAC aligner 03.16.02.1 43 . Mutations were called using Strelka and filtered in accordance with the HMF data set 24 . Before examining somatic mutations for the pks mutational signature, mutation calls were first subjected to additional filtering steps similar to those previously described 24 . All calls present in the matched normal sample were removed. The calls were split into high and low confidence genomic regions according to lists available at ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/release/NA12878_HG001/NISTv3.3.1/GRCh38/. Somatic mutation calls in high-confidence regions were passed with a somatic score (QSI or QSS) of 10, while calls in low confidence regions were passed with a score of 20. A pool of 200 normal samples was constructed, and any calls present in three or more normal samples were removed. Any groups of single-nucleotide variants within 2 bp were considered to be miscalled multiple nucleotide variants and were removed. Finally, all calls had to pass the Strelka ‘PASS’ filter. Mutational signatures were then analysed as described above for the HMF cohort. Detection of pks -signature mutations in protein-coding regions Mutations were extracted from the 31 SBS/ID- pks -high CRC samples. Exonic regions were defined as all autosomal exonic regions reported in Ensembl v75 (GCRh37) 40 . All extracted CRC mutations were filtered for localization in exonic regions using the Bioconductor packages GenomicRanges 44 and BSgenome. In a second filtering step, the sequence context of mutations was required to match the following criteria. For SBS- pks : T > N mutation, A or T directly upstream and downstream, A three bases upstream. For ID- pks : single T deletion, A directly upstream, a stretch of an A homopolymer followed by a T polymer with combined length of at least five nucleotides, but no stretch exceeding ten nucleotides in length. Mutations passing both filter steps were further filtered for presence of a predicted ‘high’ or ‘moderate’ score in the transcript with the highest impact score according to the reported SnpEff annotation. To assess the mutagenic effect of pks , we obtained all mutations from the 50 highest mutated genes in CRC from IntOGen 25 , release 2019.11.12. Mutations were filtered to match the pks motif according to the sequence criteria stated above apart from the predicted impact score. Mutations in APC were plotted using the R package rtrackViewer, using only exonic mutations. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Whole-genome sequence data have been deposited in the European Genome–Phenome Archive ( ); accession number EGAS00001003934 . 
The data used from the Hartwig Medical Foundation and Genomics England databases consist of patient-level somatic variant data (annotated variant call data) and are considered privacy sensitive and available through access-controlled mechanisms. Patient-level somatic variant and clinical data were obtained from the Hartwig Medical Foundation under data request number DR-084. Somatic variant and clinical data are freely available for academic use from the Hartwig Medical Foundation through standardized procedures. Privacy and publication policies, including co-authorship policies, can be retrieved from: . Data request forms can be downloaded from . To gain access to the data, this data request form should be emailed to info@hartwigmedicalfoundation.nl, upon which it will be evaluated within six weeks by the HMF Scientific Council and an independent Data Access Board. When access is granted, the requested data become available through a download link provided by HMF. Somatic variant data from the Genomics England data set were analysed within the Genomics England Research Environment secure data portal, under Research Registry project code RR87, and exported from the Research Environment following data transfer request 1000000003652 on 3 December 2019. The Genomics England data set can be accessed by joining the community of academic and clinical scientist via the Genomics England Clinical Interpretation Partnership (GeCIP), . To join a GeCIP domain, the following steps have to be taken: 1. Your institution has to sign the GeCIP Participation Agreement, which outlines the key principles that members of each institution must adhere to, including our Intellectual Property and Publication Policy. 2. Submit your application using the relevant form found at the bottom of the page ( ). 3. The domain lead will review your application, and your institution will verify your identity for Genomics England and communicate confirmation directly to Genomics England. 4. Your user account will be created. 5. You will be sent an email containing a link to complete Information Governance training and sign the GeCIP rules ( ). Completing the training and signing the GeCIP Rules are requirements for you to access the data. After you have completed the training and signed the rules, you will need to wait for your access to the Research Environment to be granted. 6. This will generally take up to one working day. You will then receive an email letting you know your account has been given access to the environment, and instructions for logging in (for more detail, see: ). Details of the data access agreement can be retrieved from . All requests will be evaluated by the Genomics England Access Review Committee taking into consideration patient data protection, compliance with legal and regulatory requirements, resource availability and facilitation of high-quality research. All analysis of the data must take place within the Genomics England Research Environment secure data portal, and exported following approval of a data transfer request. Regarding co-authorship, all publications using data generated as part of the Genomics England 100,000 Genomes Project must include the Genomics England Research Consortium as co-authors. The full publication policy is available at . All other data supporting the findings of this study are available from the corresponding author upon request. Code availability All analysis scripts are available at .
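To illustrate the motif filter described in Methods ("Analysis of >1-bp deletions matching pks-motif"), here is a minimal sketch of the check that a deletion plus its 5-bp flanks contains one of the four pks motifs. It is illustrative only: the study performed this step in R with BSgenome and GenomicRanges against GRCh37, whereas the function name and sequences below are hypothetical.

# Sketch of the pks-motif check described in Methods: a deletion is
# marked positive if the deleted bases plus 5-bp flanks contain one of
# the four A/T motifs. Sequences here are hypothetical; the study used
# R (BSgenome/GenomicRanges) against the GRCh37 reference.
PKS_MOTIFS = ("AAAAT", "AAATT", "AATTT", "ATTTT")

def matches_pks_motif(deleted_seq, flank5, flank3):
    """True if any 5-bp pks motif occurs in the deletion plus flanks."""
    window = (flank5 + deleted_seq + flank3).upper()
    return any(motif in window for motif in PKS_MOTIFS)

print(matches_pks_motif("TT", "GCAAA", "TTGCA"))   # True: contains AAATT
print(matches_pks_motif("TT", "GCGCG", "CGCGC"))   # False: no motif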
A common type of bacteria found in our guts could contribute to bowel cancer, according to research funded by a £20 million Cancer Research UK Grand Challenge award and published in Nature today. Scientists in The Netherlands, the UK and USA have shown that a toxin released by a strain of E. coli causes unique patterns, or 'fingerprints', of DNA damage to the cells lining the gut. The fingerprints were also seen in bowel cancer tumours, showing for the first time a direct link between the bacterial toxin and the genetic changes that drive cancer development. The team suggests that detecting this specific DNA damage in the cells lining the gut could one day allow doctors to identify people at higher risk of the disease, and could be used alongside current bowel cancer screening tests. Other bacterial toxins from gut bacteria might have similar effects, and the hunt for them is now on as researchers seek to determine whether this mechanism of DNA damage is widespread. There are around 42,000 new bowel cancer cases in the UK every year, and the disease remains the second most common cause of cancer death there. Understanding the early triggers that could lead to bowel cancer may help doctors prevent its development and detect it at its earliest stage, when treatment is most likely to be successful. This has led scientists to investigate the role that the microbiome—trillions of bacteria, viruses, fungi and other single-celled organisms—plays in the development of bowel cancer.
Cayetano Pleguezuelos-Manzano, Jens Puschhof and Axel Rosendahl Huber explain their research on genotoxic E. coli bacteria. Credit: DEMCON | nymus 3D and Melanie Fremery, ©Hubrecht Institute
Professor Hans Clevers and his team at the Hubrecht Institute in The Netherlands focused on a strain of E. coli that produces a toxin called colibactin and is more often present in the stool samples of people with bowel cancer than in those of healthy people. Because colibactin can cause DNA damage in cells grown in the lab, they thought the toxin might be doing the same to cells lining the gut. The team used human intestinal organoids, miniature replicas of the gut grown in the lab, and exposed them to colibactin-producing E. coli. They analysed the DNA sequence of the gut cells in the organoids after 5 months and found about double the DNA damage in them, compared with organoids exposed to 'regular' E. coli that didn't produce colibactin. The researchers also found that the DNA damage caused by colibactin followed two very specific patterns—like fingerprints—which were unique to the toxin. To determine whether the DNA damage caused by the bacterium played a role in bowel cancer, the researchers then analysed the DNA sequences of more than 5,500 tumour samples from the UK and Netherlands, with the help of Dr. Henry Wood and Professor Philip Quirke from the University of Leeds. First, they checked for the two colibactin DNA damage fingerprints in over 3,600 Dutch samples of various cancer types. The fingerprints were present in multiple tumours, and much more often in bowel cancers than in other cancer types.
Illustration of colibactin binding to a specific DNA sequence. Credit: DEMCON | nymus3D, ©Hubrecht Institute
The researchers then focused their investigation on bowel cancer tumours specifically, and analysed over 2,000 bowel cancer samples from the UK, collected as part of the 100,000 Genomes Project run by Genomics England. Among these samples, the colibactin fingerprints were present in 4-5% of patients. This suggests that colibactin-producing E. 
coli may contribute to 1 in 20 bowel cancer cases in the UK. It will be up to further studies to shed light on just how much of a role the toxin could play in these cases, and what other components of the microbiome may be involved in the early stages of bowel cancer. Professor Hans Clevers, Grand Challenge co-investigator at the Hubrecht Institute, said: "Things like tobacco or UV light are known to cause specific patterns of DNA damage, and these fingerprints can tell us a lot about past exposures that may have caused cancer to start. But this is the first time we've seen such a distinctive pattern of DNA damage in bowel cancer, which has been caused by a bacterium that lives in our gut." Further down the line, the researchers say that looking for DNA damage fingerprints like the ones associated with colibactin in the cells of the gut lining could be used to identify those who are at a greater risk of developing the disease. Professor Philip Quirke, Grand Challenge co-investigator at the University of Leeds, said: "Our goal is to understand the causes of bowel cancer, so discovering the role of colibactin represents an important step. As a Grand Challenge team, we are now looking at other bacteria and their toxins associated with bowel cancer, and we hope to identify more DNA damage fingerprints to paint a better picture of risk factors. We will then need to work out how we can reduce the presence of high-risk bacteria in the gut. But this is all in the future, so for now people should continue to eat a healthy diet and participate in bowel cancer screening." John Barnes, patient advocate for Grand Challenge said: "As a cancer survivor, I don't want others to go through what I've gone through. Catching bowel cancer at an earlier stage while it's still treatable has the potential to save thousands of people's lives. This brilliant research gives me hope that people may not have to suffer from bowel cancer in the future." Nicola Smith, senior health information manager at Cancer Research UK, said: "The more doctors understand about how bowel cancer develops, the better they will be at detecting it and helping people reduce their risk. "But there are already things that people can do right now to help reduce their risk of bowel cancer. Not smoking, keeping a healthy weight, eating a diet high in fibre and low in red and processed meat will all help. And for those who are eligible, participating in bowel screening can help to detect the disease at an early stage." This discovery of a direct link between a bacterium and bowel cancer tumours is the first major outcome of a £20 million Grand Challenge project striving to understand how the microbiome impacts on cancer risk, development and treatment.
10.1038/s41586-020-2080-8
Computer
Next-generation robotic cockroach can explore under water environments
Yufeng Chen et al. Controllable water surface to underwater transition through electrowetting in a hybrid terrestrial-aquatic microrobot, Nature Communications (2018). DOI: 10.1038/s41467-018-04855-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-04855-9
https://techxplore.com/news/2018-07-next-generation-robotic-cockroach-explore-environments.html
Abstract Several animal species demonstrate remarkable locomotive capabilities on land, on water, and under water. A hybrid terrestrial-aquatic robot with similar capabilities requires multimodal locomotive strategies that reconcile the constraints imposed by the different environments. Here we report the development of a 1.6 g quadrupedal microrobot that can walk on land, swim on water, and transition between the two. This robot utilizes a combination of surface tension and buoyancy to support its weight and generates differential drag using passive flaps to swim forward and turn. Electrowetting is used to break the water surface and transition into water by reducing the contact angle, and subsequently inducing spontaneous wetting. Finally, several design modifications help the robot overcome surface tension and climb a modest incline to transition back onto land. Our results show that microrobots can demonstrate unique locomotive capabilities by leveraging their small size, mesoscale fabrication methods, and surface effects. Introduction Many animal species 1 , 2 , 3 , 4 , 5 exhibit multimodal locomotive capabilities in terrestrial and aquatic environments to evade predators or search for prey. A few arachnids 3 and insects 4 can move on the surface of water by exploiting static surface tension. The surface tension force significantly exceeds (over 10 times) their body weight, enabling rapid locomotion and even jumping without breaking the water surface 6 . Remarkably some insects, such as diving flies 4 and diving beetles 7 , 8 , can also generate enough force to spontaneously break the water surface to swim, feed, and lay eggs underwater. The ability to move on the water surface, controllably transition into water, and swim underwater enable these creatures to live in complex environments (e.g., high salinity environment such as the Mono Lake 4 ) that most animals cannot survive. Robots that can traverse complex terrains, such as hybrid terrestrial-aquatic environments, are suitable for diverse applications in environmental monitoring and the exploration of confined spaces. Taking inspiration from nature, many robotic prototypes 9 , 10 , 11 , 12 have been developed for terrestrial-aquatic locomotion. Most of these amphibious robots, however, weigh over 100 g and cannot move on the water surface due to their large weight-to-size ratio 9 , 10 , 11 , 12 . Microrobots (mass<20 g, length<15 cm) have a smaller weight-to-size ratio, and they can leverage surface effects, such as electrostatics or surface tension, to perch on compliant surfaces 13 or move on the surface of water 14 , 15 , 16 , 17 , 18 , 19 . This confers several potential advantages; for example, microrobots can avoid submerged obstacles by returning to the water surface. Furthermore, microrobots experience smaller drag on the water surface due to reduced wetted area, which leads to higher locomotion speed compared to swimming underwater. These hybrid locomotion capabilities could potentially allow microrobots to explore diverse environments that are inaccessible to larger robots. Previous work 14 , 15 , 16 , 17 , 18 , 19 , 20 on water strider-inspired microrobots leveraged insights from studies of biological water striders 21 . These robots weigh 6–20 g, are 8–15 cm long, and use hydrophobic wires for support on the water surface. These devices were used to investigate the biomechanics of water striders 22 , 23 , study the associated fluid mechanics 21 , and enable microrobot locomotion on the water surface. 
Actual water striders are ~1 cm long and weigh 4.5 mg, and they are over 10 times smaller and 1000 times lighter than these microrobots. Since the surface tension force scales linearly with the leg contact length and mass scales with length cubed, these robots need to use supporting legs that are substantially longer than their bodies. These supporting legs create challenges for locomotion in other environments, such as on land or underwater. In this study, we develop a 1.6 g quadrupedal microrobot that is capable of walking on land, moving on the surface of water, and transitioning between land, the water surface, and aquatic environments (Supplementary Movie 1 ). We address two challenges that are unexamined in previous studies: gait design for multimodal locomotion 24 in terrestrial and aquatic environments, and strategies for transitions between these environments. First, we develop gaits for locomotion on land and the water surface utilizing a quadrupedal robot with two independently controlled degrees-of-freedom (DOFs) in each leg. Second, we design “feet” that utilize both surface tension and surface tension induced buoyancy to generate the necessary supporting force without inhibiting terrestrial locomotion. The “feet” utilize electrowetting 25 , 26 , 27 to break the water surface. Electrowetting refers to the changing of the liquid to solid surface contact angle in response to an applied voltage, and it is commonly found in microfluidics or electronic paper display applications 26 . We further examine the influence of surface tension on the robot during underwater-to-land transitions. Design changes to the robot’s legs and transmission (compared to a previous version 28 ) allow it to overcome a force that is twice its body weight and break the water surface to transition back onto land. In summary, this work develops multimodal strategies for locomotion in terrestrial and aquatic environments, describes novel mesoscale devices for water surface to underwater transitions, and analyzes the influence of surface tension on microrobot aquatic to terrestrial transitions. These studies culminate in the first terrestrial-aquatic microrobot that adapts to complex environments, representing advances in mesoscale fabrication and microrobot locomotive capabilities. Results Robot design and demonstration We base our robot design on the Harvard Ambulatory MicroRobot (HAMR), which is a 1.43 g quadrupedal robot with eight independently actuated DOFs 29 . Two piezoelectric actuators control the swing (fore/aft) and lift (vertical) motion of each leg. The robot is fabricated based on the PC-MEMS process 28 , 30 and the robot transmissions are made of compliant 25 µm polyimide flexures (Kapton, Dupont). In previous studies 28 , 29 , a number of walking gaits such as the trot, pronk, and jump are shown to be capable of high speed locomotion on land. Several design modifications are implemented (Fig. 1a ) to enable locomotion on the water surface, controllable sinking, and transitions from underwater to land. The legs are equipped with passive, unidirectional flaps to facilitate swimming (Fig. 1b ), and with electrowetting pads (EWP-Fig. 1c ) to generate surface tension and buoyant forces to support the robot’s weight on the water surface. To break the water surface and transition into water, the EWPs utilize electrowetting to modify surface wettability. In addition, design modifications are made to the robot’s chassis and circuit boards to reduce the volume of air trapped during sinking. 
To avoid shorting underwater, the circuitry is coated in ~10 µm of Parylene C. Finally, the robot transmissions are manually stiffened by approximately a factor of two to improve vehicle payload, which allows a submerged robot to break the water surface and transition back to land. Fig. 1 Design of a hybrid terrestrial-aquatic microrobot and its electrowetting pads. a A quadrupedal, 1.6 g, 4 cm × 2 cm × 2 cm hybrid terrestrial-aquatic microrobot. The robot is powered by eight piezoelectric actuators and each leg has two independent degrees-of-freedom. Each robot leg consists of an electrowetting pad (EWP) and two passive flaps. b Two passive flaps are connected to the central rigid support via compliant polyimide flexures. These passive flaps retract under drag forces opposing the robot’s heading but remain open under thrust forces in the same direction as the robot’s heading. c Perspective and front views of an EWP on the water surface. The EWP supports the robot weight via surface tension effects and the flaps paddle underwater to generate thrust forces. Scale bars ( b – c ), 5 mm. This robot can walk on level ground, transition from ground onto the water surface, swim on the water surface to evade underwater obstacles, sink into water by actuating its EWPs, walk underwater, and transition back onto land by climbing an incline (Fig. 2a , and Supplementary Movie 1 ). Figure 2b shows a corresponding experimental demonstration of robot locomotion. The robot moves at a speed of 7 cm s −1 using a 10 Hz trot gait on level ground. To walk from land onto the water surface (Fig. 2c , and Supplementary Movie 2 ), the robot walks down a 7° incline using a trot gait with a 1 Hz stride frequency to avoid breaking the water surface in this process. Once the robot is afloat, it swims (Fig. 2d , and Supplementary Movie 3 ) at a speed of 2.8 cm s −1 using a 5 Hz swimming gait. To dive into water, the actuators are switched off and a 600 V signal is applied to the EWPs. The locally induced electric field modifies the surface wettability, reduces the surface tension force, and causes the robot to sink into water (Supplementary Movie 4 ). Once the robot sinks to the bottom, it can walk underwater (Fig. 2e , Supplementary Movie 5 ) using a trot gait at stride frequencies up to 4 Hz. To transition back onto land, the robot climbs an incline of up to 6° and gradually moves through the water-air interface (Fig. 2f, g , and Supplementary Movie 6 ). Further, the robot can demonstrate turning on land, on the water surface, and underwater to avoid obstacles. In the following sections, we describe detailed results on robot swimming and transitions between land, the water surface, and underwater environments. Fig. 2 Robot demonstration. a An illustration of robot locomotion. The robot can walk on level ground, swim on the water surface, dive into water, walk underwater, and make transitions between ground, the water surface, and the underwater environment. b Top view composite image of the robot demonstrating the hybrid locomotion described in a . Scale bar, 5 cm. c Side view of the robot walking down an incline and transitioning from land to the water surface. d The robot swims on the water surface. e The robot climbs an incline when it is fully submerged in water. f The robot gradually emerges from the air–water interface. g The robot completely exits water. Scale bars ( c – g ), 1 cm.
In c – g , two drops of blue food coloring are added to deionized water to enhance the color of water in side view images. Floating and controllable sinking through electrowetting One of the major challenges in developing a robot capable of moving on the water surface is to support the robot’s weight. Previously developed water strider-inspired robots 5 , 16 , 18 are 10 times larger and 100–1000 times heavier than natural water striders, and they rely on multiple, non-moving legs to support themselves on the water surface. Such a design limits these robots’ ability to move through cluttered environments and to traverse other types of terrains. Here, we develop a novel design that substantially reduces leg length, enables controllable sinking through electrowetting, and allows for multimodal locomotion on land, on the water surface, and underwater. An EWP (Fig. 3a ) is installed on each leg of HAMR. The EWP is ~1 cm in diameter, and it is made of a folded 5 µm thick copper sheet coated with 15 µm of Parylene (see Methods). This hydrophobic, dielectric coating insulates the copper from water. Unlike previous water surface supporting devices that only utilize surface tension, our device relies on surface tension and surface tension induced buoyancy. The maximum net upward force generated by our device is given by: $$F = - \gamma L\cos \theta + \rho _{\mathrm{w}}gAh_{\mathrm{w}}$$ (1) where γ is the water surface tension coefficient, L is the net contact length, θ is the contact angle between Parylene C and water, ρ w is the water density, g is the gravitational acceleration, A is the EWP’s flat area, and h w is the maximum deformation of the water surface before breaking. The value of h w relates to the contact angle between the EWP and the water surface, and consequently the buoyancy term is dependent on the surface tension. The dependence of h w on contact angle will be specified in equation ( 4 ), and the values of the constants are given in Supplementary Table 1 . For our EWP design, equation ( 1 ) estimates that surface tension contributes ~25% of the net upward force, and surface tension induced buoyancy accounts for the rest. We note that the buoyancy contribution becomes even more important than surface tension in heavier (>1 g) water striding robots because contact area grows faster than contact length as robot size increases. Our robot weighs 1.65 g, and it can carry 1.44 g of additional payload on the water surface. This additional payload allows the robot to paddle its legs (up to 10 Hz) without breaking the water surface (Supplementary Movie 4 ), which is crucial for robot locomotion. Fig. 3 Electrowetting pad and controllable transition through the air-water interface. a Fabrication of an EWP. An EWP is laser machined from a 5 µm copper sheet, folded manually, wired, and then coated with 15 µm Parylene. b Modification of contact angle through electrowetting. When a 600 V signal is sent to the EWP, the contact angle between the EWP’s vertical sides and the water surface decreases, which reduces the surface tension force. c Spontaneous wetting of the EWP’s charged horizontal surface. The increase of surface wettability causes water to flow onto the EWP’s upper surface, consequently sinking the robot. d Composite image of a robot sinking into water when all four EWPs are actuated with a 600 V signal. Scale bars ( a – d ), 5 mm. e Experimental characterization of the maximum upward force generated by an EWP at different voltages.
Due to the change of contact angle and spontaneous wetting, the net upward force decreases as the input voltage increases. This device further enables controllable and repeatable transitions through the water surface. We define transition controllability as the robot’s ability to dive into water at a desired location and time. This form of controllability is absent in a previous study 24 that demonstrates transition by coating a microrobot with a surfactant. Furthermore, we are only concerned with sinking in shallow (<15 cm) and undisturbed water. Under these conditions and without control of the robot’s pose during sinking, the robot always lands on its feet because its center of mass is lower than its geometric center. The EWPs initiate sinking with the electrowetting process—the modification of a surface’s wetting properties under an applied electric field. When a voltage is applied to a conductive surface coated with a dielectric layer, there is a reduction of the contact angle between an electrolyte and the solid surface. This reduction of contact angle leads to two effects that enable sinking: surface tension reduction and spontaneous wetting. First, the electrowetting process reduces surface tension by reducing the contact angle between the EWP’s vertical walls and the meniscus surface (Fig. 3b ). When a 600 V signal is sent to the EWP, an electric field perpendicular to the meniscus surface (parallel to the free water surface) is generated, and it leads to a change of the contact angle governed by: $$\cos \theta = \cos \theta _{\mathrm{N}} + \frac{{{\it{\epsilon }}_0{\it{\epsilon }}_{\mathrm{l}}}}{{2\gamma d_{\mathrm{H}}}}V^2.$$ (2) Here \({\it{\epsilon }}_0\) is the permittivity of free space, \({\it{\epsilon }}_{\mathrm{l}}\) is the relative permittivity of the dielectric coating, d H is the dielectric coating thickness, θ N is the nominal contact angle, and V is the applied voltage. This reduction of contact angle reduces the upward surface tension force governed by the first term of equation ( 1 ). According to equation ( 2 ), a hydrophobic coating ( θ N > 90°) can become hydrophilic ( θ < 90°) under a large voltage input, which changes the weight-bearing surface tension force to a downward pulling force. The EWP’s vertical walls are important for reducing surface tension because they lower the required operating voltage and mitigate the problem of dielectric breakdown. Without the vertical walls, the fringing electric fields generated by the EWP’s horizontal surface are much weaker. This requires a higher input voltage to achieve a similar reduction in contact angle, and can potentially cause dielectric breakdown of the insulating Parylene coating. Compared to a flat foot pad, the EWP’s vertical walls strengthen the electric field between the device and the water surface meniscus (Fig. 3b ) under the same input voltage. The conflicting relationship between contact angle reduction and dielectric breakdown is illustrated in Supplementary Figure 1a . The quadratic curve shows the required voltage that achieves a 100° contact angle reduction as a function of coating thickness, and the straight line shows the maximum EWP operating voltage before dielectric breakdown. The intersection of these lines predicts the minimum required coating thickness. To account for coating inhomogeneity of the fabrication process and the contact angle saturation effect, we choose a coating thickness of 15 µm and an operating voltage of 600 V.
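To make this model concrete, the short sketch below evaluates the electrowetting relation of equation ( 2 ) together with the force model of equation ( 1 ), using the meniscus relation for h w that is formalized in equation ( 4 ) below. All material constants and pad dimensions here are illustrative assumptions (the paper's actual values are listed in its Supplementary Table 1 ), and contact angle saturation is handled only crudely by clipping, so the printed numbers should be read qualitatively rather than as the paper's predictions.

```python
import numpy as np

# Illustrative constants; the paper's exact values are in its Supplementary
# Table 1, so the numbers below are placeholder assumptions.
GAMMA = 0.072      # N m^-1, water surface tension coefficient
RHO_W = 1000.0     # kg m^-3, water density
G = 9.81           # m s^-2, gravitational acceleration
EPS_0 = 8.854e-12  # F m^-1, permittivity of free space
EPS_L = 3.1        # relative permittivity of Parylene C (approximate)
D_H = 15e-6        # m, dielectric coating thickness
THETA_N = np.deg2rad(110.0)           # nominal contact angle (assumed)
K_INV = np.sqrt(GAMMA / (RHO_W * G))  # characteristic length k^-1

def contact_angle(v):
    """Equation (2); cos(theta) is clipped to [-1, 1], which crudely stands
    in for the contact angle saturation the bare model ignores."""
    c = np.cos(THETA_N) + EPS_0 * EPS_L * v**2 / (2.0 * GAMMA * D_H)
    return np.arccos(np.clip(c, -1.0, 1.0))

def max_upward_force(v, contact_length, pad_area):
    """Equation (1): surface tension plus surface-tension-induced buoyancy,
    with the critical meniscus depression h_w taken from equation (4)."""
    theta = contact_angle(v)
    h_w = 2.0 * K_INV * np.sin(theta / 2.0)
    return -GAMMA * contact_length * np.cos(theta) + RHO_W * G * pad_area * h_w

# Assumed EWP geometry (~1 cm scale), not the paper's exact dimensions.
L_EWP, A_EWP = 0.04, 1.0e-4  # m, m^2
for v in (0.0, 300.0, 600.0):
    print(f"{v:5.0f} V -> theta = {np.rad2deg(contact_angle(v)):5.1f} deg, "
          f"F_max = {1e3 * max_upward_force(v, L_EWP, A_EWP):6.2f} mN")
```

Consistent with the discussion above, the surface tension term in this sketch switches from supporting to pulling once θ drops below 90°, and the buoyancy term shrinks with it because h w is itself a function of θ.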
The height of the EWP’s vertical sidewalls can be determined by analyzing the water meniscus profile 31 . The meniscus height (Fig. 3b ) near the vertical sidewalls relates to the local contact angle: $$h_{\mathrm{v}} = \sqrt {2(1 - \sin \theta )} k^{ - 1},$$ (3) where \(k^{ - 1} = \sqrt {\gamma /\rho g}\) is defined as the characteristic length. This formula predicts the meniscus height to be ~3 mm. To ensure the local electric field is approximately perpendicular to the water meniscus, we set the EWP’s sidewalls to be 4 mm tall, which is slightly larger than the meniscus height. Following the reduction of surface tension, the buoyancy force also decreases due to the change of h w , which leads to spontaneous wetting on the EWP’s horizontal surface. Recall that h w is the maximum height difference between a static meniscus and a flat surface before the liquid spontaneously spreads on the surface (Fig. 3c ). The relationship between h w and the surface contact angle is given by: $$h_{\mathrm{w}} = 2k^{ - 1}\sin \frac{\theta }{2}.$$ (4) For the EWP design, h w reduces from 5 mm to 2 mm when the input voltage increases from 0 V to 600 V, which implies that the surface tension induced buoyancy force is reduced (second term of equation 1 ). Consequently, both the surface tension and buoyancy forces reduce and cause the robot to sink into water (Fig. 3d ). Supplementary Figure 1b and Supplementary Movie 4 illustrate the spontaneous wetting process on a flat copper sheet coated with 15 µm of Parylene. In summary, charging the EWP’s sidewalls reduces the surface tension force, and charging the EWP’s horizontal surface lowers the buoyancy force. We characterize the EWP performance by measuring the maximum surface tension force at different input voltages (see Methods for experimental setup). Figure 3e compares the experimental measurements with the predicted values from equations ( 1 ), ( 2 ), and ( 4 ). The model shows good agreement with experiments for input voltages smaller than 400 V. The model underestimates the net upward force for input voltages higher than 400 V because it does not consider the contact angle saturation phenomenon 25 , which is an experimental observation that no material can become completely hydrophilic regardless of the input voltage amplitude. In future studies, this discrepancy between model and measurement may be reduced by using alternative dielectric coatings that have smaller saturation angles. Our experiments show that the maximum upward force an EWP generates is 11.5 mN. This force reduces to 8.2 mN when a constant signal of 600 V is sent to the device. This measurement is an upper bound of the EWP’s performance as it is rigidly mounted on a force sensor. When installed on the microrobot, the EWP’s horizontal surface may not be parallel to the water surface due to fabrication imperfections, causing a reduction in contact area and maximum surface forces. As a result, we measure the maximum robot weight to be 3.09 g (65% of the maximum static measurement) before sinking. Locomotion on the water surface In addition to floating on the water surface, the robot is capable of locomotion including swimming forward and turning. Existing designs 14 , 15 , 16 , 17 , 18 , 19 , 20 based on water striders cannot be applied to our robot because stationary, long supporting legs inhibit walking on the ground. Instead, we require the robot to use the same set of actuators and legs to move on the water surface.
This requirement imposes two major challenges: first, symmetric walking gaits for terrestrial locomotion cannot generate net propulsive force due to the time reversibility property of low Reynolds number flow; second, the amplitude of the robot legs’ swing motion, and thus the induced drag force, is substantially smaller than that of biological examples such as diving beetles ( Dytiscus marginalis ) and diving flies ( Ephydra hians ). Diving beetles swim underwater by paddling their hind legs asymmetrically (Fig. 4a ) to generate unidirectional thrust 7 . During the power stroke, the hind leg tarsus and tibia flatten to maximize projected area and increase forward thrust. During the recovery stroke, the hind leg tarsus and tibia retract to minimize the projected area and reduce backward drag. Previous biomimetic studies analyzed the diving beetle’s paddling leg trajectories and showed that they can be modeled by two serial links connected to each other and the body by two actuated rotational joints 32 , 33 . Fig. 4 Aquatic flapping kinematics and dynamics. a Swimming behavior of a diving beetle. The power stroke and the recovery stroke are asymmetric (figure taken from 7 ). b Bioinspired robot swimming kinematics feature an asymmetric upstroke and downstroke without active control of the flap rotation. c The periodic control signal of the robot swing actuator is asymmetric. d Images of a single leg’s swinging motion and the passive flap rotation in water. The images are taken 0.1 period apart, corresponding to the time scale of c . Asymmetric leg swinging motion leads to favorable passive flap rotation that increases net thrust. Scale bar, 5 mm. e Comparison of experimentally measured and simulated flapping motion ψ and passive flap rotation α. f Simulated instantaneous thrust force as a function of time. The experiments and simulations shown in c – f use the same control signal. e , f show that the quasi-steady model qualitatively agrees with the experimental result, and it predicts that an asymmetric driving signal generates larger net thrust force. Taking inspiration from the diving beetle’s physiology and swimming mechanics, we develop passive swimming flaps 34 that generate asymmetric gaits for water surface locomotion. The robot leg and its flap constitute a two-link system: the leg motion is controlled and the flap rotation is passively mediated through an elastic joint. As shown in Fig. 1c , the flaps are fully submerged in water while the robot rests on the water surface. The flaps are designed to be passive devices that retract in a single direction. During the fast downstroke (Fig. 4b ), the flaps remain fully open to generate forward thrust. During the slow upstroke (Fig. 4b ), the flaps collapse to reduce drag. The flap rotation is passively mediated by forces from an elastic flexure, drag from the surrounding fluid, and the flap inertia. Consequently, developing an appropriate driving motion for the robot leg is crucial for achieving the desired flapping kinematics 35 . To design the swimming kinematics and determine the flap area, inertia, and flexure stiffness, we conduct at-scale flapping experiments and construct quasi-steady, dynamical simulations (see Supplementary Notes 1 – 3 and Supplementary Figure 2 ). The kinematic parameters of stroke angle ( ψ ) and pitch angle ( α ) are defined in Supplementary Note 1 , and they are labeled in Fig. 4d and Supplementary Figure 2c .
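The passive flap behavior described above can be illustrated with a minimal quasi-steady simulation. The sketch below is not the authors' model (their identified parameters and full treatment are in Supplementary Notes 1 – 3 ); it uses arbitrary placeholder values for the flap inertia, hinge stiffness, and drag coefficient, and it models the flap/strut collision simply by clipping the flap angle, so it reproduces only the qualitative behavior: the flap is held open during the fast downstroke and folds during the slow upstroke. The 10%/50%/40% timing split of the drive signal is described in the next paragraph.

```python
import numpy as np

# Placeholder parameters, chosen only to show qualitative behavior.
RHO = 1000.0                  # kg m^-3, water density
CD = 2.0                      # assumed quasi-steady drag coefficient
A_F = 3e-5                    # m^2, flap area (assumed)
R_CP = 4e-3                   # m, hinge-to-center-of-pressure arm (assumed)
I_F = 1e-10                   # kg m^2, flap inertia about hinge (assumed)
K_H = 1e-7                    # N m rad^-1, flexure hinge stiffness (assumed)
PSI_AMP = np.deg2rad(15.0)    # stroke amplitude (assumed)
T = 0.2                       # s, flapping period (5 Hz)

def psi(t):
    """Asymmetric stroke: fast downstroke (10% of T), slow upstroke (50%),
    then a hold (40%) while the flap relaxes back."""
    u = (t % T) / T
    if u < 0.1:                                    # fast downstroke
        return PSI_AMP * np.cos(np.pi * (1.0 - u / 0.1))
    if u < 0.6:                                    # slow upstroke
        return PSI_AMP * np.cos(np.pi * (u - 0.1) / 0.5)
    return -PSI_AMP                                # hold

dt = 1e-5
ts = np.arange(0.0, 3 * T, dt)
psis = np.array([psi(t) for t in ts])
dpsis = np.gradient(psis, dt)

alpha, dalpha = 0.0, 0.0      # passive flap angle (alpha > 0 means folded)
alphas = np.empty_like(ts)
for i in range(ts.size):
    v_n = R_CP * (dpsis[i] + dalpha)               # normal flow speed at flap
    tau_drag = -0.5 * RHO * CD * A_F * R_CP * v_n * abs(v_n)
    dalpha += dt * (-K_H * alpha + tau_drag) / I_F  # hinge spring + drag
    alpha += dt * dalpha
    if alpha < 0.0:            # crude flap/strut collision: the flap cannot
        alpha, dalpha = 0.0, 0.0  # open past its nominal orientation
    alphas[i] = alpha

print(f"peak passive flap rotation ~ {np.rad2deg(alphas.max()):.0f} deg")
```

Matching the measured trajectories in Fig. 4e would require the flap area, inertia, and flexure stiffness identified by the authors; this sketch only shows that an asymmetric stroke plus a passive elastic hinge suffices to produce asymmetric flap rotation.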
During swimming, the robot’s lift actuators are switched off and a piecewise sinusoidal driving signal is sent to the swing actuator (Fig. 4c ). As shown in Fig. 4c , the fast downstroke occupies 10% of the flapping period, the slow upstroke takes 50% of the flapping period, and the actuator remains stationary for the remaining 40% of the time to allow the flap to return to its nominal orientation. Figure 4d shows a single leg flapping experiment using this driving signal at 5 Hz. The flap remains flattened during the fast downstroke ( T = 0 to T = 0.1). At stroke reversal ( T = 0.1), the flap begins to collapse while the leg slows down and reverses direction. This collapsing behavior is mainly due to the flap inertia and the force from the surrounding fluid. During the slow upstroke ( T = 0.1 to T = 0.6), the flap remains collapsed to reduce drag. During the recovery phase ( T = 0.6 to T = 1), the actuator is held stationary and the flap slowly rotates back to its nominal position due to the restoring torque from the flexure. Figure 4e shows the tracked stroke angle ( ψ ) and the passive flap angle ( α ), and superimposes the simulated stroke and flap motion based on the input signal from Fig. 4c . We observe qualitative agreement between the quasi-steady simulation and the experimental measurement. The errors in the maximum predicted flap angle and phase offset are 4° and 6% of a period, respectively. As detailed in the supplemental material, these errors arise primarily from ignoring added mass effects and the collision between the flap and the central strut during the downstroke. We further estimate the drag force profile using the quasi-steady model. As shown in Fig. 4f , the thrust force is mainly generated during the downstroke, whereas the drag force during the upstroke and stroke recovery is small. Here the model estimates the time averaged force to be 0.13 mN for a single leg actuated at 5 Hz. Driving all four robot legs with the same signal from Fig. 4c , we demonstrate robot forward swimming on the water surface. For the experiment shown in Fig. 5a , the robot swims 32 cm in 12 seconds, with an average speed of 2.7 cm s −1 (0.7 body lengths (BL) per second). Figure 5b shows the instantaneous robot swimming speed extracted from a high-speed video of another swimming experiment (Supplementary Movie 3 ). The maximum and mean swimming speeds are 8.1 cm s −1 (2.1 BL s −1 ) and 2.8 cm s −1 (0.7 BL s −1 ), respectively. In addition to swimming forward, the robot can demonstrate left or right turns by turning off the actuators on the left or right side, respectively. Figure 5c, d , and Supplementary Movie 3 show that the robot can make a complete left or right turn in 13 and 11 s, respectively. Fig. 5 Robot swimming and turning on the water surface. a The robot moves on the water surface at 2.8 cm s −1 with a 5 Hz swimming gait. b The robot’s instantaneous swimming speed tracked using a high-speed video (Supplementary Movie 3 ). c The robot makes a complete left turn on the water surface in 13 seconds. d The robot makes a complete right turn on the water surface in 11 seconds. Scale bars ( a , c , d ), 2 cm. These demonstrations show that the robot can controllably move on the surface of water. We further compare robot locomotive efficiency in different environments by calculating the cost of transport: $$c = \frac{{P_{avg}}}{{mgv_{avg}}},$$ (5) where P avg and v avg are the net electrical power consumed by the robot and the average speed, respectively.
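As a hypothetical illustration of this bookkeeping (the voltage/current traces and numbers below are made-up stand-ins, not the paper's measurements), the cost of transport follows directly from the average electrical power, which is computed from per-actuator voltage and current time series as formalized in equation ( 6 ) immediately below:

```python
import numpy as np

def avg_power(v_traces, i_traces, dt):
    """Time-averaged electrical power summed over all actuators, given
    per-actuator voltage/current time series sampled every dt seconds."""
    duration = dt * (len(v_traces[0]) - 1)
    return sum(np.trapz(v * i, dx=dt)
               for v, i in zip(v_traces, i_traces)) / duration

def cost_of_transport(p_avg, mass, v_avg, g=9.81):
    """Dimensionless cost of transport, equation (5)."""
    return p_avg / (mass * g * v_avg)

# Synthetic 5 Hz drive signals for eight actuators (illustrative only).
t = np.arange(0.0, 2.0, 1e-4)
v_traces = [200 + 100 * np.sin(2 * np.pi * 5 * t) for _ in range(8)]
i_traces = [1e-5 * np.sin(2 * np.pi * 5 * t) for _ in range(8)]

p = avg_power(v_traces, i_traces, 1e-4)
print(f"P_avg = {1e3 * p:.2f} mW, "
      f"c = {cost_of_transport(p, 1.65e-3, 2.8e-2):.1f}")
```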
The input power is calculated by measuring the voltage and current consumed by each actuator and then summing over all eight actuators for 20 periods: $$P_{avg} = \frac{1}{T}\mathop {\sum }\limits_{j = 1}^8 \mathop {\int }\nolimits_0^T v_j\left( t \right)i_j\left( t \right)dt.$$ (6) Supplementary Table 2 lists the robot cost of transport for locomotion on land, underwater, on the water surface, and on an incline. The cost of transport for moving on the water surface is 18% higher than that of walking on land. Underwater to land transition Our microrobot is capable of walking and avoiding obstacles underwater (Supplementary Movie 5 ) and climbing up an incline to transition back onto land (Supplementary Movie 6 ). When it is fully submerged, the robot uses terrestrial walking gaits to demonstrate turning and walking. We estimate that the buoyancy force accounts for 40% of the robot weight, which enhances locomotion and improves payload capacity underwater. In the aquatic-to-terrestrial transition process, there are two major challenges: overcoming the surface tension and climbing an incline. As the robot moves through the air-water interface, the water surface exerts an inhibiting, downward force whose magnitude is approximately equal to the robot’s weight. Although the robot can climb up an 11° incline underwater on surfaces covered by polydimethylsiloxane (PDMS), it is unable to break the water surface while climbing up a 6° PDMS surface (Fig. 6a , and Supplementary Movie 7 ). Figure 6b shows the tracked robot front and hind leg trajectories: when completely submerged, the robot’s front legs lift higher than its hind legs (red colored regions in Fig. 6b ) due to the rearward location of its center of mass. Surface tension forces push down on the front of the robot as it moves out of water, inducing a clockwise torque with respect to the robot center of mass and causing the hind legs to lift higher than the front legs (blue colored regions in Fig. 6b ). As the robot continues to climb upward, the body torque induced by surface tension becomes counterclockwise, causing the front legs to lift higher again (red colored regions in Fig. 6b ). In this process, the downward surface tension force gradually increases and ultimately causes the robot hind legs to stick to the incline surface, preventing forward locomotion. Fig. 6 Robot water to land transition. a The robot is stuck at the air-water interface as it climbs an incline from underwater. The surface tension force exerts a counter-clockwise torque on the robot body, preventing the robot hind legs from lifting off. b The trajectories of the robot’s front right and hind right legs. The ramp incline is subtracted from the trajectories to show leg lift at each step. When the robot is stuck, its hind legs cannot lift off the incline surface. c The net force on a robot as it is pulled out of water vertically. The net surface tension force exceeds the robot weight. d Top and side view images of a robot getting stuck at the air-water interface. The robot hind legs splay outward due to the larger surface tension force on the rear of the robot. e After stiffening the robot transmission in the lift DOF, the robot moves through the air–water interface on a 3° incline. f Robot leg trajectories during the water to land transition. The ramp incline is subtracted from the trajectories. During the transition process, the lift motion of both the robot’s front and hind legs is reduced due to the inhibiting surface tension force.
The robot leg motion recovers after the robot exits the surface of water. Scale bars ( a , d , e ), 1 cm. To quantify the magnitude of the water surface tension force during the transition process, we measure the net force on a robot as it is slowly pulled out of water (Supplementary Fig. 3 ). As shown in Fig. 6c , the surface tension forces on the robot’s circuit board and EWPs are approximately 16 mN and 9.3 mN, respectively. Although these forces are comparable to the robot’s weight, previous work has demonstrated that HAMR can carry 2.9 g of additional payload 36 on flat surfaces. These force measurements are conducted while mounting the robot parallel to the water surface, and consequently they do not account for the influence of the torque induced by the surface tension force during the transition process. To quantify the influence of the surface tension torque, we pull the robot through the water surface on a 3° incline and quantify the deformation of the robot legs’ transmission. As shown in the top view of Fig. 6d , the robot hind legs are splayed out further than the robot front legs, indicating that there is a larger force pushing down on the back of the robot. In an unloaded configuration (Fig. 1a ), the robot’s legs are approximately perpendicular to the ground. In the configuration pictured in Fig. 6d , the robot hind legs splay outward by 19° compared to the nominal leg orientation. This causes the back of the robot body to sag down and its front to tilt up. The side view in Fig. 6d shows that the robot body pitch increases to 14° ( θ B ) on a 3° ( θ I ) incline. This unfavorable body pitching θ B exacerbates the adverse effects of climbing an incline, causing the robot’s front legs to lift higher and preventing the robot’s rear legs from lifting off the ramp surface. We make two major modifications to mitigate the adverse effects caused by surface tension during underwater-to-ground transitions. First, the legs and swimming flaps are redesigned and fabricated monolithically using a 150 µm carbon fiber laminate, substantially reducing the deformation of the entire structure under load. In addition, we reduce the compliance of the legs’ lift DOF by manually biasing the leg downward during the assembly process, as detailed in a previous study 37 . This preloads the flexures to create a force bias that opposes gravity, effectively altering the transmission ratio to increase stiffness and reduce sagging. Second, we attach PDMS-coated foot pads to the robot’s front legs to increase friction. The experiment illustrated in Fig. 6a, b shows that the robot’s front and hind legs serve different functions during the transition process. The surface tension induced torque inhibits hind leg liftoff and increases friction on the hind legs. In contrast, the front legs experience lower normal and frictional forces, which results in slipping. Attaching PDMS foot pads to the robot front legs increases friction on the front legs and reduces slipping. Figure 6e shows a composite image of the robot climbing out of water on a 3° acrylic incline at an average speed of 0.75 cm s −1 . Compared to the side view in Fig. 6d , this image shows that the modified robot does not pitch up noticeably during the transition process. Figure 6f further illustrates the corresponding front and hind leg trajectories. The lift motions of both front and back legs reduce as the robot feet emerge from the water surface.
Once the robot front legs completely exit the water surface, the front legs’ lift amplitudes recover. Due to the surface tension force and torque, the cost of transport for transitioning from underwater to land is approximately four times larger than that of walking on level ground (Supplementary Table 2 ). Discussion Our presentation of a hybrid terrestrial-aquatic microrobot includes a novel mesoscale device design that uses electrowetting to control the surface tension magnitude and achieve controllable and repeatable water surface to underwater transitions, a multimodal strategy for locomotion on terrestrial domains and the water surface, and a detailed analysis of the challenges imposed by surface tension during a microrobot’s transition from underwater to land. Our design satisfies the various constraints imposed by microrobot actuation, payload, and the different environments. Although many of these constraints lead to conflicting design requirements, they can be reconciled by leveraging physics, such as electrowetting or surface tension, unique to the millimeter scale. For instance, we design foot pads that rely on surface tension and surface tension induced buoyancy to reduce foot size. Furthermore, by leveraging electrowetting principles, we design four 25 mg EWPs that can both statically support 1.9 times the robot weight and sink the robot when actuated. Finally, whereas legged terrestrial locomotion involves discontinuous contact dynamics and friction, aquatic flapping locomotion in the low Reynolds number regime is continuous and requires asymmetric strokes. Using a combination of passive flaps and asymmetric driving, we develop a swimming strategy that has a cost of transport similar to that of the robot’s terrestrial locomotion. Compared to hybrid terrestrial-aquatic insects, microrobots have shortcomings in actuation and power density but also possess advantages in using engineered materials and electrostatic devices. To dive into water, a diving fly 4 can exert a downward force larger than 18 times its weight to overcome surface tension. Piezoelectric actuators cannot deliver such high force; however, a microrobot can utilize electrowetting to modify wettability, something that has not been observed among insects. In the case of swimming, the shortcomings in actuation can be compensated by exploiting fluid-structure interactions in passive mechanisms. Whereas a diving beetle 7 can paddle with asymmetric power and recovery strokes by independently controlling the motion of its tibia and tarsus 8 , a microrobot can generate a similar asymmetric paddling motion by merely controlling its leg (analogous to the tarsus) swing. The flap (analogous to the tibia) rotation is passively controlled through the coupling between the fluid flow, the flap inertia, and the elastic hinge to achieve efficient locomotion on the surface of water. Further, this work demonstrates a microrobot performing tasks that are difficult for larger robots. To the best of our knowledge, no existing robot can walk on land, swim on the water surface, and transition between these environments. By leveraging surface effects that dominate at the millimeter scale, this work shows a microrobot can outperform larger ones in specific applications. In search and rescue missions, our robot has the potential to move through cluttered environments that are not accessible to larger terrestrial or aquatic robots. Future studies can explore several topics to further improve microrobot locomotive capabilities in complex environments.
Due to the lack of control during the sinking process, the current robot may flip over in the presence of disturbances such as surface waves and dynamic flow underwater. Further, the robot cannot return to land without a modest ramp. These limitations can be addressed by enabling underwater swimming and improving climbing capabilities in hybrid terrestrial-aquatic microrobots. To demonstrate swimming, future research could involve designing meso-scale devices for buoyancy control 24 , developing leg structures and associated gaits to generate lift force in addition to thrust, and conducting dynamical analyses to investigate underwater stability. To climb steeper inclines or to return to land without a ramp, future research can incorporate electrostatic adhesion 13 , gecko-inspired adhesives 38 , or impulsive jumping mechanisms 24 . Methods EWP fabrication The robot foot pad is fabricated from a 5 µm copper sheet. First, the copper sheet is cut into the desired pattern (Fig. 3a ) using a diode-pumped solid-state (DPSS) laser. Next, the vertical walls around the foot pad base are manually folded by 90° under a microscope (Fig. 3a ). Then we solder a 51-gauge quadruple-insulated wire to the foot pad and coat the device with Parylene C. The coating process takes ~12 h to deposit a uniform 15 µm layer of Parylene C. Finally, the foot pad is attached to a 70 µm thick, circular fiberglass (FR4) piece (Fig. 3a ). The fiberglass connection piece prevents the foot pad from shorting to the robot chassis. Experimental setup for measuring surface tension on the EWP An EWP is mounted on a capacitive force sensor and slowly pushed into water at ~0.2 mm s −1 (Supplementary Fig. 1c ). The net instantaneous force is measured by the force sensor. The red arrows in Supplementary Figure 1d, e indicate the difference between the minimum force and the net force once the EWP is completely submerged. This value represents the maximum upward force an EWP can generate. As shown in Supplementary Figure 1d, e, the net upward force reduces by 30% when a 600 V signal is sent to the EWP. Experimental setup for robot locomotion demonstration We built a 45 cm × 45 cm × 8 cm aquarium to conduct robot locomotion experiments in terrestrial and aquatic environments (Supplementary Fig. 4a ). The aquarium is filled with deionized water at a depth of 4 cm. A 5 cm tall, 6° ramp is placed in the aquarium for the robot to walk from land to the water surface. A ~3 cm tall underwater obstacle is placed in the robot’s swimming path. A 4 cm tall, 3° ramp is placed in the aquarium for the robot to climb out of water. Two cameras are placed above and on the side of the aquarium to take top and side view videos. The water in the aquarium is connected to electrical ground. The end-to-end robot locomotion experiments are conducted four times to demonstrate robustness and repeatability, and these trajectories are overlaid in Supplementary Figure 4b . Data availability The data and code that support the findings of this study are available from the corresponding author Y.C. upon reasonable request.
In nature, cockroaches can survive underwater for up to 30 minutes. Now, a robotic cockroach can do even better. Harvard's Ambulatory Microrobot, known as HAMR, can walk on land, swim on the surface of water, and walk underwater for as long as necessary, opening up new environments for this little bot to explore. This next generation HAMR uses multifunctional foot pads that rely on surface tension and surface tension induced buoyancy when HAMR needs to swim but can also apply a voltage to break the water surface when HAMR needs to sink. This process is called electrowetting, which is the reduction of the contact angle between a material and the water surface under an applied voltage. This change of contact angle makes it easier for objects to break the water surface. Moving on the surface of water allows a microrobot to evade submerged obstacles and reduces drag. Using four pairs of asymmetric flaps and custom designed swimming gaits, HAMR robo-paddles on the water surface to swim. Exploiting the unsteady interaction between the robot's passive flaps and the surrounding water, the robot generates swimming gaits similar to those of a diving beetle. This allows the robot to effectively swim forward and turn. "This research demonstrates that microrobotics can leverage small-scale physics—in this case surface tension—to perform functions and capabilities that are challenging for larger robots," said Kevin Chen, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and first author of the paper. HAMR's multifunctional foot pads support the robot on the water surface and, under an applied voltage, break the surface to let it sink. Credit: Yufeng Chen, Neel Doshi, and Benjamin Goldberg/Harvard University The most recent research is published in the journal Nature Communications. "HAMR's size is key to its performance," said Neel Doshi, graduate student at SEAS and co-author of the paper. "If it were much bigger, it would be challenging to support the robot with surface tension and if it were much smaller, the robot might not be able to generate enough force to break it." HAMR weighs 1.65 grams (about as much as a large paper clip), can carry 1.44 grams of additional payload without sinking, and can paddle its legs with a frequency of up to 10 Hz. It's coated in Parylene to keep it from shorting under water. Once below the surface of the water, HAMR uses the same gait to walk as it does on dry land and is just as mobile. To return to dry land, HAMR faces an enormous challenge from the water's hold: a surface tension force that is twice the robot's weight pushes down on the robot, and the induced torque dramatically increases the friction on the robot's hind legs. The researchers stiffened the robot's transmission and installed soft pads on the robot's front legs to increase payload capacity and redistribute friction during climbing. Finally, by walking up a modest incline, the robot is able to break out of the water's hold. "This robot nicely illustrates some of the challenges and opportunities with small-scale robots," said senior author Robert Wood, Charles River Professor of Engineering and Applied Sciences at SEAS and core faculty member of the Harvard Wyss Institute for Biologically Inspired Engineering.
"Shrinking brings opportunities for increased mobility—such as walking on the surface of water—but also challenges since the forces that we take for granted at larger scales can start to dominate at the size of an insect." Next, the researchers hope to further improve HAMR's locomotion and find a way to return to land without a ramp, perhaps incorporating gecko-inspired adhesives or impulsive jumping mechanisms.
10.1038/s41467-018-04855-9
Medicine
Early CTE disease process found to be mechanistically different than what occurs in late stages
Adam Labadorf et al, Inflammation and neuronal gene expression changes differ in early versus late chronic traumatic encephalopathy brain, BMC Medical Genomics (2023). DOI: 10.1186/s12920-023-01471-5 Journal information: BMC Medical Genomics
https://dx.doi.org/10.1186/s12920-023-01471-5
https://medicalxpress.com/news/2023-03-early-cte-disease-mechanistically-late.html
Abstract Background Our understanding of the molecular underpinnings of chronic traumatic encephalopathy (CTE) and its associated pathology in post-mortem brain is incomplete. Factors including years of play and genetic risk variants influence the extent of tau pathology associated with disease expression, but how these factors affect gene expression, and whether those effects are consistent across the development of disease, is unknown. Methods To address these questions, we conducted an analysis of the largest post-mortem brain CTE mRNASeq whole-transcriptome dataset available to date. We examined the genes and biological processes associated with disease by comparing individuals with CTE with control individuals who have a history of repetitive head impacts but lack CTE pathology. We then identified genes and biological processes associated with total years of play as a measure of exposure, amount of tau pathology present at time of death, and the presence of APOE and TMEM106B risk variants. Samples were stratified into low and high pathology groups based on McKee CTE staging criteria to model early versus late changes in response to exposure, and the relative effects associated with these factors were compared between these groups. Results Substantial gene expression changes were associated with severe disease for most of these factors, primarily implicating diverse neuroinflammatory and neuroimmune processes. In contrast, low pathology groups had many fewer genes and processes implicated and showed striking differences for some factors when compared with severe disease. Specifically, gene expression associated with amount of tau pathology showed a nearly perfect inverse relationship when compared between these two groups. Conclusions Together, these results suggest the early CTE disease process may be mechanistically different than what occurs in late stages, that total years of play and tau pathology influence disease expression differently, and that related pathology-modifying risk variants may do so via distinct biological pathways. Background Chronic traumatic encephalopathy (CTE) is a progressive neurodegenerative disease that is caused at least in part by exposure to repeated traumatic head impacts and is most commonly observed in the brains of contact sport athletes and combat veterans. CTE pathology is associated with many clinical symptoms, including behavioral, personality and mood changes as well as memory and cognitive function deficits. Upon autopsy, CTE is diagnosed by the presence of neurofibrillary tangles found around blood vessels and in the depths of cortical sulci. The primary risk factor for developing CTE is the duration of exposure to repetitive head impacts (RHI), which can be measured as total years of play for contact sport athletes [ 1 ]. The neurological symptoms of CTE often only manifest decades after players have retired, suggesting the effects of RHI set in motion a progression of pathological events ultimately leading to disease. However, not all individuals with a similar degree of exposure will go on to develop CTE, and while some genetic evidence suggests specific genes are associated with increased risk of disease, the mechanisms underlying the disease process are currently unknown. It is also currently unknown whether early disease states differ from those that are coincident with severe pathology.
Our previous work showed that two genetic risk variants in the Apolipoprotein E (APOE) and Transmembrane Protein 106B (TMEM106B) genes also influence the frequency and severity of CTE. The association of the APOE e4 allele with increased Alzheimer’s Disease (AD) risk is well documented [ 2 ], and APOE e4 has also been shown to be associated with increased severity of CTE pathology [ 3 ]. Variants in TMEM106B are associated with increased neuroinflammation in aging [ 4 ], frontotemporal lobar degeneration (FTLD)-TDP [ 5 ], and AD [ 6 , 7 ], and our prior study suggested a protective role of the TMEM106B variant rs3173615 in CTE [ 8 ]. While prior research has provided some information about the functional roles of APOE [ 9 ] and, to a lesser extent, TMEM106B [ 10 , 11 , 12 , 13 ] in both normal and disease contexts, the role they play in the development of CTE is poorly understood. To address these knowledge gaps, we profiled whole transcriptome gene expression of prefrontal cortex (Brodmann Area 9) by mRNASeq in a cohort of 66 CTE cases and 10 RHI controls. The CTE samples were subdivided into 13 low (CTE-L, Stages I & II) and 53 high (CTE-H, Stages III & IV) pathology groups using the McKee staging criteria [ 14 , 15 ], with the goal of identifying molecular changes associated with early versus late disease. The RHI controls are individuals who experienced a similar level of repetitive head impact exposure as the disease groups but displayed no CTE pathology upon autopsy. In this study, we sought to identify genes and biological processes associated with differences between CTE and RHI controls, repetitive head impact exposure as measured by total years of play, amount of tau pathology using immunohistological quantification of phospho-tau (AT8), and differences between APOE and TMEM106B risk variant carrier groups. Methods Sample characteristics Autopsy participants with a history of RHI exposure were drawn from the Understanding Neurologic Injury and Traumatic Encephalopathy (UNITE) study. Inclusion criteria for UNITE include a history of contact sports participation, military service, or domestic violence [ 16 ]. Participants were excluded if they lacked fresh frozen prefrontal cortex, died from drug overdose, hanging, or a gunshot wound to the head, or had motor neuron disease or other significant neurodegenerative disease in the absence of CTE. For this study, contact sports included American football (n=67), boxing (n=2), ice hockey (n=2), and professional wrestling (n=1). Years of consecutive contact sport participation was used as a proxy for RHI. Two participants without CTE had RHI from non-contact sports exposures (e.g. multiple traumatic brain injuries during military service) and were not included in the years of play analyses. Sample statistics are included in Table 5 . This study has been approved by the VA Bedford Healthcare System and Boston University School of Medicine Institutional Review Boards. CTE-L and CTE-H samples were categorized based on the McKee staging criteria [ 14 , 15 ], Stages I & II and Stages III & IV, respectively. Briefly, CTE stages I and II are characterized by patchy and focal neurofibrillary tangles and tau-positive processes present around blood vessels and within the sulcal depths, primarily within the frontal lobes but also involving the temporal and parietal lobes in stage II.
CTE stages III and IV are characterized by more widespread tau pathology that extends to the gyral crest in multiple cortical lobes and involves the medial temporal lobe and subcortical structures. This stratification was used to model early versus late disease stages by noting that CTE-L subjects were younger at time of death on average than CTE-H, and the model assumes that CTE-L individuals would have progressed to CTE-H had they lived longer. RHI subjects were selected as athletes with a similar age at death as the CTE-L subjects and a similar level of repetitive head impact exposure as the CTE subjects but exhibiting no CTE pathology (McKee Stage 0) upon autopsy. Total years of play was collected from family members at the time of brain donation and cross-checked with available public records for American football players [ 16 ]. AT8 histology measurements were collected using digital pathology analyses via an Aperio slide scanner as previously described [ 17 ]. The raw AT8 measures were observed to be log-normal by inspection and therefore were log transformed for all downstream analyses. For this study, subjects with no APOE4 allele were classified as APOE 0 , and those with at least one APOE4 allele were classified as APOE 1 . Additionally, subjects with at least one G allele at TMEM106B rs3173615 were classified as TMEM 0 and those homozygous for C were classified as TMEM 1 . Sample processing, RNA extraction, and mRNA sequencing Grey matter tissue was dissected from the cortical ribbon of the Brodmann Area 9 gyral crest. Total RNA was extracted using the Promega Maxwell RSC simplyRNA Tissue Kit (Cat No. AS1340) according to the manufacturer’s protocol. The integrity and quality of RNA were verified by an Agilent 2100 Bioanalyzer using RNA 6000 Nano Chips (Cat No. 5067-1511). Only cases with an RNA Integrity Number (RIN) of 4 or higher were selected for study. Paired end poly-A selected mRNA sequencing libraries with a targeted library size of 80M reads were generated using a Kapa Stranded mRNA-Seq Kit according to the manufacturer’s instructions and sequenced on an Illumina MiSeq instrument by the Boston University Microarray and Sequencing Core. Samples were sequenced in four batches due to study size and were distributed among batches so as to avoid any confounding of status, age at death, RIN, or other important experimental variables. mRNA-Seq analysis A custom analytical pipeline was developed to analyze the sequencing data. Sequencing reads were adapter- and quality-trimmed using Trimmomatic [ 18 ] and assessed for quality using FastQC [ 19 ] and MultiQC [ 20 ]. Trimmed reads were aligned against the human reference genome build GRCh38 with Gencode v34 [ 21 ] gene annotation using STAR [ 22 ]. Aligned reads were quantified to the gene level using the HTSeq package and the Gencode v34 gene annotation. For each analysis, genes with abundance estimates of 0 in at least 50% of samples within each group were filtered out. Differential expression (DE) analyses were conducted separately for case status, total years of play, and AT8 with the DESeq2 [ 23 ] package. The five DE analyses are listed and summarized in Table 1 . Group comparisons of CTE-L and CTE-H were performed separately with all RHI samples. For total years of play, AT8, APOE , and TMEM106B , CTE-L and RHI samples were grouped together to increase sample size, as modeling continuous variables on the groups separately had insufficient statistical power to detect meaningful associations.
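As a rough sketch of how one such model can be specified, the snippet below sets up a single case-versus-RHI comparison adjusted for age at death, RIN, and sequencing batch (the covariates described in the next paragraph). Note that this is an illustration only: the authors used the R DESeq2 package, whereas this sketch assumes the pydeseq2 Python implementation of DESeq2, and the file names, column names, and exact API calls are assumptions.

```python
import pandas as pd
from pydeseq2.dds import DeseqDataSet
from pydeseq2.ds import DeseqStats

# Hypothetical inputs: a genes-by-samples count matrix (pydeseq2 expects
# samples-by-genes, hence the transpose) and sample metadata with columns
# status (CTE/RHI), age, RIN, and batch. File names are placeholders.
counts = pd.read_csv("gene_counts.csv", index_col=0).T
meta = pd.read_csv("sample_metadata.csv", index_col=0)

# Keep genes detected (count > 0) in more than 50% of the samples of every
# group, mirroring the filtering criterion described above.
detected = (counts > 0).groupby(meta["status"]).mean()
counts = counts.loc[:, detected.min(axis=0) > 0.5]

# Negative binomial GLM with covariates, then the CTE vs RHI contrast
# (interface as in pydeseq2 ~0.4; an assumption, see the note above).
dds = DeseqDataSet(
    counts=counts,
    metadata=meta,
    design_factors=["age", "RIN", "batch", "status"],
    continuous_factors=["age", "RIN"],
)
dds.deseq2()
stats = DeseqStats(dds, contrast=["status", "CTE", "RHI"])
stats.summary()
de_genes = stats.results_df.query("padj < 0.1")  # the FDR threshold used here
```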
For these continuous and genotype variables (total years of play, AT8, APOE , and TMEM106B ), CTE-H samples were analyzed as a separate group. Three samples (RHIN_0052, CTES_0064, CTES_0079) were filtered out of the years of play analysis due to extreme values (> 30 years) that drove spurious DE results. One sample, CTES_0068, was filtered out of the APOE and TMEM106B analyses due to a missing TMEM106B value. All analyses included age at death, RIN, and sequencing batch in the model as covariates in addition to the variable of interest. Gene associations were considered significant if they attained a false discovery rate (FDR) of less than 0.1. Table 1 DE models and result statistics. Differential expression results from each analysis were subjected to preranked gene set enrichment against the Gene Ontology annotation [ 24 ] curated in the c5 MSigDB gene set database [ 25 ] using the fgsea [ 26 ] package. This gene set enrichment strategy uses ranked fold change for all genes irrespective of significance and compares this ranked list against the genes annotated to each gene set. The intuition of this analysis is that if the genes related to a biological pathway or process are collectively increased or decreased when comparing disease versus control, this suggests that biological pathway or process is involved in disease. The analysis produces a statistic (Normalized Enrichment Score, NES) and an associated p -value for each gene set; the NES is positive or negative depending on whether the genes within that gene set are ranked high or low, respectively. Since gene sets may have genes that either enhance or inhibit the overall activity of a pathway or process, we cannot interpret a positive or negative NES to mean that the function of the pathway or process overall is increased or decreased, just that the genes within the process are coordinately differentially expressed. GO categories were considered significant if they attained an FDR of less than 0.1. To aid in interpretation of GO terms, each enriched term was manually categorized by the investigators into one of a set of high level biological categories a priori. Categories include blood brain barrier (BBB), extracellular matrix (ECM)/membrane, cell cycle, cytoskeleton, development, immune/inflammation, ion homeostasis, metabolism/mitochondria, neuron, protein processing, signaling, transcription/translation, and an "other" category for terms that were not easily categorized. The immune/inflammation and neuron categories were further subcategorized due to the large number of enriched terms and to aid in further interpretation. All custom analysis was performed with Python, R, Snakemake [ 27 ], and Jupyter Lab [ 28 ] software. qPCR validation Delta CT expression values for an a priori set of 33 genes known to be implicated in CTE (full gene list in Additional file 2 : Table S2) were analyzed for differential expression in a set of 54 BA9 brain samples (37 CTE-H, 9 CTE-L, and 8 RHI) that overlapped with the samples presented in this study. Linear regression models were constructed for each gene, modeling qPCR expression values as a function of each variable of interest separately while adjusting for RIN and age at death, consistent with the DE models conducted with the mRNASeq data. Genes were marked as concordant between the qPCR and DE analyses if the direction of effect agreed (i.e. both either positive or negative log2 fold change), irrespective of significance in either dataset.
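The concordance call just described, together with the confusion matrix tabulation and the Fisher and Spearman summaries described in the next paragraph, reduces to a few lines with scipy. The fold change values below are synthetic stand-ins, not the study's data:

```python
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact, spearmanr

# Synthetic stand-ins for the 33 genes' log2 fold changes from the mRNASeq
# DE models and from the qPCR linear models (placeholders, not real data).
rng = np.random.default_rng(0)
shared = rng.normal(size=33)
rnaseq_lfc = pd.Series(shared + 0.3 * rng.normal(size=33))
qpcr_lfc = pd.Series(shared + 0.3 * rng.normal(size=33))

# Concordance: directions of effect agree, regardless of significance.
concordant = np.sign(rnaseq_lfc) == np.sign(qpcr_lfc)
print(f"{concordant.sum()}/33 genes concordant")

# Confusion matrix of directions and a right-tailed Fisher's exact test,
# asking whether this much agreement could arise by chance.
table = pd.crosstab(rnaseq_lfc > 0, qpcr_lfc > 0)
odds_ratio, p_right = fisher_exact(table.values, alternative="greater")

# Spearman correlation of the fold change estimates themselves, as an
# overall measure of agreement in ranked effect size.
rho, p_rho = spearmanr(rnaseq_lfc, qpcr_lfc)
print(f"Fisher p = {p_right:.3g}, Spearman rho = {rho:.2f}")
```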
Concordant/discordant genes were tabulated into confusion matrices for each analysis, and Fisher’s exact test was applied to each to assess the likelihood that the observed degree of concordance could occur by chance, using a right-tailed p -value. Separately, Spearman correlation was computed for qPCR versus mRNASeq DE log2 fold change values, to assess overall agreement in ranked effect size. Data and code availability Raw and processed read data have been deposited into GEO under accession GSE157330. All results, analysis, and figure code for this project are available on an Open Science Framework project located at . Results Distributions of the key sample characteristics are depicted in Figure 1 . Fig. 1 Sample characteristics for the variables examined in this study. A – C Distribution of total years of play, age at death, and log AT8 histochemical quantification, respectively, for RHI, CTE-L, and CTE-H sample groups. D , E Distribution of age at death for each sample group broken out into risk allele groups for APOE and TMEM106B, respectively. F , G Distribution of log AT8 for each sample group as in D and E . Five separate differential expression (DE) and subsequent gene set enrichment analyses (GSEA) using Gene Ontology (GO) annotations were conducted in this study. Each analysis sought to identify genes whose expression was associated with the different comparisons of interest in the same set of samples as described in Table 1 . Specifically, separate DE models were conducted corresponding to case versus RHI control, number of years of play as a continuous variable, amount of tau pathology as measured by AT8 histochemistry, possession of one or more APOE e4 risk alleles, and possession of the TMEM106B risk allele (i.e., a recessive model: homozygous C for rs3173615 ). Samples were stratified into low (CTE-L, Stages I & II) and high (CTE-H, Stages III & IV) pathology groups and analyzed for each model separately. Table 1 contains sample count information and summary statistics for the five DE models reported in this study. CTE versus RHI Figure 2 depicts results from the case status DE models and subsequent gene set enrichment analysis. DE genes were primarily decreased in disease compared with RHI for both sample groups (Figure 2 A, B ). There was relatively consistent agreement in the direction of effect for genes when comparing the two disease groups with RHI controls, and the common FDR significant DE genes were all changed in the same directions (marked genes in Figure 2 C, listed in Table 2 ). However, there was little agreement in the direction of effect for gene set enrichment, where only four GO terms were significantly altered at FDR < 0.1 in both analyses, and three of those showed an opposite direction of effect (Figure 2 D, E ). GO terms that trended toward significance (i.e., nominal p -value < 0.05) similarly showed substantial discordance in direction of effect (Figure 2 E), where processes related to immune response, inflammation, blood brain barrier, extracellular matrix/membrane, and metabolism were increased in high stage disease but decreased in low stage disease. Processes increased in both were related to protein processing, metabolism, neuronal functions, and metal ion homeostasis, while those decreased in both involved mostly ribosomal processes and transcription/translation (Figure 2 E). Fig. 2 DE statistics for case versus RHI comparisons.
Early and late stage CTE versus RHI controls showed general concordance in direction of DE genes, but mixed agreement on the biological process level. A , B Distribution of log2 fold change for DE genes with FDR < 0.1 for both CTE-H versus RHI and CTE-L versus RHI, respectively. C Log2 fold change values for DE genes with FDR < 0.1 in either CTE-H or CTE-L analyses. D Normalized Enrichment Scores (NES) from Gene Set Enrichment Analysis (GSEA) of GO terms from the c5 MSigDB curated GO annotation at FDR < 0.1. Gene sets significant in both analyses are highlighted and colored based on concordance (i.e. same or different direction of effect). E Hierarchically clustered heatmap of NES for enriched GO terms from D that have nominal p -value less than 0.05 in both CTE-H versus RHI and CTE-L versus RHI. Diamond and X markers correspond to GO terms significant at FDR < 0.1 in both analyses from D. GO term names are colored red if the direction of effect is discordant between analyses. NS—GO namespace of corresponding term: BP—biological process, CC—cellular component, MF—molecular function. Table 2 Common significant genes for CTE-L and CTE-H vs RHI from Fig. 2 C. To understand which processes were unique to CTE-H or CTE-L, we filtered the GO terms to include only those with FDR < 0.05 in one analysis and nominal p -value > 0.25 in the other. This strategy identified 290 and 4 GO terms that were strongly enriched in CTE-H versus RHI and CTE-L versus RHI, respectively. The numbers of these uniquely enriched GO terms, as well as the 41 with nominal p -value < 0.05 in both analyses depicted in Figure 2 E, organized by category, are in Table 3 . Immune and inflammation processes had the highest number of enriched GO terms and were represented in both analyses, with 15 terms in common between them. All biological categories were implicated by both analyses except for cytoskeleton, apoptosis, and signaling terms, which were only identified when comparing CTE-H and RHI. Table 3 Counts of enriched GO terms grouped by high level category uniquely significant in CTE-L, in CTE-H, or implicated by both sample groups. To better summarize the large number of significantly enriched GO terms, we manually categorized GO terms found to be significant in any analysis into 13 high level categories, as depicted in Figure 3 (see Additional file 1 : S1 for a complete categorization table). This categorization strategy enables concise comparison of different biological processes between groups, including direction of effect. The clustered heatmap of Normalized Enrichment Scores (NES) for GO terms significant at FDR < 0.1 in either CTE-H versus RHI or CTE-L versus RHI depicted in Figure 3 A shows that there were terms that trended in both concordant and discordant directions of effect, and there was no obvious consistency in the concordance pattern from the perspective of categories. When the significant terms were grouped by category as in Figure 3 B, immune/inflammatory processes appeared as the most frequently increased in CTE-H versus RHI, while these processes appeared decreased in CTE-L versus RHI. Fig. 3 Enriched GO terms for case versus control. Detailed GO term enrichment of early and late CTE versus RHI showed concordant neuronal processes and opposite direction of effect for inflammatory processes. A Heatmap of normalized enrichment scores (NES) for enriched GO terms.
Fig. 3 Enriched GO terms for case versus control. Detailed GO term enrichment of early and late CTE versus RHI showed concordant neuronal processes and opposite directions of effect for inflammatory processes. A Heatmap of normalized enrichment scores (NES) for enriched GO terms. The row color bar marks significance for the corresponding columns (salmon colored bars to the right of the dendrogram indicate the corresponding NES is significant at FDR < 0.1). The rightmost color bar represents the GO category as listed in the legend. B Number of significant GO terms from A grouped by category. Terms with positive and negative NES (red and blue in A, respectively) are plotted as bars to the right and left of 0, respectively. C Significant GO terms from the neuron category in B grouped by subcategory. D Significant GO terms from the immune/inflammation category in B grouped by subcategory. Due to their relevance to this disease context, the immune/inflammation and neuron categories were further divided into subcategories based on their biological role (Figure 3 C, D). From Figure 3 C, increased neuronal development terms comprise most of the significant terms between CTE-H and RHI, while a small number of increased synaptic processes were implicated by CTE-L versus RHI. Increased innate immune processes were the most numerous in the severe CTE group, closely followed by immune cell migration, cytokine-related inflammation, and apoptotic processes, while the few significant terms in CTE-L versus RHI suggest decreased phagocytosis, innate immune, antigen presentation, and adaptive immune processes (Figure 3 D). Association with total years of play We next sought to identify genes associated with exposure as measured by total years of play, stratified into low (CTE-L+RHI) and high (CTE-H) pathology sample groups as a model of early versus late changes associated with head impact exposure. We chose to group CTE-L and RHI samples together in this analysis for three reasons. First, a goal of this study is to identify potential early changes resulting from exposure to repetitive head trauma. Although CTE-L and RHI are pathologically distinct, they represent a similar level of exposure in years of play and age at death (Figure 1 A, B), and therefore we hypothesize that similar genes are involved. Second, the brain tissue was taken from the gyral crest of the prefrontal cortex, which tends to be relatively spared of pathology in early disease compared with late, thereby avoiding most effects caused directly by pathology that likely influence the CTE-H samples. Last, we wanted to maximize our statistical power to detect differences with the total years of play variable by combining groups into a larger sample. Figure 4 A, B compare DE genes and enriched GO terms, respectively, for genes associated with total years of play. We note that there were no significant DE genes at FDR < 0.1 for either analysis (Fig. 4 A, Table 1), and unlike the disease versus RHI DE genes, there was no apparent relationship in the direction of effect of these genes between CTE-L+RHI and CTE-H, and only 3 genes were nominally significant in both analyses (Figure 4 A). GO term enrichment, on the other hand, identified many significantly enriched terms at FDR < 0.1 that showed a consistent direction of effect, and all terms significant in both analyses were increased (Figure 4 C). Most of these common GO terms are related to immune response and inflammation, but unlike in the comparison of disease versus RHI, all are increased in both CTE-L+RHI and CTE-H (Figure 4 D). With the exception of neuronal processes in CTE-L+RHI, all implicated GO terms are positively associated with years of play in both sample groups.
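As one way to picture the continuous-covariate models used for years of play (and, later, log AT8), the sketch below fits a per-gene linear model of log-scale expression on the exposure variable with age at death as a covariate, then applies Benjamini–Hochberg FDR. This is illustrative only; the study's actual normalization, modelling framework, and covariate set may differ.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def continuous_de(log_expr: pd.DataFrame, exposure: pd.Series, age: pd.Series) -> pd.DataFrame:
    """log_expr: genes x samples matrix of log2-scale expression values."""
    X = sm.add_constant(pd.DataFrame({"exposure": exposure, "age": age}))
    X = X.loc[log_expr.columns]  # align covariate rows to the sample order
    rows = []
    for gene, y in log_expr.iterrows():
        fit = sm.OLS(y.values, X).fit()
        rows.append((gene, fit.params["exposure"], fit.pvalues["exposure"]))
    res = pd.DataFrame(rows, columns=["gene", "l2fc_per_unit", "pvalue"]).set_index("gene")
    res["padj"] = multipletests(res["pvalue"], method="fdr_bh")[1]  # BH FDR (assumed)
    return res
```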
Fig. 4 DE and GSEA statistics for genes associated with years of play. There were no FDR-significant DE genes in either group, but many significant gene sets in common showed complete concordance in direction of effect between RHI + CTE-L and CTE-H. A Log2 fold change estimates of genes associated with years of play for CTE-L + RHI (Low) and CTE-H (High) sample groups. As no genes were significant after adjusting for multiple hypotheses, genes with nominal p-value less than 0.01 are plotted. B Gene set enrichment analysis of GO terms using log2 fold change. Gene sets are concordant if they are significant at FDR < 0.1 in both comparisons and are modified in the same direction. C Clustered heatmap of enriched GO terms for both CTE-H and CTE-L + RHI associated with total years of play at nominal p-value < 0.05 in both. D GO terms from C grouped by category. E Neuron category terms. F Immune/Inflammation category terms. Association with tau pathology We next sought to identify genes associated with tau protein aggregation as measured by immunohistological quantification of phospho-tau (AT8). AT8 immunostaining values were log transformed to attain normally distributed values. We again grouped CTE-L and RHI together for the reasons described above. Although the amount of AT8 differs between CTE-L and RHI, the amount of tau pathology is more similar between CTE-L and RHI than between either group and CTE-H (Figure 1 C, note log scale). A substantial number of DE genes were associated with log AT8 at FDR < 0.1 for both the CTE-L+RHI and CTE-H groups, and a large number of genes were significantly associated in both (Figure 5 A). Strikingly, nearly all significant DE genes show an opposite direction of effect when comparing CTE-L+RHI and CTE-H. This inverse relationship is also observed in the enriched GO terms induced by the DE genes (Figure 5 B), where all common enriched terms are discordant in direction of effect. The inverse relationship is also visible in Figure 5 C, where immune/inflammation processes are primarily up in CTE-H but trend down in CTE-L+RHI, and the reverse is true of many other terms. This is shown clearly in Figure 5 D, where many immune/inflammation terms are increased in association with AT8 in the CTE-H group, similar to the association seen with total years of play. However, CTE-H also shows an equally large number of neuronal terms that are decreased with increasing amounts of tau and, curiously, these processes appear to be positively associated with AT8 in the CTE-L+RHI group. Fig. 5 DE and GSEA statistics for genes associated with log AT8. There were many FDR-significant DE genes and gene sets associated with AT8, and they showed almost complete discordance in direction of effect between RHI + CTE-L and CTE-H. A Log2 fold change estimates of genes associated with AT8 for CTE-L + RHI (Low) and CTE-H (High) sample groups. B Gene set enrichment analysis of GO terms using log2 fold change. Gene sets were discordant if they were significant in both comparisons and were modified in opposite directions. C Clustered heatmap of enriched GO terms for both CTE-H and CTE-L + RHI associated with log AT8 at nominal p-value < 0.05 in both. D GO terms from C grouped by category. E Neuron category terms. F Immune/Inflammation category terms. Risk variant effects We next investigated how risk variants of APOE and TMEM106B affect the low and high pathology groups. A dominant encoding for the APOE e4 allele was used to separate samples in each sample group, i.e. individuals with at least one e4 allele were included in the risk group.
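The two genotype encodings — dominant for the APOE e4 allele, and (as the next paragraph describes) recessive for the non-protective TMEM106B rs3173615 C allele — can be sketched as follows; the genotype string formats are hypothetical.

```python
import pandas as pd

def apoe_risk(genotypes: pd.Series) -> pd.Series:
    """Dominant encoding: any e4 allele puts a subject in the risk group."""
    return genotypes.str.contains("e4")

def tmem106b_risk(genotypes: pd.Series) -> pd.Series:
    """Recessive encoding: only C/C homozygotes form the risk group."""
    return genotypes.str.upper() == "CC"

apoe = pd.Series(["e3/e3", "e3/e4", "e4/e4"])
print(apoe_risk(apoe).tolist())  # [False, True, True]
```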
The TMEM106B SNP rs3173615 G allele has been shown to be protective [ 8 ]. Thus, individuals homozygous for the non-protective C allele were grouped to form the risk carriers (i.e. a recessive encoding for the non-protective allele). This encoding makes comparing the results of the two variants more straightforward, as associations of genes and GO terms have the same interpretation with respect to the risk carriers. DE analyses were conducted comparing the risk versus non-risk subjects within each of the CTE-L+RHI and CTE-H groups, resulting in four sets of DE results. Very few genes were significantly associated at FDR < 0.1: only 2 met this significance level in the APOE comparisons and none in the TMEM106B comparisons (see Table 1). However, 196 and 76 GO terms were significantly associated at FDR < 0.1 with APOE risk for CTE-L+RHI and CTE-H, respectively, and 3 and 262, respectively, were significantly associated with TMEM106B risk. It is interesting to note that the number of significant GO terms is higher for APOE risk in the CTE-L+RHI group, which has the smallest number of samples, and higher in the CTE-H group for TMEM106B risk, while the remaining two analyses yielded relatively few results. We next compared the log2 fold change associated with the risk allele group of genes between variants and sample groups (Figure 6 A–D). In principle, if the risk variants modulate disease risk through a common mechanism, the DE genes identified for the different risk genes and sample groups should show some similarity. Very few nominally significant genes overlapped between any of the comparisons, as illustrated in the Venn diagrams of Figure 6. However, similar to the inverse relationship observed between CTE-L+RHI and CTE-H in the AT8 comparison, we observed that the effect size of the overlapping genes was inverted between these groups for APOE risk carriers (Figure 6 A scatter plot). The relationship was less consistent when comparing TMEM106B risk carriers between CTE-L+RHI and CTE-H, where there were both concordant and discordant directions of effect in the common genes (Figure 6 B scatter plot). Curiously, an inverse relationship is also observed when comparing the APOE and TMEM106B risk carriers within the CTE-L+RHI group (Figure 6 C scatter plot). There is no consistent relationship between the genes for the two risk carriers within CTE-H (Figure 6 D scatter plot). Fig. 6 DE and GSEA statistics for genes and gene sets associated with APOE ε4 and TMEM106B risk alleles. Genes associated with risk alleles in both sample groups were largely disjoint, and gene sets associated with APOE and TMEM106B risk alleles were primarily found in CTE-L + RHI and CTE-H, respectively. A, B Venn diagram of nominally significant (p-value < 0.05) gene overlap and scatterplot of L2FC of CTE-L + RHI versus CTE-H sample groups for the APOE (A) and TMEM106B (B) risk alleles, respectively. C, D Venn diagram of nominally significant (p-value < 0.05) gene overlap and scatterplot of L2FC of APOE ε4 versus TMEM106B risk allele for CTE-L + RHI (C) and CTE-H (D), respectively. E NES heatmap of GO terms significant at FDR < 0.1 in at least one condition. Bars on the left of the heatmap indicate significance of the corresponding NES in the heatmap columns; GO IDs that were significant at FDR < 0.1 are in red. F, G GO pathways at FDR < 0.1 grouped by category. Values left of zero correspond to the number of significant pathways with negative NES.
Values right of zero correspond to the number of significant pathways with positive NES. Similar to the gene set enrichment analyses presented earlier, there was a mix of concordance and discordance in the direction of effect of GO terms implicated by the DE genes. The clustered heatmap in Figure 6 E depicting Normalized Enrichment Scores for each analysis showed little consistent pattern of concordance between them, except to note that the sample groups clustered together more closely than the variant groups. The discordance in direction of effect for APOE variants across sample groups was also apparent (1st and 3rd columns of the heatmap), consistent with the inverted gene expression relationship from Figure 6 A. More generally, the patterns observed when comparing concordance between pairs of columns in the heatmap are consistent with the scatter plots by inspection. There was a surprisingly large number of significantly enriched GO terms associated with APOE risk carriers in the CTE-L+RHI group in Figure 6 F. No other analysis conducted in this study found so many significant results in this sample group, which was notable considering it was the sample group with the smallest sample size. Many GO terms in various categories, most notably neuron, are negatively associated with APOE risk carriers, meaning neuronal gene expression overall was decreased in CTE-L+RHI individuals who have the APOE risk allele. In contrast to the results from APOE, the pattern of biological processes implicated by TMEM106B was more consistent with other comparisons made in this study, namely that CTE-H exhibited the majority of the significantly associated GO terms, which were primarily composed of decreased neuron and increased immune/inflammation categories (Figure 6 G). Validation of associations with qPCR To provide orthogonal validation of these findings, we examined previously generated qPCR expression quantification on an a priori set of 33 genes known to be implicated in CTE (full gene list in Additional file 2: Table S2) in a subset of 54 participants (37 CTE-H, 9 CTE-L, and 8 RHI). These genes were chosen based on our previous investigations as being involved more generally in neurodegenerative disease processes, and as such many were not significantly DE in our mRNASeq CTE analyses (see Additional file 3: Table S3). We therefore focused on comparing the direction of effect (i.e. log2 fold change) between the qPCR and DE genes to assess concordance, and tabulated the results into the confusion matrices found in Table 4. Table 4 DE genes vs qPCR log2 fold changes for all five models reported in this study. The level of agreement between qPCR and mRNASeq DE varied across sample groups and models. Some comparisons showed very little agreement in the direction of effect, particularly the CTE-L versus RHI and total years of play models. Note from the mRNASeq analysis of total years of play above (Figure 4) that there were very few DE genes implicated and very little agreement on the direction of effect, which was consistent with the results of Table 4 b. On the other hand, some comparisons show very high concordance, the most noteworthy being those for AT8 (Table 4 c). Here the unexpected inverse relationship of CTE-L+RHI and CTE-H with AT8 (see Figure 5 A) was also observed; note that most genes are decreased in CTE-L+RHI and increased in CTE-H, and the Fisher's exact test attained significance for CTE-L+RHI and trended toward significance for CTE-H.
The Spearman correlation of log2 fold changes for AT8 was modest but significant at p-value < 0.05. The TMEM106B risk allele comparisons (Table 4 e) also show substantial agreement. Overall, the concordance of results from qPCR and mRNASeq was remarkable, especially considering the genes were chosen independently. Comparative analysis To aid in summarizing the results presented in this study, subpanels of GO category enrichment for each of the five primary analyses are included in Figure 7. These plots depict the number of significantly enriched GO terms grouped by high level category, as in the analysis-specific figures presented earlier. The subfigures have been annotated to emphasize several salient patterns observed across analyses. Increased immune/inflammation processes were implicated in all comparisons involving CTE-H except APOE risk. The CTE versus RHI and total years of play analyses (Figure 7 A, C) were nearly devoid of neuronal processes, while in the AT8 and TMEM106B risk comparisons, neuronal processes were substantially decreased for CTE-H (Figure 7 B, D). Comparisons with CTE-L had very few enriched GO categories except for APOE risk, where there was a substantial decrease in immune/inflammatory categories. Fig. 7 Enriched GO term categories for all five primary analyses presented in this work, as found in previous figures. Increased inflammation was associated with all CTE-H analyses except APOE risk carriers, and decreased neuronal processes were only associated with AT8 and TMEM106B risk. A CTE versus RHI, B AT8, C total years of play, D TMEM106B risk carriers versus non-risk carriers, and E APOE risk carriers versus non-risk carriers. Dashed boxes and bolded text are annotated to aid interpretation. Discussion This study presents the largest transcriptome-wide gene expression analysis of post-mortem human brain in CTE to date. We set out to identify gene expression patterns observed in low- (CTE-L) and high-stage (CTE-H) CTE as a model of early versus late disease as they relate to the presence of pathology (i.e. case vs. RHI), the amount of repetitive head impact exposure (i.e. total years of play), quantitative measures of tau pathology in affected brain tissue, and genetic variants known to influence CTE symptoms and severity. We showed that there were substantial gene expression effects in individuals with severe CTE across all of these axes, while the comparisons involving RHI and CTE-L had fewer significant results. Case versus control comparisons yielded mixed concordance in the direction of effect of implicated pathways, while processes associated with total years of play were in good agreement between the low and high pathology sample groups. In contrast, the processes associated with the amount of tau pathology had almost exactly opposite directions of effect between these groups. Furthermore, the DE genes associated with APOE and TMEM106B risk variants did not have a high degree of overlap and suggest distinct processes. In every comparison except APOE risk carriers, substantial neuroimmune and neuroinflammatory processes were positively associated with conditions that increase risk of severe disease in CTE-H subjects. Processes in the innate immune response subcategory were the most numerous, but many different components of the immune response, including cytokine activity and apoptosis, were also well represented in these comparisons.
While some inflammatory processes implicated when comparing CTE-H APOE risk carrier groups were consistent with these findings with disease, RHI, tau pathology, and TMEM106B risk, the small number of processes identified may suggest that the effects of APOE risk variants largely precede the development of severe disease. This idea is further supported by the finding that the comparison of APOE risk carriers in CTE-L+RHI samples produced many significantly enriched gene sets, whereas no other comparison implicated nearly as many for this sample group. Curiously, these processes, most notably immune/inflammation, appear reduced in APOE risk carriers relative to non-risk carriers in this sample group, forming an almost exact mirror image of the increased processes observed in CTE-H. This suggests the possibility that the APOE risk variant may impair these processes in early disease, which in turn might predispose risk carriers to developing more severe pathology over time than non-risk carriers. An alternative explanation might be that the APOE risk allele causes an aberrant increase in the inflammatory state of the brain which is then compensated by homeostatic mechanisms that are competent in early disease but become less effective over time, leading to dramatically increased inflammation in late stage disease. A second noteworthy pattern we observed is that neuronal gene expression decreases with increasing tau pathology and with the presence of TMEM106B risk. It is interesting to note that comparatively few neuronal processes are implicated when comparing CTE-H and CTE-L to RHI or when examining gene expression associated with total years of play, and almost none were associated with APOE risk status. Synapse was the neuronal subcategory with the largest number of enriched gene sets across these comparisons, followed by neuronal development. With the exception of CTE-L versus RHI, synaptic genes are decreased with increasing disease risk factors, suggesting that synaptic density is reduced in affected tissues, as has been recently shown in mouse models of RHI [ 29 ]. TMEM106B risk, in particular, was associated with decreased expression of many neuronal process pathways in CTE-H, which is consistent with previous reports demonstrating an association with decreased neuronal density [ 30 ]. In addition, the previously demonstrated increased variation in synaptic density in CTE suggests that synapse turnover may be a feature of RHI and CTE and may be associated with greater variation in synapse related gene expression [ 31 ]. The amount of head injury exposure as measured by total years of play appears to have only a weak effect on individual gene expression across individuals, but the biological processes they implicate are relatively consistent when comparing exposure groups. A given individual’s exposure to repetitive head impacts may have occurred many years prior to death when these samples were collected, and while we adjusted for the effects of age at death as well as possible, this duration paired with diverse life experiences and disease progression could easily distort any consistent signal on the gene level. The consistency between sample groups on the biological process level is therefore noteworthy and suggests there may indeed be a common response to exposure detectable even many years after exposure (Table 5 ). 
Table 5 Sample statistics. In contrast with years of play, tau pathology in the post-mortem samples captures the immediate condition of the brain at the time gene expression is measured. Indeed, the presence of tau pathology appears to have a strong effect on gene expression in both the CTE-L+RHI and CTE-H sample groups. Thousands of genes and hundreds of enriched GO term gene sets are associated with AT8 immunopositivity, primarily implicating inflammatory and neuronal processes. The striking and nearly complete inverse relationship in both gene expression fold changes and gene set enrichment direction suggests a qualitatively different effect of pathology, or of precursors to the development of pathology, in individuals with low compared with high levels of exposure. Since the amount of tau pathology is relatively low but increased in CTE-L compared with RHI, this suggests a fundamentally different process affects these groups than in severe disease. With the exception of inflammation, development, and ECM/membrane, all biological processes trend negatively with increasing amounts of tau. We recently found similar gene expression changes comparing sulcal versus gyral crest in the dorsolateral prefrontal cortex of CTE subjects [ 32 ]. The concordant effects of years of play and the discordant effects of tau explain the mixed concordance observed when comparing case versus RHI. Because the amount of tau pathology and years of play vary among individuals in both low and high exposure groups, we may therefore interpret the case versus RHI comparison as a convolution of these disparate effects. Although years of play and pathology are highly correlated, this motivates considering them as separate effects that modulate disease expression, which may have important therapeutic implications. The relative lack of a strong gene expression signal in APOE risk carriers is somewhat surprising, considering evidence of the role this gene is thought to play in the development of tau pathology in CTE and AD. Also surprising is the lack of overlap of DE genes and the discordance in the direction of effect between APOE and TMEM106B risk carriers versus non-risk carriers. Risk alleles of both genes increase the risk of developing severe CTE pathology, but the low overlap of the DE genes and biological processes suggests that the molecular mechanisms underlying this increased risk are distinct. As in the AT8 comparison, there is an inverse relationship between the nominally significant DE genes comparing CTE-L+RHI versus CTE-H APOE risk groups, as well as TMEM106B versus APOE in the CTE-L+RHI sample group. The similarity of expression profiles between tau and TMEM106B does not appear to be driven by the amount of tau pathology, as the distribution of tau in the TMEM106B risk carriers and non-carriers does not significantly differ. The reasons for this are unclear, but they further suggest that a different molecular process is at play when considering the effects of these genes in combination with tau and exposure. The relatively small number of FDR-significant DE and GSEA results for the CTE-L versus RHI and CTE-L+RHI analyses in this study is likely due in part to low statistical power on account of relatively small sample size, but, since the results are sparse, the influence of false positives is minimal.
On the other hand, the relative paucity of results compared with the severe disease comparisons may indicate that early molecular changes in the brains of those with CTE are in fact similar to those of individuals who experienced a similar level of repetitive trauma exposure but lack specific CTE pathology. This would suggest a model where the pathology itself is a driver of molecular changes in later disease stages, which is supported by the observation that the expression of many genes is associated with the amount of tau pathology. While our understanding of the antemortem precedents in the CTE brain remains in its infancy, it is important to consider these findings within the broader context of basic biological and clinical studies in neurotrauma. Gene expression changes are strongly influenced by epigenetic alterations, such as histone modifications and DNA methylation, in response to traumatic brain injury [ 33 , 34 ]. These alterations in the brain have been investigated in rodent models, and a few studies have examined antemortem human biofluids including blood and cerebrospinal fluid. No studies to date have investigated epigenetic changes in postmortem CTE brain. The lack of availability of antemortem brain tissue is a fundamental limitation of this research, but biofluid correlates of the disease processes shown in this study would provide strong evidence of the relevance of these findings and are the subject of future studies. Such correlates are badly needed, as they could inform diagnostic and prognostic assessments and therapeutic approaches where no reliable markers are currently known [ 35 ]. Conclusions In conclusion, these data present compelling evidence of widespread gene expression changes in late stage CTE and less pronounced changes in early disease and in those with repetitive head impact exposure but without CTE pathology. Furthermore, these results suggest individuals with low exposure and little to no pathology experience a different set of molecular processes than those with late disease, as well as a distinct association with tau pathology and genetic risk factors. Therapeutics and biomarkers developed against late-stage signatures might not be effective for individuals early in the progression of disease. Future studies should endeavor to further characterize the active disease process in younger individuals with less exposure. Availability of data and materials The high throughput sequencing data used in this study is publicly available on Gene Expression Omnibus accession GSE193407. All code and processed files are accessible at . Abbreviations CTE: Chronic traumatic encephalopathy APOE: Apolipoprotein E TMEM106B: Transmembrane protein 106B DE: Differential expression GO: Gene ontology
Millions of people, including athletes who play contact sports, members of the military and victims of domestic violence, are exposed to repetitive head impacts (RHI), the primary risk factor for developing chronic traumatic encephalopathy (CTE). Symptoms of CTE often manifest years to decades after exposure to RHI, and very little is known about what happens in the brain in the interim. The brains of people who die with CTE are marked by the accumulation of a protein called tau, the same protein found to aggregate in the Alzheimer's disease (AD) brain. The amount of tau pathology in CTE correlates with the severity of disease: early-stage brains have very little pathology, while late-stage brains show severe, widespread involvement. The amount of RHI exposure, which for athletes can be measured in terms of the number of years they played a violent sport, as well as genetic risk variants, influences the extent of tau pathology and associated disease severity. However, the molecular and genetic mechanisms that underlie the development of disease, and the extent to which those effects are consistent throughout disease progression, are poorly understood. To address these questions, researchers from Boston University Chobanian & Avedisian School of Medicine conducted a genetic analysis of the largest collection of post-mortem CTE brains, primarily from professional athletes, donated to the BU CTE Brain Bank. They found evidence that early and late CTE brains are similar in some ways but dramatically different in others. In particular, neuro-inflammation and neuronal stress are strongly implicated in disease, albeit to different extents and in different directions depending on the severity of disease. This is the first study to show that the molecular pathways involved in early CTE are different from those involved in late-stage disease. "A better understanding of the early CTE disease process may lead to more informative diagnostics, biomarkers and ultimately therapies," said co-corresponding author Adam Thomas Labadorf, Ph.D., assistant professor of neurology. "In addition, since the type of pathology found in the brains of people with CTE is similar to that found in AD, a better understanding of how the brain responds to this kind of pathology in CTE is likely to better inform our understanding of AD as well." The researchers studied prefrontal cortex tissue from 76 individuals (66 CTE, 10 control) who donated their brains to the BU UNITE Brain Bank. The sample set contained brains that spanned the full range of disease severity, affording the researchers the unique opportunity to see whether gene expression in people with early-stage CTE differs from that in people with late-stage disease. The researchers generated gene expression data for each individual and then performed bioinformatic and statistical analyses of different subsets of these samples to look for gene expression patterns associated with clinical, histological and genetic markers relevant to CTE. They then identified genes and biological processes associated with total years of play as a measure of exposure, the amount of tau pathology present at time of death, and the presence of APOE and TMEM106B risk variants. The researchers found that substantial gene expression changes were associated with severe disease for most of these factors, primarily implicating diverse, strongly involved neuro-inflammatory and neuro-immune processes.
In contrast, low pathology groups had far fewer gene expression changes, implicated few neuroimmune or inflammatory processes, and for some factors showed striking differences when compared with severe disease. According to the researchers, if the active disease process in early disease differs substantially from late-stage disease, this could have important implications for both diagnostic and therapeutic targets. "This might explain why therapeutic targets identified from late-stage human tissue have largely failed to influence disease progression in clinical trials for many neurodegenerative diseases. In addition, if there are distinct markers of early disease progression that are absent in late disease, this would provide an opportunity to explore different diagnostics and biomarkers that we otherwise wouldn't know to look for," explained co-corresponding author Thor Stein, MD, Ph.D., a neuropathologist at VA Boston Healthcare System and associate professor of pathology and laboratory medicine. These findings appear online in the journal BMC Medical Genomics.
10.1186/s12920-023-01471-5
Nano
An analysis of the system-wide costs and benefits of using engineered nanomaterials on crop-based agriculture
Leanne M. Gilbertson et al. Guiding the design space for nanotechnology to advance sustainable crop production, Nature Nanotechnology (2020). DOI: 10.1038/s41565-020-0706-5 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/s41565-020-0706-5
https://phys.org/news/2020-06-analysis-system-wide-benefits-nanomaterials-crop-based.html
Abstract The globally recognized need to advance more sustainable agriculture and food systems has motivated the emergence of transdisciplinary solutions, which include methodologies that utilize the properties of materials at the nanoscale to address extensive and inefficient resource use. Despite the promising prospects of these nanoscale materials, the potential for large-scale applications directly to the environment and to crops necessitates precautionary measures to avoid unintended consequences. Further, the effects of using engineered nanomaterials (ENMs) in agricultural practices cascade throughout their life cycle and include effects from upstream-embodied resources and emissions from ENM production as well as their potential downstream environmental implications. Building on decades-long research in ENM synthesis, biological and environmental interactions, fate, transport and transformation, there is the opportunity to inform the sustainable design of nano-enabled agrochemicals. Here we perform a screening-level analysis that considers the system-wide benefits and costs for opportunities in which ENMs can advance the sustainability of crop-based agriculture. These include their on-farm use as (1) soil amendments to offset nitrogen fertilizer inputs, (2) seed coatings to increase germination rates and (3) foliar sprays to enhance yields. In each analysis, the nano-enabled alternatives are compared against the current practice on the basis of performance and embodied energy. In addition to identifying the ENM compositions and application approaches with the greatest potential to sustainably advance crop production, we present a holistic, prospective, systems-based approach that promotes emerging alternatives that have net performance and environmental benefits. Main The global agriculture system is complex and integrated, and touches nearly all aspects of our daily lives, both directly and indirectly. Agriculture is the cornerstone of a prosperous global society and, therefore, it is critical to maintain and protect it for future generations. Sustainable development of the agriculture and food system has become a priority of prominent national and global organizations. For example, the United Nations Food and Agriculture Organization recognizes the need to advance sustainable agriculture alongside goals to eradicate food insecurity and malnourishment 1 , and half of the United Nations Sustainable Development Goals pertain to food systems, which include directly addressing hunger (Goal 2), clean water (Goal 6) and responsible consumption and production (Goal 12) 2 . In addition, the National Academies of Engineering identify nitrogen (N)-cycle management as one of 14 Grand Challenges for Engineering 3 , and the National Science Foundation recently recognized the critical imbalance of the anthropogenically altered phosphorus (P) cycle 4 . Given the current resource inefficiencies of food-system processes—upstream processing energy, on-farm agrochemical and water resources, and downstream food distribution and waste—innovative approaches are needed to increase the agricultural productivity while minimizing its resource intensity and environmental implications. The 2020 forecasted global agrochemical annual use is 120 × 10 6 t (metric tonnes) for N-based fertilizers 5 , 50 × 10 6 t for phosphate-based fertilizers 5 and over 2.6 × 10 6 t for pesticides 6 . 
Without efforts to mitigate the environmental impacts of agriculture and technological advancements, food-system impacts (that is, greenhouse gas emissions, cropland and water use, and N and P applications as fertilizers) are estimated to increase 50–90% by 2050 7 . Agrochemicals, which include fertilizers and pesticides, are critical to crop development, and conventional farming practices often overapply to maximize crop yields. Currently, more than 50% of the applied N and 85% of the applied P are not assimilated by crops 8 , 9 , and less than 10% of the applied pesticides reach their targets 10 , 11 , 12 , 13 . Although this practice of overapplication is a feasible on-farm practice in many parts of the world under current raw material costs and availability (for example, primary fertilizer prices per tonne dropped an average of 40% between 2013 and 2017/2018 14 , 15 to as low as US$350 t –1 of potash), more efficient delivery technologies will be needed in the future and are estimated to result in multibillion dollar benefits 16 . Net losses of agrochemicals to the environment introduce contamination to the surrounding ecosystems—soils, surface water and groundwater—which carries substantial adverse human health and environmental consequences (for example, anoxic water bodies, ecotoxicity, loss of biodiversity and contaminated drinking-water sources) 17 , 18 , 19 , 20 . Further, agrochemical losses translate into a substantial waste of embodied resources, both materials and energy. For example, the energy-intense ammonia production by the Haber–Bosch process requires 41.8 GJ t –1 of ammonia 21 , 22 , which translates into 158 PJ annually (based on 2014 US consumption values 23 ). Ammonia production accounts for 80% of the energy consumed by the N-fertilizer industry 24 . The increased investments in innovation and technology that increase N-utilization efficiencies are predicted to aid in achieving future targets with respect to both managing the N cycle and meeting food demands 25 . This will include technical innovations, such as nano-enabled agrochemicals, and the following analysis demonstrates the criticality of considering environmental trade-offs of the entire technology life cycle. Nanoscale material properties can be leveraged to develop agricultural intensification solutions, which is the increase in food production per unit resource (for example, agrochemical and water input, and land area) 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 . Although nanotechnology is also being developed for use across the food system (for example, in chemical and biological sensors, food packaging and water treatment) 12 , 27 , 28 , 35 , 36 , we focus here on the potential for nano-enabled solutions to increase agrochemical use efficiency and enhance crop production. Specifically, we compare the performance, environmental and economic trade-offs of engineered nanomaterials (ENMs) and conventional alternatives in three applications, (1) soil amendments, (2) seed coatings and (3) foliar treatment. The benefits of using ENMs to facilitate crop growth have been reported and attributed to their potential for a targeted delivery to plants and improvements in agrochemical utilization efficiencies 28 , 37 , 38 , 39 , 40 , 41 , 42 , 43 . 
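A back-of-envelope check of the Haber–Bosch figures quoted above (41.8 GJ per tonne of ammonia; 158 PJ annually on a 2014 US consumption basis) is sketched below. The implied annual ammonia tonnage is our inference from those two numbers, not a value stated in the text.

```python
# Energy figures quoted in the text
E_PER_TONNE = 41.8e9   # J per tonne of ammonia (Haber-Bosch)
E_ANNUAL = 158e15      # J per year (2014 US consumption basis)

implied_tonnage = E_ANNUAL / E_PER_TONNE
print(f"implied ammonia production: {implied_tonnage / 1e6:.1f} Mt per year")  # ~3.8 Mt
```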
Yet, the development of nanotechnology to advance sustainable crop agriculture must be pursued concomitantly with consideration of the associated environmental footprints to ensure that a net sustainability gain is achieved (for example, that the environmental burden is not shifted to other life-cycle stages). This analysis of existing studies in these three applications aims to identify (1) the most promising uses of ENMs to sustainably advance crop production and (2) which ENMs offer the greatest performance-to-impact ratio within each application. The results highlight certain applications that show more promise to obtain a net benefit and provide holistic guidance for the design of sustainable nano-enabled solutions. A systems approach to evaluating environmental trade-offs Solutions that address agriculture sustainability must consider the entire life cycle of the product or process. This perspective emerged from numerous historical accounts of promising new technologies developed indiscriminately and focused on optimizing performance in the production phase. Potential environmental and human health consequences, particularly those that resulted from other life-cycle stages, were considered secondary rather than being incorporated as a design objective. One example is corn–ethanol biofuel as an alternative to fossil-based fuels. The unintended consequences of this solution for energy independence and climate change mitigation include substantial increases in corn prices (between 2005 and 2012 prices increased fourfold 44 ), substantial shifts in land use 45 , diversion of a food source (to ethanol production) and intensified resource requirements (for example, water, fertilizer and pesticides 46 ). Further, the single-minded focus failed to account for the contributions that other life-cycle stages might make to overall system performance. For example, if consumers rebalanced their diets to more closely adhere to recommended dietary guidelines, particularly with respect to sources of protein, water and nutrient requirements could be reduced by as much as 30 and 40%, respectively 47 , 48 . The intimate connection between food, energy, water and humans calls for a proactive systems-level approach to design and develop sustainable solutions 20 so that unintended consequences are neither realized during use nor shifted to other life-cycle stages. Trade-off evaluation of proposed on-farm applications As a mature industry, fertilizer production is efficient. However, the fertilizer industry consumes large amounts of energy in the fixation of N 2 from the atmosphere via the Haber–Bosch process (see above), and the application of fertilizer is far from efficient 49 . This results in economic loss, loss of valuable embodied resources and environmental damage (for example, almost 65% of US estuaries and coastal waters are negatively affected by excess nutrient inputs 50 ). ENMs are proposed to promote plant growth with reduced nutrient inputs through more efficient use in soil, seed and foliar treatment applications 37 , 38 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 . Compared with conventional fertilizers, ENMs could have higher unit costs and a high resource intensity (for example, embodied energy, embodied water, and aqueous and atmospheric emissions from upstream processing) 59 , 60 , 61 . To preclude introducing unintended consequences, it is critical to consider these embodied resources when evaluating the potential of ENMs to improve nutrient use efficiency.
Here we use upstream energy trade-offs as a metric for environmental impact comparison among different ENMs and conventional alternatives within each application: soil amendments, seed coatings and foliar treatment. Cumulative energy demand (CED, megajoules per unit input) is a commonly used metric because it is relatively straightforward to determine with high certainty compared with other metrics, and it correlates well with other environmental indicators 62 . Further, CED has been used by others to assess the relative feasibility of nano-enabled solutions against their non-nano alternatives 63 , 64 , 65 , 66 . In addition to utilizing CED as the primary metric for comparison, another unique approach is taken to evaluate trade-offs specifically for soil amendments by calculating the concentration of ENMs that is equivalent in energy to the N-based fertilizer being offset through their use (this relationship is sketched in code below). This concentration is therefore dependent on the embodied energy of the ENM: ENMs with a lower embodied energy can be applied at a higher concentration in the soil for an equal amount of N-fertilizer offset. The analysis highlights the orders-of-magnitude difference between these concentrations and those being used in laboratory studies. For the other two applications (seed coatings and foliar treatment), the CED associated with the concentration of the ENM used in a given study to achieve a performance improvement is used as a comparison across all proposed ENMs and conventional or bulk (non-nanomaterial) alternatives. These different approaches were pursued to offer useful guidance for the continued design and development of these applications and to identify the ENMs that have the greatest potential to sustainably advance crop production. Future analysis, which includes additional impact categories and the incorporation of use-phase data at scale, can be used to more comprehensively assess the net benefits of ENMs in these applications. ENMs as soil amendments The use of soil amendments to improve soil properties that favour crop growth is a common agricultural practice. Soil amendments generally have three objectives: (1) to provide nutrients, (2) to prevent insect- or microbial-induced disease and/or (3) to increase the availability of nutrients in the surrounding soil system. Proposed ENM amendments can serve directly as a nutrient source (as single or mixed-composition particles), as carriers of nutrients that improve the nutrient utilization efficiency of plants and lower environmental impacts, or indirectly enhance nutrient and/or water uptake through impacts on the crop 33 , 35 , 67 . The high surface area of ENMs and the porous structures of some compositions enable high nutrient loading 68 . Metal- and metal-oxide-based ENMs possess inherent slow-release mechanisms that increase uptake efficiency 69 , 70 , 71 , 72 , 73 , 74 . Although a range of crop production benefits have been demonstrated in these designs, soil is complex, and the underlying mechanisms through which ENM soil amendments induce beneficial, rather than adverse, impacts remain unresolved 35 and are an active research area. Regardless of the mechanism, ENM soil addition is an opportunity to offset conventional fertilizer inputs to achieve the same or higher yields with fewer inputs and less nutrient runoff.
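The break-even calculation described above — the maximum ENM soil concentration at which the ENM's embodied energy equals the energy of the N fertilizer it offsets — reduces to a one-line relationship. The sketch below uses placeholder numbers, not the study's LCA inventory values (those come from its Supplementary Data).

```python
def break_even_concentration(fert_energy_per_m3: float,
                             offset_fraction: float,
                             enm_ced: float) -> float:
    """Maximum energy-neutral ENM application rate, in kg of ENM per m3 of soil.

    fert_energy_per_m3 : embodied energy of applied N fertilizer, MJ per m3 of soil
    offset_fraction    : share of the N fertilizer replaced by the ENM (0-1)
    enm_ced            : cradle-to-gate cumulative energy demand, MJ per kg of ENM
    """
    return offset_fraction * fert_energy_per_m3 / enm_ced

# Placeholder example: a 30% offset of a 10 MJ m-3 fertilizer input by an ENM
# with a CED of 300 MJ kg-1 breaks even at 10 g of ENM per m3 of soil.
print(break_even_concentration(10.0, 0.30, 300.0) * 1e3, "g m-3")
```

ENMs with a lower CED therefore break even at proportionally higher soil concentrations, which is the comparison drawn in Fig. 1b below.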
Given the potential energy saving in the fertilizer offset and the high embodied energy of ENMs, we established a relationship (Methods) to identify and demonstrate the trade-offs of this application through two sets of analyses (Fig. 1a,b ). The ENM-embodied energy was obtained from cradle-to-gate life-cycle assessments (LCAs) of multiple synthesis methods (details in Methods and Supplementary Data 1 ). The range in the embodied resources required for different synthesis methods of a given ENM and the reported soil application concentrations used are reflected in the whiskers for each data point. The average embodied energies of different ENMs added per volume of soil (MJ m – 3 ) at previously reported concentrations (Supplementary Data 4 ) are shown as orange data points in Fig. 1a with the reported range represented by the whiskers. The solid line represents the current level of embodied energy of adding N fertilizers for crop production (here shown for corn). For scenarios in which ENMs offset a percentage of the fertilizer input, the available energy level (that is, the break-even point) shifts. For example, substituting 1% of the present N-fertilizer usage with TiO 2 nanoparticles (which have the lowest embodied energy per kilogram among the ENMs evaluated) results in a greater net energy use (as seen by comparing the nano-TiO 2 embodied energy range with the break-even energy for 1% offset (Fig. 1a , aquamarine line)). Fig. 1: From an embodied energy perspective, ENM soil amendments at the currently studied application concentrations do not present a sustainable alternative to conventional fertilization practice. a , Embodied energy of the ENM applications in soil (MJ m – 3 ) based on reported concentrations (orange) compared with the embodied energy of the current N-fertilizer application (solid line). Dotted lines show the available energy limit when ENMs substitute a portion of N fertilizers, here 1, 30 and 50% are shown. Whiskers represent the lowest and highest possible combination of concentrations and ENM synthesis CEDs. b , Concentrations of ENMs used as soil amendments to improve crop production found in the literature (orange) and theoretical maximum ENM concentrations that result in a net neutral energy scenario (blue). Data points represent averages. For theoretical values, the ENM-embodied energy was obtained from cradle-to-gate life-cycle-impact assessments, which include multiple different synthesis methods (details in Methods and Supplementary Data 1 ), with the range in values reflected in the whiskers for each ENM. The data points represent a 30% N-fertilizer replacement and blue shaded ranges represent the potential offset range of 1–50%. Whiskers for the empirical data represent the minimum and maximum concentrations obtained from the literature. Blue squares represent theoretical maximum concentrations based on reported theoretical (but not yet realized) minimum CEDs for the CNTs. Assumptions that surround the N-fertilizer use are based on US corn production with an average embodied energy for all the major N-based fertilizers: urea, diammonium phosphate and urea ammonium nitrate. It is assumed that the applied ENMs were only active for a single growing season with equivalent yields for both scenarios. 
Analyses to derive the theoretical concentrations for other impact categories are represented for cerium oxide nanoparticles and SWCNTs, as triangle data points: eutrophication potential (red), acidification potential (green) and global warming potential (aquamarine).
A team of researchers affiliated with several institutions in the U.S. has conducted an analysis of the system-wide costs and benefits of using engineered nanomaterials (ENMs) in crop-based agriculture. In their paper published in the journal Nature Nanotechnology, the group describes their analysis and what they found. Scientists have come to realize that vast improvements in agricultural practices are needed if future generations are going to be able to grow enough food to feed the expected rise in population, and they have increasingly turned to technology-based solutions rather than just looking for biological advances. One such approach involves the design and use of ENMs on crops as a means of improving pest control and fertilizer efficiency. Prior research has shown that some ENMs can be mixed into the soil as a form of pest control or as a means of diverting fertilizer directly to the roots, reducing the amount required. In a similar vein, some prior research has shown that ENMs can be applied to parts of the plant above ground as a means of pest control. What has been less well studied, the researchers note, is the overall impact of ENMs on crops and the environment. In this new effort, they assessed the impact of ENM use on various crops and on the environment in which they were used. The researchers examined metal-oxide-based ENMs and carbon nanotubes designed to improve fertilizer efficiency, and found their use was no better than leaving crops untreated. They did, however, find that when ENMs were used as seed coatings or as a leaf spray, they provided additional protection from pests. They next attempted to assess the impact of ENM use on the environment and found a potential for such materials to make their way into the water system. They also found a potential for ingestion by humans eating the treated crops, with unknown consequences. But they also found that there appeared to be more of an impact on the environment during the manufacture of the ENMs than during their use. They conclude by suggesting that much more research is required on ENMs before they are used in commercial products.
10.1038/s41565-020-0706-5
Earth
'Fossil earthquakes' offer new insight into seismic activity deep below Earth's surface
L. R. Campbell et al. Earthquake nucleation in the lower crust by local stress amplification, Nature Communications (2020). DOI: 10.1038/s41467-020-15150-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-15150-x
https://phys.org/news/2020-03-fossil-earthquakes-insight-seismic-deep.html
Abstract Deep intracontinental earthquakes are poorly understood, despite their potential to cause significant destruction. Although lower crustal strength is currently a topic of debate, dry lower continental crust may be strong under high-grade conditions. Such strength could enable earthquake slip at high differential stress within a predominantly viscous regime, but requires further documentation in nature. Here, we analyse geological observations of seismic structures in exhumed lower crustal rocks. A granulite facies shear zone network dissects an anorthosite intrusion in Lofoten, northern Norway, and separates relatively undeformed, microcracked blocks of anorthosite. In these blocks, pristine pseudotachylytes decorate fault sets that link adjacent or intersecting shear zones. These fossil seismogenic faults are rarely >15 m in length, yet record single-event displacements of tens of centimetres, a slip/length ratio that implies >1 GPa stress drops. These pseudotachylytes represent direct identification of earthquake nucleation as a transient consequence of ongoing, localised aseismic creep. Introduction Earthquake mechanics are mostly studied in the context of the brittle upper crust, where earthquakes predominately occur 1 . However, earthquakes also nucleate in the continental lower crust in mechanically strong lithologies 2 , 3 , 4 . Deep continental earthquakes tend to nucleate along intraplate faults, or faults cutting thick, cold cratons 5 , 6 , 7 , 8 . Earthquakes in continental interiors have resulted in significantly higher casualties than earthquakes at plate boundaries 9 . Thus, a thorough understanding of the earthquake cycle in intracontinental settings is essential, and requires knowledge of the mechanical behaviour and seismogenic potential of the lower crust. The occurrence of pseudotachylytes (solidified frictional melts) formed at lower crustal conditions has been taken as geological evidence of both high mechanical strength and the occurrence of seismic rupture below the typical seismogenic zone 10 , 11 , 12 , 13 , 14 . Seismological observations of lower crustal earthquakes, e.g. in East Africa 2 and in the India-Tibet collision zone 3 , compare favourably with geological studies suggesting a dry, metastable, strong and seismic lower crust 15 , 16 . Earthquakes in dry (e.g. <0.1 wt. % H 2 O 17 ) lower crustal rocks at depths ≥25–30 km require either transiently high differential stresses or local weakening mechanisms (e.g. high pore fluid pressure 5 ). One explanation for transient high stresses is the downward propagation of an earthquake rupture from the shallower seismogenic zone 4 , 18 . These stress spikes account for transient post-mainshock deformation including rapid postseismic strain rates 19 and clusters of aftershocks beneath the mainshock rupture area 4 , 20 , 21 . Whilst large continental earthquakes that nucleate in the upper crust and propagate downwards to depths of 20–25 km are not uncommon 7 , 22 , 23 , 24 , the mechanisms of earthquakes that nucleate within the lower crust are still intensely debated. Proposed mechanisms include thermal-runaway plastic instabilities 25 , dehydration reactions leading to increased fluid pressure 26 and/or local stress redistributions 27 , or eclogitisation reactions 28 . These examples, however, require syn-deformational reactions that may not be occurring in all locations hosting local, lower crustal seismicity. 
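For orientation, the abstract's ">1 GPa stress drop" figure can be reproduced with standard circular-crack scaling, as in the hedged sketch below. The shear modulus, fault length and slip are illustrative values consistent with the ranges quoted in this paper, not measurements of a specific fault.

```python
import math

MU = 30e9      # Pa, assumed shear modulus for lower crustal rock
LENGTH = 15.0  # m, fault length, treated here as the rupture diameter
SLIP = 0.30    # m, single-event displacement ("tens of centimetres")

radius = LENGTH / 2
# Static stress drop for a circular rupture (Eshelby-type relation)
stress_drop = (7 * math.pi / 16) * MU * SLIP / radius  # ~1.6e9 Pa
# Seismic moment and moment magnitude (Hanks-Kanamori, M0 in N m)
m0 = MU * math.pi * radius**2 * SLIP
mw = (math.log10(m0) - 9.1) / 1.5
print(f"stress drop ~ {stress_drop / 1e9:.1f} GPa, Mw ~ {mw:.1f}")
```

Under these assumptions the stress drop is roughly 1.6 GPa and the moment magnitude about 2, consistent with the small fault lengths and large single-event displacements described below.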
Here we suggest a mechanism whereby earthquakes nucleate within dry and strong lower crustal rocks without the need for syn-deformational reactions or seismic loading from shallower crustal levels, but rather as a direct consequence of loading of low strain domains during deformation along a network of intersecting ductile shear zones. This is an important advance in our understanding, because this mechanism can explain lower crustal seismicity in regions without shallow seismicity or evidence for fluids, such as deep earthquakes observed in the northern Central Alpine foreland 29 . We characterise an exhumed network of highly localised shear zones recording viscous deformation at lower crustal conditions. We describe the geometry of pseudotachylyte veins that cut between shear zones of varying orientations, outline the evidence that these markers of seismicity were coeval with viscous creep of the shear zones at lower crustal conditions, and use measurements of fault length and displacement to calculate the moment magnitudes and static stress drops of these seismic events. We interpret this mechanism of seismicity to be a mechanical response to strain incompatibility across the shear zone network during localised viscous shear, with strong blocks undergoing seismic failure at points of local stress amplification. This is the first evidence for in-situ, high stress drop earthquake nucleation in the lower crust driven predominantly by the geometry of a shear zone network, as a consequence of differential creep rates and high viscosity contrasts. Results The Nusfjord shear zone network The Lofoten-Vesterålen islands of Norway expose a 1.9–1.7-Ga Anorthosite-Mangerite-Charnockite-Granite suite that extensively preserves anhydrous granulite facies assemblages 30 . In the SE of Flakstadøy (Supplementary Fig. 1) the Nusfjord coarse-grained anorthosite is cut by mylonitic shear zones that were active at 650–750 °C and 0.7–0.8 GPa 31 . The mylonitic shear zones, concentrated within an E-W striking high strain zone of ~1 km apparent thickness, occur in three main sets (Sets 1–3) that match the orientations of regional tectonic lineaments (Fig. 1). This high strain zone comprises mylonitic shear zones ranging from numerous narrow (typically 5–20 cm thick) structures to less frequent wider structures consisting of multiple shear zone strands (Figs. 1 and 2). Most Set 1 shear zones (average foliation dip/dip direction of 54°/156° and average lineation plunge/azimuth of 48°/184°) include mylonitised pseudotachylytes (type-1 pseudotachylytes) and have accumulated appreciable oblique normal displacement (Fig. 2). Set 2 shear zones strike NW-SE (average foliation 82°/050°, and lineation 18°/328°), contain fewer type-1 pseudotachylytes and show sinistral strike-slip movement. Set 3 shear zones are generally minor structures with strike varying from N-S to NE-SW (average foliation orientation of 88°/278°, Fig. 1). Fig. 1: Structure of the Nusfjord region. a Map of Nusfjord anorthosite showing locations of shear zones and large dykes. Red shading indicates the extent of the high strain zone containing frequent cross-cutting shear zones. Basemap © OpenStreetMap contributors (openstreetmap.org.uk/copyright). b Block diagram showing three-dimensional structure of interlocking shear zones in various orientations. c Cross section taken from line X–X′ shown on map in 1a. The thickness of shear zones in the map is not representative of the actual thickness in the field. Fig.
2: Shear zones and sheared pseudotachylytes in the Nusfjord anorthosite. a Pseudotachylyte localised along the margin of a relatively fine-grained anorthosite dyke. A viscous overprint in the pseudotachylyte can be seen in the apparently sheared geometry of the injection veins [68.0563°N 13.3648°E]; b pseudotachylyte fault breccia with viscous overprint shown by alignment of deformed, elongate clasts parallel to the shear zone foliation [68.0572°N 13.3687°E]; c well-foliated mylonite consisting of a significant proportion of pseudotachylyte, interpreted to be a high strain equivalent of the breccia in b. Outside the sheared pseudotachylyte, the anorthosite is undeformed [68.0572°N 13.3687°E]; d view looking west of an internal anorthosite block (with orange overlay) between two bounding shear zones emphasised in white and detailed in Fig. 3a [68.0557°N 13.3746°E]. Full size image The shear zones exploited precursor dykes and type-1 pseudotachylyte-bearing faults (Fig. 2a ). Strain localisation into the shear zones rather than the surrounding anorthosite was promoted by several mechanisms, including: the reduced grain size (10–30 µm in the pseudotachylyte compared to ~10 cm in the surrounding anorthosite), the increased phase mixing in the pseudotachylytes, and the increased water content in the pseudotachylytes (0.4 wt.% vs 0.05 wt.% in the anorthosite 31 ). All these mechanisms are inferred to promote grain size sensitive creep at lower stresses than those required to deform the anorthosite 31 . In addition, there is a mechanical anisotropy introduced by the tabular geometry of these precursors. The coarse-grained anorthosite between the shear zones is microfractured (Supplementary Fig. 2 ) but does not show evidence of dislocation creep in the plagioclase or pyroxenes. Deformation microstructures indicative of local (and limited) dislocation glide occasionally occur in plagioclase and pyroxenes in the form of undulatory extinction and lattice distortion (Supplementary Fig. 2 ). In general, however, the anorthosite blocks show no evidence of internal high strain deformation. The high strain zone therefore consists of a network of variously oriented, intersecting, narrow shear zones separating blocks of barely deformed anorthosite (Fig. 1b, c ). Coeval viscous creep on all shear zone orientations is indicated by mutually offsetting shear zones and convergent stretching lineations at intersection zones. Pseudotachylytes and shear zones Whilst type-1 pseudotachylytes are commonly mylonitised, type-2 pseudotachylytes are dominantly pristine veins, undeformed and unaltered from their origin as crystallised melts, and are inferred to be coseismic. Type-2 pseudotachylytes occur along small-displacement faults that dissect anorthosite blocks bounded by either subparallel or intersecting shear zones (Fig. 3 ). These shear zone-confined blocks are observed at length scales ranging from 1 m to 15 m (Fig. 3 , Supplementary Fig. 3 ) and typically occur between Set 1 shear zones (Fig. 3a ) or Set 1–Set 2 shear zone intersections (Fig. 3b ). The confined type-2 pseudotachylyte-bearing faults neither offset, nor are they offset by, the bounding shear zones. Fig. 3: Type-2 pseudotachylyte faults within shear zone-bounded blocks. a map of pseudotachylyte fault network developed between two SE-dipping Set 1 shear zones [68.0557°N 13.3744°E] and stereonet of fault and shear zone orientations for the region (separated versions of the photo-map and sketch maps for parts a and b are available in Supplementary Fig.
4 ); b map of pseudotachylyte fault network developed between a SE-dipping Set 1 shear zone and a SW-dipping Set 2 shear zone [68.0552°N 13.3678°E] and stereonet of shear zone and pseudotachylyte orientations; c southern boundary shear zone of the system in 3a, showing pseudotachylyte faults dragged into the shear zone foliation; d micrograph of a type-2 pseudotachylyte vein showing the transition from radiating microlites to the fine-grained and viscously sheared margin of the vein (cross-polarised light); e detail of the intersection between the two shear zones, including scattered pseudotachylytes which cross from the undeformed internal region into the shear zones and are partially transposed along the foliation; f sinistral stepover jog developed in a type-2 pseudotachylyte. Full size image Figure 3a shows a type-2 pseudotachylyte-bearing fault network between two Set 1 shear zones, spaced ~10 m apart. The bounding Set 1 shear zones dip to the SE and show transtensional kinematics. Type-2 pseudotachylytes are locally dragged into the southern Set 1 mylonite (Fig. 3c ), but outside the centimetre-thick dragging zone the original pull-apart geometry and pristine microstructures of type-2 pseudotachylytes are well preserved (Fig. 3c, d ). The mutually intersecting type-2 pseudotachylyte-bearing faults show a combination of dextral and sinistral separations on the outcrop surface. The dextral faults typically dip moderately NE to SE and show oblique normal-dextral kinematics; sinistral faults dip steeply towards north or south and are dominantly strike-slip (Fig. 3a ). A structural geometry similar to that shown in Fig. 3a continues along strike to the NW and SE of the mapped area for about 100 and 200 m, respectively (e.g. Supplementary Fig. 3a ), though with some variations related to local segmentation and branching of the bounding shear zones. In Fig. 3b , type-2 pseudotachylytes are confined between an intersecting Set 1 mylonitised type-1 pseudotachylyte (with transtensional kinematics) and a Set 2 sheared pegmatite (exhibiting left-lateral strike-slip). Type-2 pseudotachylytes are concentrated close to the shear zones' intersection. Within 1 m of the intersection, pseudotachylytes both cut across and are transposed along the shear zone foliation (Fig. 3e ), whereas, at a greater distance, they extend from the Set 1 towards the Set 2 shear zone (Fig. 3b ). These latter type-2 pseudotachylytes are associated with small faults, typically with a sinistral component of offset and variable orientation (dipping moderately SE and steeply NW and E; Fig. 3b ). In the examples of Fig. 3a, b , type-2 pseudotachylyte veins mostly preserve a pristine macroscopic geometry (e.g. en-echelon arrangement, pull-apart jogs, chilled margins and equant clasts, Fig. 3f ) and, especially in the vein core, a microscopic microlitic/spherulitic texture (Fig. 3d ). However, in one case, microstructural analysis reveals that the pseudotachylyte vein margins localised solid-state shearing over a width of 1 mm (Fig. 3d ). This very discrete shearing is not easily observed in the field and does not account for any significant displacement. Other sampled type-2 pseudotachylytes do not show any viscous overprint (Supplementary Fig. 2a ). Evidence for earthquake nucleation within the lower crust The confinement of these faults within relatively intact, shear zone-bounded blocks, together with the dragging of pseudotachylytes into the shear zones, implies that the seismic ruptures were coeval with viscous shear.
Mylonitisation along the bounding shear zones occurred at lower crustal conditions of 650–750 °C and 0.7–0.8 GPa, based on amphibole-plagioclase geothermobarometry and thermodynamic modelling of the mylonitised pseudotachylyte assemblages 31 . These conditions can thus be assumed also for the generation of the type-2 pseudotachylytes, supported by the stability of the granulite facies mineral assemblage (plagioclase + clinopyroxene + hornblende + orthopyroxene + garnet + biotite ± quartz ± K-feldspar) found in the host rock damage zone, within the sheared margins of the pseudotachylytes, and crystallised within the veins themselves. Therefore, the type-2 pseudotachylyte-bearing faults represent earthquakes nucleated under lower crustal conditions. The concentration of type-2 pseudotachylytes near shear zone intersections reveals that earthquake slip was controlled by the interaction and geometry of the shear zones. Concurrent slip on the shear zones, delimiting polyhedral blocks of pristine anorthosite, forced the low strain blocks to deform internally 32 . Earthquake source parameters Type-2 pseudotachylytes were formed by earthquake ruptures <~15 m in apparent length. Pseudotachylyte pull-aparts (Fig. 3f ) record single-event displacements in the range of 1 to 26 cm, with the highest displacements observed along the longer faults (Fig. 4a ). The displacement/length ratio of these single-slip faults ranges between 10⁻³ and 10⁻¹, well exceeding the typical ratios of 10⁻⁷–10⁻⁴ seen in kilometre-scale earthquake ruptures (Fig. 4a ) 22, 33 . This may indicate that, due to interaction with the bounding shear zones, ruptures terminated prematurely. Assuming a circular rupture area, the type-2 pseudotachylytes record moment magnitudes ( M w ) ranging from 0.2 to 1.8, and their slip/length ratios imply static stress drops between 0.1 and 4.2 GPa (Fig. 4b ). This assumption of a circular rupture may underestimate the true rupture area. Although we have no observation of the vertical dimension of the pseudotachylyte-bearing faults, it is unlikely that their aspect ratio is >10, because the maximum vertical extent of anorthosite blocks with widths of ~10 m is neither observed nor projected to extend much over 100 m (Fig. 1c ). Therefore, an elliptical fault with a long axis ten times the measured fault length provides an upper bound for rupture area and moment magnitude ( M w 0.8–2.6), and a lower bound for stress drop (0.06–2.5 GPa) (Fig. 4b , Supplementary Fig. 5 ). Fig. 4: Seismic source parameters of type-2 pseudotachylytes. a length vs displacement graph; inset compares Nusfjord type-2 pseudotachylytes with published data on rupture length versus displacement 22, 33 ; b seismic moment vs static stress drop with equivalent moment magnitudes ( M w ) superimposed. The Nusfjord type-2 pseudotachylytes are shown for both the circular fault and elliptical fault cases (further elliptical aspect ratios are shown in Supplementary Fig. 5 ). Published seismologically determined data are included from small earthquakes at shallow depth on the San Andreas Fault 34 , intraplate aftershocks from 10 to 36 km depth occurring after the 2001 Bhuj (India) earthquake 37 , small earthquakes from the New Madrid seismic zone 36 , and small earthquakes recorded from the Parkfield region of California 35 . Estimates from pseudotachylytes include an exhumed seismogenic fault zone in the Sierra Nevada that records seismogenic faulting at depths of 7–10 km 40 and a pseudotachylyte in lherzolites representing seismogenic faulting at >40 km 39 .
Full size image These minimum stress drops calculated from type-2 pseudotachylytes are still high when compared to seismological records (Fig. 4b ) of upper crustal 34, 35 and intracontinental lower crustal 36, 37 seismicity. These stress drops are also generally higher than other values calculated from pseudotachylyte-bearing faults 38 , including those recording continental lower crustal and mantle seismicity 11, 39 , although they are of the same order as those derived from pseudotachylytes at depths of 7–10 km 40 and in lawsonite-eclogite facies peridotites 41 . We note, however, that the calculated stress drops may be elevated because the rupture area is limited to the size of the block, analogous to the large stress drops seen in some laboratory shear experiments 42 . The large stress drops imply that the failure shear strength of the intact anorthosite must be >1 GPa, consistent with both the high strength of anorthite reported from experimental studies 43 and the stresses required for failure of intact anorthosite and subsequent frictional sliding at high lower crustal confining pressures. High viscous strength in the anorthosite blocks would enable seismic failure through elastic energy accumulation, because the dry, coarse-grained plagioclase could not flow viscously, even at geological strain rates, without a reduction in grain size, a change in mineralogy, or fluid influx 43, 44 . Discussion Our results imply that lower crustal earthquake nucleation may result from localised viscous creep along shear zone networks within dry granulitic lower crust. In the Nusfjord anorthosite, local high differential stresses are inferred to have arisen from the interaction of localised viscous shear zones at lower crustal levels, where a high viscosity block experienced stress amplification imposed by flow of the surrounding, weaker material (cf. refs 45, 46, 47, 48 ). We argue that strain incompatibility across the deforming system was accommodated by transient seismic failure along new faults nucleating at sites of stress amplification within the strong anorthosite blocks. Over long timescales, the effect of episodic seismic activity was to approximate strain compatibility across the shear zones, at least enough to facilitate ongoing viscous deformation. A similar model was hypothesised to explain the cyclic generation of pseudotachylytes in the lower crustal rocks of the Musgrave Ranges 14 . Here we provide the first evidence for such in-situ seismic lower crustal faulting based on detailed field maps of the Nusfjord ridge. Earthquake ruptures may be encouraged both by the interaction of differently oriented shear zones and by their differential creep rates. In this interpretation, seismic faulting took place as punctuated failure episodes confined to individual internal blocks (Fig. 1a, c ). Within each block, stresses increased as viscous creep on the bounding shear zones progressed alongside a continued absence of deformation in the internal block 10, 45 (Fig. 5a ). The magnitude of stress amplification would increase with increasing volumetric block to shear zone ratio, and with an increasing viscosity contrast between the blocks and the bounding shear zones 45, 46 . Spatial heterogeneity of the stress amplification was likely controlled by the geometry of the bounding shear zones and the internal block 32 .
Progressively, continuing deformation along the bounding shear zones would have increased the geometrical strain incompatibility and, in the absence of viscous deformation in the anorthosite, increasing elastic strains within the blocks would have locally raised shear stresses towards the failure strength of the anorthosite. Seismic rupture then released the amplified stress via the coseismic stress drop. In this way, cycles of elastic stress accumulation and release occurred locally within each block in response to displacements on the bounding shear zones (Fig. 5a ), but cumulative seismic failure across several blocks could (over some unknown time-scale) facilitate ongoing creep of the entire kilometre-wide high strain zone (Fig. 5b ). Fig. 5: Model of pseudotachylytes in the context of the shear zone network. a development through time of a type-2 pseudotachylyte network within an internal block. The red star represents a hypothetical rupture nucleation site; b vertical 2D schematic section of the Nusfjord shear zones showing how seismic failure of individual blocks accommodates compatibility across the shear zone network. It is possible that these shear zones could represent the roots of a crustal-scale fault system, but the geological record of this shallower crustal level is no longer preserved. Also shown are strength profiles for wet diffusion creep of anorthite-diopside aggregates 44 , representing localised deformation within the shear zones 31 , and for dry dislocation creep of anorthite-diopside aggregates 44 , representing the hypothetical onset of viscous deformation in the anorthosite blocks. Full size image The absolute magnitudes of the proposed stress amplifications are difficult to estimate from comparisons with existing work, owing to differences in model geometry, rheology, deformation mechanisms, strains, and strain rates, and would benefit from further constraints from microstructural and numerical modelling studies appropriate to the Nusfjord context. However, models of strong inclusions within a viscously deforming matrix indicate that stresses in the inclusions can be increased by an order of magnitude 45, 46 given a strength ratio >100 between the inclusion and the matrix, especially if there are additional effects resulting from interactions between strong inclusions 45 . Even an isolated inclusion in an otherwise homogeneous viscous matrix will invoke significant differences in stress within and around the inclusion relative to the surrounding material 47, 48 . We therefore consider it feasible that the >1 GPa failure stresses required by the high seismic stress drops could be transiently reached in the internal blocks, given that the viscosity contrast between the shear zones and the dry anorthosite is much larger than that in the cited models 49 . The mechanism of rupture nucleation within relatively strong rocks in the lower crust is a new alternative to models of thermal runaway or mineral reactions that may also initiate frictional slip within otherwise viscous regimes 12, 25, 26 , and to models of downward rupture propagation or stress pulses from shallower crustal levels 4, 18 . This new mechanism is hence a simpler explanation for lower crustal seismicity in continental regions that, for example, lack a major overlying fault zone, are separated from overlying seismic activity by a significant depth interval, or are thought to be anhydrous and lack evidence for eclogitisation.
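To make the scale of this argument concrete, the following back-of-the-envelope sketch (not a calculation from the paper; the flow stresses and amplification factors are illustrative assumptions consistent with the order-of-magnitude estimates cited above) shows how modest shear-zone flow stresses combined with an order-of-magnitude amplification can approach the >1 GPa failure strength of the anorthosite blocks:

```python
# Toy illustration of the stress-amplification argument (all values assumed):
# if the bounding shear zones creep at a background differential stress
# tau_sz, and inclusion models (refs 45, 46) suggest local amplification by
# a factor f ~ 10 for strength ratios >100, an internal block fails once
# f * tau_sz exceeds its >1 GPa failure strength.
FAILURE_STRENGTH_GPA = 1.0  # lower bound inferred from the stress drops

for tau_sz_mpa in (50, 100, 200):   # illustrative shear-zone flow stresses, MPa
    for f in (5, 10, 20):           # illustrative amplification factors
        tau_block_gpa = f * tau_sz_mpa / 1000.0
        verdict = ("seismic failure possible"
                   if tau_block_gpa >= FAILURE_STRENGTH_GPA else "block holds")
        print(f"tau_sz = {tau_sz_mpa:3d} MPa, amplification x{f:2d} -> "
              f"block stress = {tau_block_gpa:.2f} GPa: {verdict}")
```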
In the new model, the proposed stress amplification requires only the presence of a network of localised viscous shear zones 50 within a strong, dry, block-forming material that prohibits weakening and viscous creep in the internal blocks. Such conditions are found in many intraplate lower crustal granulite terranes 14, 15, 50, 51 , and this model can thus account for observations of low-magnitude present-day deep continental seismicity. This is most obviously applicable to continental settings where upper crustal seismogenic faults are not present, for example the lower crustal seismicity of the northern Central Alpine foreland basin 29 , but also along crustal-scale fault structures where deeper seismicity can be shown to be spatially and/or temporally isolated from any shallower ruptures (e.g. Baikal Rift 52 , East African Rift 53 ). The new model remains compatible with the co-existence of coseismic loading of lower crustal shear zones from earthquakes nucleating in the overlying crust, as such loading can induce transient increases in both the differential stress and the postseismic strain rates across the lower crustal shear zones 49, 54 , resulting in an increased driving force for the internal blocks to deform. In this context, the seismicity observed in the internal blocks might represent deep aftershocks to a shallower mainshock (e.g. comparable to the deeper aftershocks of the Bhuj 2001 earthquake 6 ). However, the mechanism presented here does not require upper crustal earthquakes to generate high stresses within the lower crust, only synchronous viscous deformation across a network of shear zones separating relatively high viscosity domains. Similarly, the new model does not need the ongoing or episodically triggered mineral reactions required by other in-situ models of lower crustal earthquake nucleation. Earthquakes with high stress drops may nucleate in the dry, plagioclase-rich continental lower crust in response to locally derived stress heterogeneities. The high stresses required for failure of the strong anorthosite blocks within the shear zone network are related to coeval viscous creep across a network of highly localised shear zones mimicking an array of pre-existing tabular anisotropies, and do not need to be generated by shallower seismicity or by syn-deformational reactions. Seismic fracturing allows deformation to be kinematically sustained between adjacent shear zones and across the shear zone network as a whole. Methods Displacement and length of faults Several dykelets within the internal blocks act as markers for fault offset and allow the orientation of the slip vector to be calculated where two or more such markers are cut by the same fault, using the separation and offset of those markers. The displacements along pseudotachylyte faults were measured using dilational pull-aparts (Griffith et al. 40 ) in order to discount any additional component of viscous displacement from the offset of the markers (Fig. 3d ). The measured displacements are considered the result of a single slip event, owing to the lack of macroscopic reworking of the pseudotachylyte seen either in the field or in thin section, and to the lack of fragmented or cataclastic margins that might indicate pre-existing fault zones before melting occurred. Seismic parameters Fault length, area and displacement are input into the calculations of the seismic source parameters of the earthquakes that generated these pseudotachylytes. The displacement ( S ) is measured from pull-apart openings.
The fault area ( A ) is derived from the fault length measured in the field, initially assuming a circular fault shape where the fault length forms the diameter. The case of an elliptical fault, where the vertical fault width extends up to ten times the measured (horizontal) fault length, is also considered. The moment magnitude ( M W ) for each fault is calculated using the seismic moment ( M 0 ), $$M_0 = \mu AS$$ (1) where µ is the shear modulus (38 GPa for anorthosite 55 ). The moment magnitude is calculated from the standard moment-magnitude scale as $$M_{\mathrm{W}} = \frac{\log_{10} M_0}{1.5} - 6.07$$ (2) The static stress drop (Δ σ ) is calculated as $$\Delta \sigma = \frac{\mu}{C}\frac{S}{r}$$ (3) where r is the fault radius (for a circular fault) or the semi-minor axis (for an elliptical fault) and C is the geometrical coefficient calculated for transverse faults 56 . We calculate stress drops for a circular fault and for an elliptical fault in which the vertical extent of the fault is greater than the horizontal fault length measured in the field, with horizontal strike-slip fault movement. In this case, where a is the semi-minor axis of the ellipse and is parallel to slip, and b is the semi-major axis of the ellipse, $$C = \frac{4}{3E(k) + \frac{a^2}{b^2}\frac{K(k) - E(k)}{k^2}}$$ (4) where K(k) and E(k) are complete elliptic integrals of the first and second kind, respectively, and k is defined as 40, 56 $$k = \sqrt{1 - a^2/b^2}$$ (5) when the slip direction is parallel to the semi-minor axis a . Data availability We declare that all the data used to support the conclusions of this study are accessible within the paper and its Supplementary files. The source data for Fig. 4b are displayed in Fig. 4a.
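For readers who wish to reproduce these calculations, below is a minimal Python sketch of Eqs (1)–(5). The shear modulus is the 38 GPa value quoted above, but the example fault length and slip are illustrative values within the measured ranges, not data from the paper. Two implementation notes: SciPy's complete elliptic integrals take the parameter m = k², so Eq. (5) enters as m = 1 − a²/b², and Eq. (2) is implemented in its standard Hanks-Kanamori form, Mw = (2/3) log10 M0 − 6.07.

```python
"""Seismic source parameters from pseudotachylyte fault measurements (Eqs 1-5)."""
import numpy as np
from scipy.special import ellipe, ellipk  # complete elliptic integrals E(m), K(m), m = k**2

MU = 38e9  # shear modulus of anorthosite, Pa (ref. 55)

def geometric_coefficient(a, b):
    """Eq. (4): C for an elliptical fault with slip parallel to the semi-minor axis a."""
    if np.isclose(a, b):
        return 16.0 / (7.0 * np.pi)  # circular limit of Eq. (4)
    m = 1.0 - (a / b) ** 2           # m = k**2, with k from Eq. (5)
    K, E = ellipk(m), ellipe(m)
    return 4.0 / (3.0 * E + (a / b) ** 2 * (K - E) / m)

def source_parameters(length, slip, aspect=1.0):
    """Seismic moment, moment magnitude and static stress drop for one fault.

    length : measured (horizontal) fault length, m; slip : single-event slip, m;
    aspect : ratio of the vertical semi-axis b to the horizontal semi-axis a.
    """
    a = length / 2.0                            # horizontal semi-axis
    b = aspect * a                              # vertical semi-axis (up to 10a in the paper)
    m0 = MU * (np.pi * a * b) * slip            # Eq. (1) with elliptical area
    mw = (2.0 / 3.0) * np.log10(m0) - 6.07      # Eq. (2), Hanks-Kanamori form
    dsigma = MU / geometric_coefficient(a, b) * slip / a  # Eq. (3), r = a
    return m0, mw, dsigma

# Example: a 10 m fault with 20 cm of single-event slip (illustrative values)
for aspect in (1.0, 10.0):
    m0, mw, ds = source_parameters(10.0, 0.20, aspect=aspect)
    print(f"aspect {aspect:4.0f}: M0 = {m0:.2e} N m, Mw = {mw:.2f}, "
          f"stress drop = {ds / 1e9:.2f} GPa")
```

Consistent with the ranges reported above, the circular case gives Mw ≈ 1.8 and a stress drop of ~2 GPa for the largest measured faults, while elongating the rupture raises the magnitude bound and lowers the stress-drop bound.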
A major international study has shed new light on the mechanisms through which earthquakes are triggered up to 40km beneath the earth's surface. While such earthquakes are unusual, because rocks at those depths are expected to creep slowly and aseismically, they account for around 30 percent of intracontinental seismic activity. Recent examples include a significant proportion of seismicity in the Himalaya as well as aftershocks associated with the 2001 Bhuj earthquake in India. However, very little is presently known about what causes them, in large part because their effects are normally hidden deep underground. The current study, published in Nature Communications and funded by the Natural Environment Research Council, sought to understand how such deep earthquakes may be generated. The researchers showed that earthquake ruptures may be encouraged by the interaction of different shear zones that are creeping slowly and aseismically. This interaction loads the adjacent blocks of stiff rock in the deep crust until they can no longer sustain the rising stress and snap, generating earthquakes. Drawing on observations of the complex networks created by earthquake-generated faults, they suggest that this setting is characterised by repeating cycles of deformation, with long-term slow creep on the shear zones punctuated by episodic earthquakes. Although only a transient component of such deformation cycles, the earthquakes release a significant proportion of the accumulated stress across the region. The research was led by the University of Plymouth (UK) and the University of Oslo (Norway), with scientists conducting geological observations of seismic structures in exhumed lower crustal rocks on the Lofoten Islands. Project scientists in the Lofoten Islands shown in a still from the film Pseudotachylyte. Credit: Heidi Morstang, University of Plymouth The region is home to one of the few well-exposed large sections of exhumed continental lower crust in the world, exposed during the opening of the North Atlantic Ocean. Scientists spent several months in the region, conducting a detailed analysis of the exposed rock and in particular of pristine pseudotachylytes (solidified melt produced during seismic slip, regarded as 'fossil earthquakes') which decorate fault sets linking adjacent or intersecting shear zones. They also collected samples from the region, which were then analysed using cutting-edge technology in the University's Plymouth Electron Microscopy Centre. Lead author Dr. Lucy Campbell, Post-Doctoral Research Fellow at the University of Plymouth, said: "The Lofoten Islands provide an almost unique location in which to examine the impact of earthquakes in the lower crust. But by looking at sections of exposed rock less than 15 metres wide, we were able to see examples of slow-forming rock deformation working to trigger earthquakes generated up to 30km beneath the surface. The model we have now developed provides a novel explanation of the causes and effects of such earthquakes that could be applied at many locations where they occur." Project lead Dr. Luca Menegon, Associate Professor at the University of Plymouth and the University of Oslo, added: "Deep earthquakes can be as destructive as those nucleating closer to the Earth's surface. They often occur in highly populated areas in the interior of the continents, like in Central Asia for example. But while a lot is known about what causes seismic activity in the upper crust, we know far less about those which occur lower.
This study gives us a fascinating insight into what is happening deep below the Earth's surface, and our challenge is now to take this research forward and see if we can use it to make at-risk communities more aware of the dangers posed by such activity." A trailer for the film Pseudotachylyte. Credit: Heidi Morstang, University of Plymouth As part of the study, scientists also worked with University of Plymouth filmmaker Heidi Morstang to produce a 60-minute documentary film about their work. Pseudotachylyte premiered at the 2019 Bergen International Film Festival, and will be distributed internationally once it has screened at various other festivals globally. The study, "Earthquake nucleation in the lower crust by local stress amplification" by Campbell et al, is published in Nature Communications.
10.1038/s41467-020-15150-x
Nano
Honey: a cost-effective, non-toxic substitute for graphene manipulation
Richard C. Ordonez et al. Rapid Fabrication of Graphene Field-Effect Transistors with Liquid-metal Interconnects and Electrolytic Gate Dielectric Made of Honey, Scientific Reports (2017). DOI: 10.1038/s41598-017-10043-4 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-10043-4
https://phys.org/news/2017-09-honey-cost-effective-non-toxic-substitute-graphene.html
Abstract Historically, graphene-based transistor fabrication has been time-consuming due to the high demand for carefully controlled Raman spectroscopy, physical vapor deposition, and lift-off processes. For the first time in a three-terminal graphene field-effect transistor embodiment, we introduce a rapid fabrication technique that implements non-toxic eutectic liquid-metal Galinstan interconnects and an electrolytic gate dielectric comprised of honey. The goal is to minimize cost and turnaround time between fabrication runs, thereby allowing researchers to focus on the characterization of graphene phenomena that drives innovation rather than a lengthy device fabrication process that hinders it. We demonstrate characteristic Dirac peaks for a single-gate graphene field-effect transistor embodiment that exhibits hole and electron mobilities of 213 ± 15 and 166 ± 5 cm²/V·s, respectively. We discuss how our methods can be used for the rapid determination of graphene quality and can complement Raman spectroscopy techniques. Lastly, we explore a PN junction embodiment which further validates that our fabrication techniques can rapidly adapt to alternative device architectures and greatly broaden the research applicability. Introduction The lowering cost of graphene synthesis has created opportunities that include lightweight electronics such as wearable, flexible, and electromagnetic sensors 1 . However, to explore such scientific phenomena one must be trained and certified in sophisticated optical characterization and microfabrication techniques in order to design and fabricate graphene devices such as graphene field-effect transistors (GFETs). Furthermore, the required fabrication steps must be conducted in a controlled environment to increase device yield and minimize exposure to harmful chemicals. This process can be quite time-consuming and demanding, depending on the scope of the device architecture and the time allotted for the expenditure of project funds. Such drawbacks reduce the accessibility of graphene research across academic and private institutions. In a typical GFET fabrication process, graphene is synthesized via chemical vapor deposition and transferred onto a target substrate via delamination, and the graphene quality is measured via Raman spectroscopy. Lift-off processes are then used to construct contact electrodes, gate dielectrics, and gate contacts 2, 3, 4, 5, 6, 7 . Unfortunately, numerous articles have reported degradation in graphene transistor performance due to the mechanical and electrical strain put on graphene as a direct result of photolithography and physical vapor deposition 8, 9, 10 . In addition, graphene has been shown to exhibit high contact resistance with standard electrode materials relative to its size, which creates non-ideal performance across the board 11 . This performance limitation will potentially force a researcher back into the cleanroom to identify drawbacks in his or her fabrication process. In this paper, we demonstrate the rapid fabrication of a three-terminal graphene field-effect transistor embodiment with the use of commercially available eutectic liquid-metal Galinstan interconnects and an electrolytic gate dielectric comprised of honey ( LM-GFET ). We demonstrate GFET performance with the proposed inexpensive, non-traditional materials comparable to that of much more common ionic gel materials 12, 13 .
Galinstan is a non-toxic liquid-metal alloy consisting of 68.5% gallium, 21.5% indium, and 10% tin 14 that possesses desirable conformal properties. In a previous study, Galinstan device interconnects exhibited less than a 5.5% change in resistance for a graphene two-terminal device when subjected to repeated deformations as small as a 4.5 mm radius of curvature 15 . Moreover, graphene transistors can greatly benefit from the conductivity of Galinstan (2.30 × 10⁶ S/m), a desirable vapor pressure (<1 × 10⁻⁶ Pa at 500 °C) compared with mercury (0.1713 Pa at 20 °C), and a stable liquid state across a broad temperature range (−19 °C to 1300 °C) 16 . Honey is typically produced via sugary secretions from bees, harvested, and packaged under various brand names for commercial food consumption. Honey contains various concentrations of water, vitamins, minerals, amino acids, and sugars (fructose, glucose, and sucrose) that can be controlled via bee production and honey extraction techniques 17 . To our benefit, honey forms an ionic gel-like solution analogous to ion gels. Ion gels consist of room-temperature ionic liquids and gelating triblock copolymers 12, 13 . Recently, ion gels have demonstrated ideal performance as electrolytic gate dielectrics for flexible GFET devices due to an ability to produce the extremely high capacitance and high dielectric constants required for high on-current and low-voltage operation 12 . The introduction of honey as an electrolytic gate dielectric is advantageous for the rapid fabrication of GFET devices due to its commercial availability, non-toxicity, controllable ionic content that can be used to alter its dielectric properties, and quick mixing that reduces preparation time. Currently, ion gels require special preparation in an atmospherically controlled environment to mitigate outgassing and combustion, which only adds to their complexity. Results and Discussion Transistor Architecture and Operation The transistor architecture of the LM-GFET device is shown in Fig. 1a and an image of the LM-GFET is shown in Fig. 1c . The device is comprised of liquid-metal Galinstan source and drain electrodes that are overlaid on monolayer graphene transferred to Polyethylene Terephthalate (PET). A detailed fabrication process for the LM-GFET device is described in the Methods section. Honey was used as a gel-like electrolytic gate dielectric to generate an enhanced electric field-effect response above the graphene. Due to the presence of ions and the high polarizability of honey, a diffusion of charge forms a thin layer between the honey and the graphene. This layer forms an electric double layer and is typical when ionic liquids contact conductive materials 12, 13 . Due to the nanoscale separation distance of the electric double layer, usually 1–10 nm, a large charge gradient is formed on the surface of the graphene. For example, in the case that a gate electrode is positively charged and submerged in honey, anions will accumulate at the gate/honey interface and cations will accumulate at the honey/graphene interface. The resultant electric double layer at the honey/graphene interface will then alter the graphene conductivity, Fig. 1d . The opposite is true for the case in which the gate electrode is negatively charged. Figure 1 ( a ) Illustration of a graphene field-effect transistor with liquid-metal source and drain electrodes ( LM-GFET ) and honey. ( b ) Raman spectroscopy profile of a graphene sample transferred to Polyethylene Terephthalate. Inset: Location of sites A–C.
( c ) Image of the LM-GFET and ( d ) representation of the charge distribution in an electrolytic gate dielectric comprised of honey. Full size image Single-Gate Transistor Characteristics A schematic representation of the electrical measurements is shown in Fig. 2a . Figure 2b–d illustrates the graphene transport characteristics for a single-gated LM-GFET device. The V-shaped curve of the relationship between top-gate voltage \({V}_{TG}^{\ast }\) and drain-to-source current I ds in Fig. 2c highlights the ambipolar operation that is characteristic of any graphene field-effect transistor, and provides designers the flexibility to bias the device in either hole or electron conduction mode. A well-documented model extraction technique was utilized to extract graphene parameters from the device's transfer curve 18 (a minimal sketch of this type of fit is given after the Methods section). The model fit in Fig. 2d determined hole and electron mobilities of 213 ± 15 and 166 ± 5 cm²/V·s, respectively, at a drain bias of 100 mV. Despite the rapid and inexpensive fabrication process, the LM-GFET devices exhibited performance comparable to that of much more elaborately fabricated GFET devices 1, 12, 13 . In addition, Fig. 2c illustrates the device's transconductance, which reaches a considerable value of 38 μA/V with a large degree of symmetry and linearity within the operational range of −0.5 V to 0.5 V. This linear, ambipolar transconductance has significant utility in ambipolar electronic circuits such as radio frequency mixers, digital modulators, and phase detectors 19 . Figure 2b illustrates the I ds − V ds response for the various transconductances associated with different \({V}_{TG}^{\ast }\) values. Fortunately, due to the monotonic transconductance of the LM-GFET devices, the drain-to-source voltage V ds sweep does not demonstrate an inflection point. In particular, the varying \({V}_{TG}^{\ast }\) curves do not intersect one another, which cannot be presumed to occur in all standard GFET devices 20 . Instead, the I ds − V ds trend is encouraging because the drain currents diverge at higher V ds biases. In a GFET sensor application, a designer can first bias the device at a desired V ds and I ds ; an external stimulus can then trigger a change in the device's gate voltage, which will create a significant change in I ds . Note, the dielectric constant of honey was measured to be 21 and the gate capacitance was measured to be 2.3 μF/cm² using an LCR meter, values very comparable to those described in the literature 21, 22 . Figure 2 ( a ) Schematic representation of a LM-GFET device with a single gate and the location of the electric double layer. ( b ) Drain-to-source current as a function of drain-to-source voltage for varied top-gate voltage ( I ds − V ds ). ( c ) Left : Relationship of drain-to-source current as a function of top-gate voltage \(({I}_{ds}-{V}_{TG}^{\ast })\) and Right : Transconductance, G m , as a function of top-gate voltage \(({G}_{m}-{V}_{TG}^{\ast })\) . ( d ) Model fit overlaid on drain-to-source resistance as a function of top-gate voltage \(({R}_{ds}-{V}_{TG}^{\ast })\) . Full size image Comparison of Ion Gel with an Electrolytic Gate Dielectric Made of Honey Transport characteristics were compared with those of an ion gel LM-GFET to validate the use of honey as an electrolytic gate dielectric. The ion gel used was comprised of 1-Ethyl-3 Methylimidizalium Bis(Trifluoromethylsufonyl)imide ([EMIM][TFSI]) ionic liquid in Polystyrene-Poly (Ethylene Oxide) (PS-PEO) diblock copolymer.
A detailed description of the ion gel synthesis is given in the Methods section. Figure 3a illustrates the difference in the drain-to-source current ON/OFF ratio as a function of top-gate voltage ( I ON / I OFF − V TG ). The transport characteristics are presented as a ratio of I ON and I OFF due to the significant difference in drain-to-source current between the sample measurements. The maximum operating current of the device with ion gel was 124 ± 10 μA with an I ON / I OFF = 1.7, where V ds = 1.2 V. The maximum operating current of the device with honey was 620 ± 40 μA with an I ON / I OFF = 3.1. A model fit determined hole and electron mobilities of 61 ± 3 and 189 ± 3 cm²/V·s for the ion gel device, and 100 ± 4 and 126 ± 5 cm²/V·s for the honey LM-GFET . Figure 3 ( a ) Comparison of the drain-to-source current ON / OFF ratio as a function of top-gate voltage ( I ON / I OFF − V TG ) for a LM-GFET with an electrolytic gate dielectric comprised of honey (blue) and ion gel (red), V ds = 1.2 V. ( b ) Comparison of drain-to-source current as a function of top-gate voltage ( I ds − V TG ) for a LM-GFET before (green) and after (cyan) rinsing of the honey and liquid-metal electrodes. Full size image The importance of a high capacitance electrolytic gate dielectric became clear in our comparison of the honey and ion gel ON/OFF ratios. The high capacitance of honey (~2 μF/cm²) enabled a significant electric field-effect, and hence a large drain current. The opposite was seen for the lower capacitance of the ion gel (0.1 μF/cm²). The reduced field-effect of the ion gel device may be limited by a relatively high gate leakage current that is typical of electrolytic gate dielectrics 23 . The gate leakage current of the ion gel was measured to be 20 μA, which was four times greater than that of the honey. Although the gate leakage was high for the ion gel device, the formation of the electric double layer at the graphene/ion gel interface allowed for ambipolar operation. In addition, the Dirac peaks measured for the ion gel devices were not as sharp as the Dirac peaks measured for the honey devices. This phenomenon may be indicative of charge inhomogeneity at the electric double layer surface and can be caused by improper/incomplete mixing of the ionic liquid and copolymer solutions. Due to the liquid properties of honey and liquid-metal, these materials may be rinsed off in boiling deionized water after initial measurements have been completed. The authors include an investigation of the electrical performance of graphene before and after rinsing off the liquid-metal electrodes and the electrolytic gate dielectric made of honey. It was determined that there is a slight change in the electrical performance of the graphene devices after rinsing, Fig. 3b . Before rinsing, the extracted hole and electron branch resistances were 917 ± 6 Ω and 1062 ± 1 Ω. After rinsing, the extracted electron and hole branch resistances increased to 1065 ± 15 Ω and 1170 ± 20 Ω. The slight increase in resistance is assumed to be due to trace amounts of liquid-metal residue that remained after rinsing. The liquid-metal residue gradually oxidizes over time and contributes a parasitic resistance at the contacts. Future efforts can investigate dual-rinse processes that include weak solvents followed by a DI water rinse. Despite only a DI water rinse, there is no significant shift of the charge neutrality point (Dirac peak). Additionally, the mobilities extracted from the model fit show negligible, if any, degradation.
The hole and electron mobilities before rinsing the LM-GFET are 46 ± 6 and 189 ± 1 cm²/V·s, respectively, at a drain bias of 100 mV. After rinsing, the hole and electron mobilities are 88 ± 3 and 165 ± 5 cm²/V·s, respectively, at a drain bias of 100 mV. Notably, the extracted hole mobility increased, while the electron mobility slightly decreased. While this can be due to the nominal variance of each of the extracted values, it is believed that the DI water with subsequent heating removed several charge impurities that would have otherwise contributed to cross-sections for electron/hole scattering 24 . Therefore, the experimental results suggest that the proposed rinse process minimally impacts device performance, and yet allows designers to rapidly and prudently explore new device architectures with the same graphene material. The reuse of graphene is an incentive to reduce carbon waste. Rapid Characterization of Graphene Quality to Aid Raman Spectroscopy Raman spectroscopy is the industry standard for graphene characterization and provides a researcher with the number of graphene layers, as well as the impurities or dopants present within a graphene material 25 . However, the equipment required to perform these measurements is costly, and the measurements can be time-consuming. Due to the optical magnification necessary for spatially dependent graphene Raman measurements, investigations are limited to a few grain boundaries. Moreover, inhomogeneity across graphene due to topographical imperfections and varying concentrations of dopants can change quite drastically in large-scale devices 26 . In GFET devices, charge carriers encounter numerous grain boundaries from source to drain. Transport characteristics such as current-voltage (I–V) measurements average over many grain boundaries and enable a way to analyze the electrical performance of large graphene channels (on the order of several hundreds of microns). Raman spectroscopy of graphene transferred onto polymer substrates is quite challenging for unspecialized laboratories. The reason is that there exist strong polymeric vibrational modes, thousands of times more sensitive to Raman scattering, near the G band of graphene, Fig. 4a . Moreover, the G band is not easily identifiable and one must take great care when analyzing Raman data. To identify the G band, static measurements are required, consisting of several prolonged exposure acquisitions that increase the signal-to-noise ratio 27 , Fig. 4b . Furthermore, subtraction techniques are required to remove the background PET Raman signature so that the graphene ( I 2D / I G ) ratios can be computed to extract the number of graphene layers. The authors' best attempt at Raman spectroscopy characterization of graphene on PET with post-processing to remove the PET signature took approximately 1 hour per sample for a single spot. Conducting Raman measurements of three separate samples (which is the industry standard) takes up to almost 3 hours with post-processing included. Moreover, a Raman spectroscopy map of a graphene channel will take much longer. Figure 4 ( a ) Raman spectroscopy profile of a graphene sample transferred to Polyethylene Terephthalate (PET) (blue) and the PET substrate (red). The locations of the G band and 2D band are labeled with arrows. ( b ) Static measurement of the graphene G band with curve fit. ( c ) Static measurement of the graphene 2D band with curve fit.
Full size image The authors utilized their proposed LM-GFET rapid fabrication methods to compare graphene on PET samples of both high and low quality, as previously determined via 514.5 nm Raman spectroscopy. The intention of this experiment was to validate the use of our methods as a useful tool to complement Raman spectroscopy data for large-scale graphene devices. As previously discussed, Fig. 1b illustrates Raman spectroscopy measurements for the graphene sample used in the single-gate transfer characteristics section and illustrated in Fig. 2 . The extracted ( I 2D / I G ) ratios for Sites A–C are 3.10, 1.92, and 2.13, respectively, thus indicating high quality monolayer graphene. As was determined from I–V measurements, the hole and electron mobilities for the high quality graphene were on the order of 213 ± 15 and 166 ± 5 cm²/V·s, respectively, for a drain bias of 100 mV. On the other hand, the extracted ( I 2D / I G ) ratios for a sample of low quality graphene were determined to be 2.44, 1.52, and 0.70, respectively. Despite being labeled low quality via the Raman spectroscopy measurements, our rapid fabrication methods determined hole and electron mobilities of 128 ± 4 and 101 ± 4 cm²/V·s. Although the computed mobilities are lower than those measured in the high quality samples, they are still quite comparable. It has become common practice for commercial graphene manufacturers to provide graphene quality via Raman spectroscopy data of a few (1–3) spots with purchased graphene samples. Due to the comparable results for the high and low quality graphene samples, there is reason to believe that our methods can complement Raman spectroscopy measurements of large samples. One may consider graphene to be of low quality immediately after undesirable Raman spectroscopy measurements, and therefore dispose of the graphene sample without utilizing it. Our proposed methods provide an additional metric to explore graphene quality beyond spatially dependent Raman spectroscopy and in a transistor embodiment. The devices in this paper were relatively large. Users simply need to perform I–V measurements of their fabricated devices, then use an automated code to extract the electrical performance. The extracted mobility can be correlated to graphene quality as previously demonstrated 25 , and the location of the Dirac peak can provide crucial information on the impurities present. PN Junction Transistor Architecture and Operation FETs operate by the formation of junctions through an inherent electric field within a bulk semiconductor. The electric field can be created via chemical doping, yet is often fixed upon fabrication. One can create a similar, adaptive effect in a GFET device by manufacturing a lateral electric field with collinear wires suspended above graphene in either a dielectric, an electrolyte, or a slightly conductive liquid. Moreover, the electric field does not need to be referenced to the graphene or any gate electrodes. An assessment of such a transistor comprised of graphene and honey shows that the newly formed lateral electric field (E-lateral) creates an effective PN junction, because any free electrons within the graphene channel are swept to one side of the channel and any holes are swept to the other end. A representation of this idea is illustrated in Fig. 5a and b and an image of the device is shown in Fig. 5c .
When the graphene PN junction is forward biased, a higher concentration of holes is attracted to the negative terminal and a higher concentration of electrons is attracted to the positive terminal. Figure 5d illustrates the fundamental graphene PN junction transport properties for when the electric field generator is switched from off to on. For the case when the E-field remains off, the relationship of R ds − V TG demonstrates typical GFET transport characteristics with a single Dirac peak. However, as the E-lateral is turned on, two Dirac peaks occur and the charge carrier distribution can be clearly identified. Figure 5 ( a ) Schematic representation of a graphene PN junction driven by an embedded lateral electric field in honey. ( b ) Visualization of the PN junction LM-GFET embodiment. ( c ) Location of the liquid-metal Galinstan droplets, honey, and collinear E-field wire generators above graphene. ( d ) Graphene PN junction characteristics as illustrated by the relationship of the drain-to-source resistance as a function of top-gate voltage ( R ds − V TG ) for cases when a lateral electric field (E-lateral) is turned on and off. The sudden changes in R ds illustrate changes in carrier concentration throughout the graphene channel. Full size image PN Junction Transistor Characteristics We further demonstrate the PN junction phenomena for four separate cases. Case A (Fig. 6a ): the left terminal V L of Fig. 5a is set to a negative voltage and the right terminal V R is set to a positive voltage. This creates a forward bias scenario when the source is set to ground. For example, when V L = −15 V and V R = +15 V, E-lateral = 30 V between the two terminals. Case B (Fig. 6b ): V L and V R are both set to negative voltages. Case C (Fig. 6c ): V L is set to a positive voltage and V R is set to a negative voltage, therefore creating a reverse bias scenario. Finally, Case D (Fig. 6d ): V L and V R are both set to positive voltages. Figure 6 Drain-to-source resistance as a function of top-gate voltage ( R ds − V TG ) for ( a ) forward bias [− V L , + V R ], ( b ) both negative [− V L , − V R ], ( c ) reverse bias [+ V L , − V R ], and ( d ) both positive [+ V L , + V R ] when the source of Fig. 5a is set to ground. ( a , c ) Change in the amplitude and separation of the Dirac peaks as the lateral electric field potential (E-lateral) was varied from 30–33 V. ( b ) Illustration of p-type doping as V L and V R are set to negative voltages. ( d ) Illustration of n-type doping as V L and V R are set to positive voltages. Insets: Magnified view of the Dirac peak shifts. Full size image As E-lateral was turned on and V TG was varied in Case A and Case C, dual Dirac peaks indicated the presence of two charge regions along the graphene channel. The amplitude of and distance between the two Dirac peaks were then controlled by altering E-lateral from 30–33 V. This effect can be attributed to charge inhomogeneity within the graphene landscape that was driven by E-lateral. Such techniques are applicable in photodetector applications, as one could carefully control optical transitions with an applied electric field. As the lateral electric field strength is altered, the distance between the two Dirac peaks changes. Moreover, the distance between the Dirac peaks indicates the work function required to generate photocurrent, which is governed by the Fermi energy level \({E}_{F}=\hslash {v}_{F}\sqrt{\pi n}\) and by whether optical interband transitions are allowed 28, 29 .
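To give a feel for the energy scales involved, the following back-of-the-envelope sketch estimates the Fermi level shift produced by the honey electric double layer. It combines the ~2.3 μF/cm² gate capacitance measured above with the textbook graphene Fermi velocity v_F ≈ 1 × 10⁶ m/s and the simple plate-charging relation n = C·ΔV/e; the gate overdrive values are illustrative, not measurements from the paper:

```python
"""Estimate the Fermi level shift induced by the honey electric double layer.

The capacitance is the ~2.3 uF/cm^2 value reported in the paper;
v_F = 1e6 m/s is the standard graphene Fermi velocity (an assumption,
not a value measured in this work).
"""
import math

HBAR = 1.054571817e-34       # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19   # elementary charge, C
V_FERMI = 1.0e6              # graphene Fermi velocity, m/s (textbook value)
C_EDL = 2.3e-6 / 1e-4        # gate capacitance: 2.3 uF/cm^2 -> F/m^2

def fermi_level_eV(delta_v_gate):
    """E_F = hbar * v_F * sqrt(pi * n), with n = C * dV / e (carriers per m^2)."""
    n = C_EDL * abs(delta_v_gate) / E_CHARGE
    return HBAR * V_FERMI * math.sqrt(math.pi * n) / E_CHARGE

for dv in (0.1, 0.5, 1.0):   # illustrative gate overdrives relative to the Dirac point, V
    n = C_EDL * dv / E_CHARGE
    print(f"dV = {dv:.1f} V -> n = {n:.2e} m^-2, E_F = {fermi_level_eV(dv):.2f} eV")
```

A few hundred millivolts of gate overdrive thus moves E_F by roughly 0.1–0.4 eV under these assumptions, consistent with the strong gating action reported for the honey dielectric.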
More noticeably in Case C, as E-lateral was increased, the dual Dirac peaks merged into a single Dirac peak. This effect, worthy of future study, may be attributed to non-linear screening of the spatial charge inhomogeneity within the graphene channel. Perhaps for Case C the charge inhomogeneity is of lower density; therefore, screening by E-lateral is more effective. There was a noticeable shift in the overall transport characteristics of Case A and Case C. To explore this phenomenon, Case B and Case D demonstrate cases in which V L and V R are biased both negative and both positive, respectively. With respect to the intrinsic doping concentration, it was demonstrated that graphene underwent slight p-type doping (right shift) as E-lateral was biased more negatively. This effect can be attributed to an excess of charge carrier diffusion to the electric double layer that increases with E-lateral 30 and was similarly seen for single-gated LM-GFET device operation. The opposite is true when V L and V R were biased both positive and n-type doping occurred (left shift). An additional effect of biasing is the change in the drain-to-source resistance R ds maximum as the lateral electric field strength is altered, insets of Fig. 6b and d . Summary Our transistors have demonstrated a method to rapidly characterize graphene materials with the use of non-toxic eutectic liquid-metal Galinstan interconnects and honey gate dielectrics in a three-terminal graphene field-effect transistor embodiment. The devices characterized in this paper were fabricated in less than 30 minutes and in a general laboratory setting. Our methods are repeatable; therefore, one can adopt our methods into an automated quality assurance process at a per-chip level for end-to-end fabrication, increasing yield and eliminating tedious testing. Despite not being fabricated in a conventional cleanroom, our devices provided performance comparable to the current state of the art. We demonstrated transport characteristics for a single-gate graphene field-effect transistor and introduced adaptive control over PN junction properties with only an applied lateral electric field bias. We anticipate that an adaptive PN junction capability can be adopted into diodes. Furthermore, the manipulation of the physical characteristics of Galinstan is a precursor to flexible devices. Liquid-metal Galinstan can be embedded in microfluidic enclosures and exhibits shape deformability. Many devices can result from such reconfigurability, such as wearable diagnostics and conformal RF devices. Moreover, the liquid state of honey provides the potential for uniform and flexible gate dielectrics, which are currently an issue for PVD-based gate dielectrics. In this paper, the authors only demonstrated a rigid architecture with inexpensive materials. The authors encourage readers to explore alternate embodiments utilizing the liquid materials described and to further explore the potential for flexible applications. The authors admit the use of liquid-metal Galinstan and honey for graphene devices in this paper was discovered by accident. We predict our transistors will lead towards the exploration of alternative materials that are slightly unconventional, in the hope that these innovative discoveries provide a new class of materials that are non-toxic, biodegradable, and require minimal preparation time.
Methods Liquid-metal Graphene Field-effect Transistor Fabrication In this work, graphene was commercially acquired and a quality measure was conducted with a Renishaw InVia 514.5 nm (green) micro-Raman spectroscopy system for three different graphene sites. The absence of a defect band D and analysis of the peak intensity ratio of the 2D and G bands ( I 2D / I G ≈ 2) indicated high quality graphene in all three sites, Fig. 1b . With high-quality monolayer graphene identified, a strip of graphene on Polyethylene Terephthalate (PET) was cut with standard cutting tools and adhered onto a glass microscope slide with the graphene side upward and the PET side downward. Liquid-metal Galinstan droplets with a volume of 0.6 mm³ each were then dispensed with a blunt-tip syringe to act as source and drain electrodes. Honey was commercially acquired and dispensed from a plastic dropper at a volume of 1.0 mm³ between the two liquid-metal droplets to act as the electrolytic gate dielectric. For the PN junction LM-GFET embodiment, two wires were suspended above the graphene and inside the honey gate dielectric. Both wires were biased with an Agilent E3648A dual power supply. Current, voltage, and capacitance measurements were performed with an Agilent 4155C Semiconductor Parameter Analyzer, a probe station, and a Hioki IM3570 Impedance Analyzer in air and in a standard laboratory environment. Ion Gel Synthesis Due to the toxicity and degradation of the ion gel components in contact with atmospheric oxygen, synthesis was conducted in a nitrogen-purged, atmospherically controlled glove box (Vacuum Atmospheres Company OMNI-LAB). Only when the glove box oxygen content was reduced to below 2 ppm was the 1-Ethyl-3 Methylimidizalium Bis(Trifluoromethylsufonyl)imide ([EMIM][TFSI]) ionic liquid (weight 0.467 g, measured with a Fisher Scientific microbalance) mixed with the Polystyrene-Poly (ethylene oxide) (PS-PEO) diblock copolymer (weight 0.036 g) using a magnetic stirrer. 5.8 mL of acetonitrile solvent (weight 4.56 g) was then added with a standard syringe needle to the ionic liquid/copolymer solution to thoroughly dissolve the copolymer. The final ion gel solution consisted of 9.21% [EMIM][TFSI] ionic liquid, 0.71% PS-PEO copolymer, and 90.07% acetonitrile solvent, the last of which was evaporated before I–V measurements. The ion gel was dispensed on the LM-GFET devices with a plastic dropper, and measurements were all conducted in a fume hood to reduce exposure to the ion gel.
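As referenced in the Results, transfer-curve parameters such as mobility and contact resistance are typically extracted by fitting a diffusive-transport model to the measured R_ds(V_TG) curve. The sketch below is a minimal, hypothetical implementation of that widely used model (cf. ref. 18), R = R_c + N_sq/(e·μ·√(n₀² + n²)) with n = C_g(V_g − V_Dirac)/e. The gate capacitance is the value reported in this paper, but the channel geometry, starting values, and the synthetic "measured" curve are illustrative assumptions, not data from this work:

```python
"""Extract GFET mobility and contact resistance from a transfer curve.

Sketch of the diffusive-transport fit commonly used for GFETs (cf. ref. 18).
C_G is the honey EDL capacitance reported in the paper; the channel
geometry (N_SQ) and the synthetic data are assumptions for illustration.
"""
import numpy as np
from scipy.optimize import curve_fit

E = 1.602e-19    # elementary charge, C
C_G = 2.3e-2     # gate capacitance, F/m^2 (= 2.3 uF/cm^2)
N_SQ = 0.2       # channel length/width ratio (assumed wide, short channel)

def gfet_resistance(vg, r_c, mu_cm2, n0_1e16, v_dirac):
    """Total R_ds = contact resistance + gated channel resistance.

    mu_cm2 : mobility in cm^2/V/s; n0_1e16 : residual carrier density in
    units of 1e16 m^-2 (scaled so all fit parameters are of order 1-1000).
    """
    mu = mu_cm2 * 1e-4                # -> m^2/V/s
    n0 = n0_1e16 * 1e16               # -> m^-2
    n = C_G * (vg - v_dirac) / E      # gate-induced carrier density, m^-2
    return r_c + N_SQ / (E * mu * np.sqrt(n0**2 + n**2))

# Synthetic "measured" curve standing in for real data
# (true values: R_c = 800 ohm, mu = 200 cm^2/V/s, n0 = 2e16 m^-2, V_Dirac = 0.1 V).
vg = np.linspace(-1.0, 1.0, 81)
rng = np.random.default_rng(0)
r_meas = gfet_resistance(vg, 800.0, 200.0, 2.0, 0.1) * (1 + 0.01 * rng.standard_normal(vg.size))

popt, _ = curve_fit(gfet_resistance, vg, r_meas, p0=[500.0, 100.0, 1.0, 0.0])
r_c, mu_cm2, n0_1e16, v_dirac = popt
print(f"R_c = {r_c:.0f} ohm, mu = {mu_cm2:.0f} cm^2/V/s, "
      f"n0 = {n0_1e16:.1f}e16 m^-2, V_Dirac = {v_dirac:.2f} V")
```

Fitting the hole and electron branches separately, as reported in the paper, is a straightforward extension: restrict vg to one side of V_Dirac before calling curve_fit.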
Dr. Richard Ordonez, a nanomaterials scientist at the Space and Naval Warfare Systems Center Pacific (SSC Pacific), was having stomach pains last year. So begins the story of the accidental discovery that honey—yes, the bee byproduct—is an effective, non-toxic material for manipulating the current and voltage characteristics of graphene. Ordonez' lab mate and friend Cody Hayashi gave him some store-bought honey as a Christmas gift and anti-inflammatory for his stomach, and Ordonez kept it near his work station for daily use. One day in the lab, the duo was investigating various dielectric materials they could use to fabricate a graphene transistor. First, the team tried to utilize water as a top-gate dielectric to manipulate graphene's electrical conductivity. This approach was unsuccessful, so they proceeded with various compositions of sugar and deionized water, another electrolyte, which still resulted in negligible performance. That's when the honey caught Ordonez' eye, and an accidental scientific breakthrough was realized. The finding is detailed in a paper in Nature Scientific Reports, in which the team describes how honey produces a nanometer-sized electric double layer at the interface with graphene that can be used to gate the ambipolar transport of graphene. "As a top-gate dielectric, water is much too conductive, so we moved to sugar and de-ionized water to control the ionic composition in hopes we could reduce conductivity," Ordonez explained. "However, sugar water didn't work for us either because, as a gate-dielectric, there was still too much leakage current. Out of frustration, literally inches away from me was the honey Cody had bought, so we decided to drop-cast the honey on graphene to act as top-gate dielectric— I thought maybe the honey would mimic dielectric gels I read about in literature. To our surprise—everyone said it's not going to work—we tried and it did." Ordonez, Hayashi, and a team of researchers from SSC Pacific, in collaboration with the University of Hawaiʻi at Mānoa, have been developing novel graphene devices as part of a Navy Innovative Science and Engineering (NISE)-funded effort to imbue the Navy with inexpensive, lightweight, flexible graphene-based devices that can be used as next-generation sensors and wearable devices. "Traditionally, electrolytic gate transistors are made with ionic gel materials," Hayashi said. "But you must be proficient with the processes to synthesize them, and it can take several months to figure out the correct recipe that is required for these gels to function in the environment. Some of the liquids are toxic, so experimentation must be conducted in an atmospheric-controlled environment. Honey is completely different—it performs similarly to these much more sophisticated materials, but is safe, inexpensive, and easier to use. The honey was an intermediate step towards using ionic gels, and possibly a replacement for certain applications." Ordonez and Hayashi envision the honey-based version of graphene products being used for rapid prototyping of devices, since the devices can be created quickly and easily redesigned based on results. Instead of having to spend months developing the materials before even beginning to incorporate them into devices, using honey allows the team to get initial tests underway without waiting for costly fabrication equipment.
Ordonez also sees a use for such products in science, technology, engineering and math (STEM) outreach efforts, since the honey is non-toxic and could be used to teach students about graphene. This latest innovation and publication was a follow-on from the group's discovery last year that liquid metals can be used in place of rigid electrodes such as gold and silver to electrically contact graphene. This, coupled with research on graphene and multi-spectral detection, earned them the Federal Laboratory Consortium Far West Regional Award in the category of Outstanding Technology Development. SSC Pacific is the naval research and development lab responsible for ensuring Information Warfare superiority for warfighters, including the areas of cyber, command and control, intelligence, surveillance and reconnaissance, and space systems.
10.1038/s41598-017-10043-4
Space
Research provides evidence of ground-ice on asteroids
Elizabeth M. Palmer et al, Orbital bistatic radar observations of asteroid Vesta by the Dawn mission, Nature Communications (2017). DOI: 10.1038/s41467-017-00434-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-00434-6
https://phys.org/news/2017-09-evidence-ground-ice-asteroids.html
Abstract We present orbital bistatic radar observations of a small body, acquired during occultation by the Dawn spacecraft at asteroid Vesta. The radar forward-scattering properties of different reflection sites are used to assess the textural properties of Vesta's surface at centimeter-to-decimeter scales and are compared to subsurface hydrogen concentrations observed by Dawn's Gamma Ray and Neutron Detector to assess potential volatile occurrence in the surface and shallow subsurface. We observe significant differences in surface radar reflectivity, implying substantial spatial variations in centimeter-to-decimeter-scale surface roughness. Our results suggest that unlike the Moon, Vesta's surface roughness variations cannot be explained by cratering processes only. In particular, the occurrence of heightened hydrogen concentrations within large smoother terrains (over hundreds of square kilometers) suggests that potential ground-ice presence may have contributed to the formation of Vesta's current surface texture. Our observations are consistent with geomorphological evidence of transient water flow from Dawn Framing Camera images. Introduction Using the communications antenna aboard the NASA Dawn spacecraft, we conducted the first orbital bistatic radar (BSR) observations of a small body, at asteroid Vesta, at grazing incidence angles during entry into and exit from occultations. In this configuration, Dawn's high-gain telecommunications antenna (HGA) transmitted X-band radio waves during its orbit of Vesta, while the Deep Space Network (DSN) 70-meter antennas received the signal on Earth. Dawn's orbital trajectory was designed to ensure that the spacecraft's HGA communications antenna would almost constantly be in the line of sight with ground stations on Earth 1, but on occasion the spacecraft inevitably passed into occultation behind Vesta—lasting as briefly as 5 min or as long as 33 min. By continuously transmitting basic telemetry data from Dawn's antennas during these events, the opportunity arose to observe surface reflections of HGA-transmitted radar waves from Vesta's surface. Over the past decades, orbital BSR experiments have been used to assess the textural (and in some cases, dielectric) properties of the surfaces of terrestrial bodies such as Mercury 2, Venus 3, the Moon 4, 5, 6, Mars 7, 8, Saturn's moon Titan 9, and now comet 67P/CG 10. In contrast to most orbital BSR experiments, Dawn's HGA beam intersected Vesta's surface at grazing angles of incidence near 89° and in a microgravity environment with substantial variability in its gravity field 11, leading to a low and variable orbital velocity and hence a more challenging detection of the Doppler-shifted surface echo (hereafter simply referred to as the "surface echo"). While the Mars Global Surveyor (MGS), for instance, orbited at ~3,400 m s−1 with respect to Mars' rotating surface during its BSR experiment, Dawn orbited Vesta at a relative velocity of only ~200 m s−1. A high orbital velocity like that of the MGS results in a large Doppler shift between surface-reflected echoes and the direct signal, which greatly simplifies the task of distinguishing the two peaks during spectral analysis. In the case of Dawn's BSR observations of Vesta, however, much shorter averaging time and higher frequency resolution are needed to distinguish surface echoes 12.
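To make the resolution requirement concrete, here is a back-of-the-envelope sketch (ours, not from the paper) relating FFT bin width to integration time; the 2.5-s value anticipates the choice described later in the Methods.

```python
# Back-of-the-envelope check (ours, not from the paper): the FFT bin
# width is roughly 1/T for integration time T, so resolving a
# differential Doppler shift of a few Hz demands second-scale windows.
for t_int in (0.1, 0.5, 2.5):        # seconds; 2.5 s is the paper's choice
    df_res = 1.0 / t_int             # approximate frequency resolution (Hz)
    print(f"T = {t_int:>4} s -> bin width ~ {df_res:.1f} Hz")
# MGS-class separations (~10 kHz) are resolvable with millisecond
# integration; Dawn's ~2-20 Hz separations are not.
```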
Through radar power spectral signal analysis of each surface echo, we assess relative surface roughness at centimeter-to-decimeter scales on Vesta and address its application to understanding the textural evolution of the surface. This is accomplished by measuring the radar cross section σ of each area that is illuminated by the radar lobe of Dawn's HGA (hereafter referred to as the "site" or the "echo site"), which quantifies the cross-sectional surface area that—if equally scattering in all directions—would reflect the echo power measured at the receiver. Hence, larger values of σ are associated with stronger echoes. In turn, echo strength depends on the roughness of the surface at wavelength scales, the angle of incidence, and the intrinsic reflective and absorptive (dielectric) properties of Vesta's surface material at radar wavelengths. Assuming each surface echo is measured at the same angle of incidence and is reflected from equal surface area, we normalize σ to the site of strongest reflection and use estimated dielectric properties of the surface material to assess relative centimeter-to-decimeter-scale surface roughness on Vesta with respect to a given reference site. Through the comparison of relative surface roughness with estimated surface ages of geologic units based on two crater counting methods 13, we assess the physical processes that shaped Vesta's surface roughness at the same scales, as was done previously for the Moon (e.g., ref. 14). In turn, surface roughness provides insight into the shock history of the body 15 and the identification of fracturing mechanisms, such as those resulting from thermal erosion caused by diurnal expansion and contraction of volatiles within the surface host rocks 16. The comparison of relative roughness with observations of hydrogen concentration [H] from Dawn's Gamma Ray and Neutron Detector (GRaND 17); with hydrated material distribution from Dawn's Visible and Infrared Mapping Spectrometer (VIR 18); and with surface thermal inertia and associated multi-meter-scale topography modeled from VIR data 19 enables further investigation into the relationship between volatile presence and centimeter-to-decimeter-scale surface roughness. Characterizing the roughness properties of Vesta's surface and of other small bodies is also key to assessing landing, anchoring, sampling, and surface trafficability in future missions 20. Given the low risk, low operational constraints, and opportunistic nature of the orbital BSR experiment by Dawn, other planetary missions can conduct similar observations even in the unlikely case of a major science payload failure. Orbital BSR can also be used to constrain ambiguities associated with surface roughness and, potentially, surface dielectric properties of other small bodies, as is proposed for the Jupiter Icy Moons Explorer (JUICE) mission at Ganymede 21. In our study of Dawn's BSR observations of asteroid Vesta, we successfully detect radar echoes from Vesta's surface and find significant variations of radar reflectivity across the surface—where stronger surface echoes suggest smoother surfaces. Unlike the Moon, however, Vesta's surface roughness variations cannot be explained by cratering only—particularly where smoother areas overlap areas of heightened [H], suggesting that volatile-involved processes have also contributed to shaping the surface of Vesta.
Results Observation geometry and measurement constraints During the BSR experiment at Vesta, the HGA aboard Dawn is used to transmit telemetry data at X-band radar frequency (8.435 GHz, 3.55 cm wavelength) while the three 70-m DSN antennas at Goldstone (USA), Canberra (Australia), and Madrid (Spain)—which have similar receiving characteristics (Supplementary Table 1)—are used to receive (Supplementary Fig. 1) 22. Throughout the Dawn mission, the HGA continuously transmits right-hand circularly polarized (RCP) radio waves with a beamwidth of ~1.6°. The transmission frequency of the HGA is typically driven by the highly stable DSN uplink signal, as this allows for accurate Doppler and range tracking measurements 1, 11, 22. When the uplink signal is not available—as anticipated in the minutes preceding and following an occultation of Dawn behind Vesta—the transmission frequency is instead driven by an internal auxiliary oscillator on board the spacecraft 1, 22. While the onboard oscillator generates too much Doppler noise to be used for gravity science to measure the absolute Doppler shift of the direct signal 23, its frequency is sufficiently stable over the integration time of BSR measurements—a few seconds rather than one minute—to measure the relative Doppler shift between the surface echo and direct signal, which are equally affected by slow Doppler changes. Due to the opportunistic nature of the experiment, the HGA also remains in a fixed orientation pointed toward Earth throughout each BSR observation 1. As a consequence, Dawn's transmitted radio waves scatter from Vesta's surface just before and after each occultation of the Dawn spacecraft behind Vesta, resulting in surface echoes at high grazing incidence angles of ~89°. The spacecraft's trajectory is also designed to ensure that Dawn's solar panels are constantly illuminated by sunlight. This geometry allows the primary observation instruments to have maximized visibility of Vesta's sunlit surface throughout each orbit 1. Consequently, while Dawn is in a polar orbit around Vesta 24, the sites intercepted by the spacecraft's HGA beam yield surface echoes at mid-latitudes between 30°S and 45°N (Supplementary Fig. 2). In contrast to previous planetary BSR observations performed for large bodies, with incidence angles between ~0° and ~80°, the surface reflections from Dawn's BSR experiment are almost entirely in the regime of forward scattering, in which the polarization of a circularly transmitted wave is largely conserved even after reflection from the target's surface (e.g., ref. 25). As a consequence, we cannot employ the typical method of measuring the circular polarization ratio to evaluate surface roughness or the dielectric constant from surface echoes, e.g., ref. 26, and instead develop a method to derive relative surface roughness by measuring the relative strength of reflected power from each echo site. While DSN station operators record receiver system temperatures as part of standard calibration procedures 26, this information was not included with the raw BSR data set, as these data were acquired during the downlink of engineering-only telemetry. Since radar data acquired by the DSN are not calibrated in absolute voltage, we instead calibrate the received power to the theoretical received power derived from the known orbital geometry and the transmitter and receiver specifications.
While each of the 70-m DSN antennas is held to the same measurement requirements (listed in Supplementary Table 1), we observe a decrease of ~10% in the direct signal's ratio of measured to theoretical power over the course of a 33-min occultation, and differences in the ratio by as much as 24% from orbit to orbit. Fluctuations in the received power are attributed to variations in the pointing accuracy of the HGA aboard Dawn, since one of the four reaction wheels aboard the spacecraft—used to counteract pointing errors—failed prior to rendezvous with Vesta. We minimize the effect of variable pointing accuracy by measuring the direct signal within a few seconds of the occultation echo observation. A full description of our error analysis is provided in the "Methods" section. As previously mentioned, in further contrast to other planetary BSR experiments, Dawn also has a relatively slow orbital velocity (~200 m s−1) with respect to Vesta's surface. Since the relative motions of the target surface, the transmitter, and the receiver determine the Doppler shift that separates surface echoes from the frequency of directly transmitted radar waves, we expect a small Doppler shift from Dawn's BSR experiment at Vesta. As a result, a frequency resolution of a few hertz is necessary to resolve Doppler-shifted surface echoes from the direct signal in the received power-frequency spectra. To optimize the trade-off between signal-to-noise ratio (SNR), frequency drift, and spectral resolution, our final spectra are each averaged over two windows of 2.5-s integration time. Each spectrum is separated by a 1-s time interval in Fig. 2. Orbital BSR observations Our analysis begins with the calculation of the expected differential Doppler shift between the direct signal and surface reflections. This is compared with the observed differential Doppler shift in the power spectra to confirm the detection of surface echoes. From the power spectrum of each echo, we measure radar cross sections of Vesta's surface and finally estimate the relative surface roughness of each echo site. The differential Doppler shift δf between the direct signal and surface echo is attributed to the rotation of Vesta and the relative orbital motion of the Dawn spacecraft (see Supplementary Tables 2 and 3). Following the procedure for planetary BSR experiments 27, theoretical δf is calculated for occultation entry of orbit 355. The surface echo is determined to have a Doppler-shifted frequency within ~2-20 Hz of that of the direct signal when considering uncertainties in (1) the ephemeris position of the spacecraft 28; (2) the orbital velocity of the spacecraft due to deviations in the gravity field from the homogeneous model; (3) the precise latitude and longitude of the echo-site center, given the large area that is illuminated by the HGA beam at grazing incidence; and (4) the estimated radius (and subsequent rotational velocity) of the echo site, due to surface topography that is illuminated within the large radar footprint. The theoretical value of δf ~2 Hz is therefore the lower limit of the differential Doppler shift between the surface echo and direct signal, under the assumptions that (1) the position and rotational velocity of the echo site on Vesta are well represented by a point on the surface, and (2) Dawn's orbital velocity is constant.
Our calculation is consistent with the expectation of a small frequency separation due to Dawn's low orbital velocity, orders of magnitude smaller than observed in typical orbital BSR experiments at larger planetary bodies—such as the orbital BSR experiment by the Mars Global Surveyor spacecraft, which detected radar reflections from the martian surface that were separated by as much as 10 kHz from the received direct signal 8. Figures 1 and 2 show the temporal progression of received power spectra from BSR observations during orbit 355. In Fig. 1, black spectra have the same circular polarization as the transmitted wave, RCP, while gray spectra correspond to the power received with the opposite (left-hand) circular polarization, LCP. Fig. 1: Typical progression of received radar signal over the course of an occultation. The frequency spectra show power received in the same circular (SC) and opposite circular (OC) polarization (a) before/after occultation, (b) during entry into occultation, (c) during occultation, and (d) during exit from occultation of orbit 355. All surface echoes during occultation entry are Doppler-shifted to lower frequencies than the direct signal, while all surface echoes during occultation exit exhibit Doppler shifts to higher frequencies than the direct signal. Fig. 2: Progression of received radar signal throughout entrance into and exit from occultation during orbit 355. Occultation entry (a) spans ~16 s, while occultation exit (b) spans ~25 s. Each spectrum is generated from two averages of 2.5-s integrated spectra, corresponding to a total of 5 s of radar data that start at each listed timestamp. Spectra are vertically offset for display purposes, where each successively higher spectrum corresponds to a step forward by 1 s. Figure 1a shows the direct signal prior to occultation at its peak strength of 50 dB relative to the noise level. The presence of measurable LCP power in the direct signal indicates imperfection in the transmitting antenna and the high sensitivity of the 70-m DSN receivers, as the power in LCP is ~2.5% of (~16 dB below) the power measured in RCP. Panel (b) shows a typical power spectrum during Dawn's entrance into occultation behind Vesta. Most of the direct signal is still visible to the receiver on Earth, while a secondary peak (the surface echo) emerges with a relative Doppler shift δf of −12 Hz with respect to the direct signal. The LCP component is now ~28 dB below the RCP peak, potentially because the main lobe of the antenna has become partially obstructed by Vesta's surface. Panel (c) shows typical RCP and LCP spectra observed amid full occultation of Dawn behind Vesta, which consist solely of receiver system noise. In the final panel (d), Dawn has partially exited from occultation behind Vesta. Most of the direct signal is again in the line of sight with the receiver, while a secondary surface echo peak is observed with a relative Doppler-shifted frequency +9 Hz higher than that of the direct signal. The LCP component is again ~28 dB weaker than the received RCP power, potentially due to partial obstruction of the antenna lobe. The observed δf of −12 Hz during occultation entry is consistent with our calculated range of theoretical δf values between ~2 and 20 Hz, where the upper limit of theoretical δf is attributed to uncertainties in the precise position and velocity of the spacecraft, and in the location and subsequent radius and rotational velocity of each echo site.
Furthermore, the direct signal has a frequency width of ~10 Hz at 31 dB below the peak at full strength, such that the secondary peak of an emerging surface echo is not observable until the direct signal has been sufficiently weakened behind Vesta or until δf is greater than half the frequency width of the diminishing direct signal. This observation is further emphasized in Fig. 2a, which shows the progression of RCP power spectra over the course of 16 s during Dawn's gradual entry into occultation behind Vesta during orbit 355. The lowest plotted spectrum is the same as that of panel (a) in Fig. 1, showing the direct signal prior to occultation. With each successive second (indicated by the next higher, vertically offset spectrum), direct signal power decreases as the HGA boresight passes behind Vesta's horizon, while reflections from the surface emerge at a lower frequency until the topmost spectrum, at which point the spacecraft is completely obscured by Vesta; only receiver noise is detected. Figure 2b shows the same temporal progression for occultation exit. The lowest spectrum shows the first moment of observable surface echo after occultation, and progresses to the top spectrum, at which point the direct signal is fully in the line of sight with the receiver. Another consequence of radar reflections at high grazing-incidence angle is that the polarization of the transmitted RCP waves is conserved in the forward-scatter direction (e.g., ref. 25). Since surface echoes do not contain a measurable LCP component, the LCP spectra are excluded from Fig. 2. For reference, plots of the received null LCP power spectra during entry into and exit from occultation of orbit 355 are provided in Supplementary Fig. 3. In total, 20 cases of surface echoes are detected at mid-latitudes, 14 of which (1) reflect from sites with minimal topographic variability, and (2) are sufficiently distinguishable from the direct signal to allow for characterization of the surface's scattering properties in these regions. The radar-illuminated sites of the 14 echoes are plotted in Fig. 3a on an equirectangular projection of Vesta's surface. High-resolution images of the smoothest and roughest observed echo sites are provided in Supplementary Fig. 5. Fig. 3: Comparison of BSR results with observations by the GRaND and VIR instruments aboard Dawn. Map (a) shows the distribution of relative radar cross section interpolated between echo sites and overlain upon an equirectangular projection of Vesta's surface; (b) shows subsurface [H] to a depth of a few decimeters 17; (c) shows the distribution of hydrated material at the surface 18; and (d) shows the surface's thermal inertia and multi-meter-scale topography modeled from VIR thermal observations 19. O's and X's mark locations where BSR surface echoes were detected during the associated orbit. Vesta's surface radar properties are explored hereafter using the forward-scatter radar cross section σ with units of km 2. This parameter quantifies the cross-sectional surface area of a perfectly isotropic scatterer that would reflect the same echo power that is measured at Earth. For a given acquisition geometry, σ depends on the surface's dielectric and roughness properties at the radar wavelength, and is determined from the ratio of received echo power to transmitted power 12.
Typically, σ is normalized to the surface area illuminated at each echo site (σ 0) and directly compared with backscatter measurements from other observations of the target surface (e.g., ref. 29) or of other planetary bodies. However, due to the ambiguity associated with topographic shadowing effects at grazing incidence and the lack of directly comparable measurements in the forward-scattering regime, each σ measured at Vesta is instead assumed to have (1) approximately equal area illuminated during echo reflection, and (2) approximately equal power incident on the surface at 89°. Forward-scatter σ is then normalized to σ max, i.e., that of the site with maximum observed echo power. The resulting differences in relative forward-scatter radar cross sections (σ/σ max) are then used to infer variations in roughness at centimeter-to-decimeter scales across the surface of Vesta. Figure 3a shows the resulting distribution of (σ/σ max) on Vesta, where the reference site for σ max = 3588 ± 200 km 2 is located northwest of Caparronia crater (occultation exit of orbit 406). Values of (σ/σ max) range from −16.3 ± 0.5 dB at the site of weakest measured echo power to zero dB at the σ max reference site. The intrinsic reflective and absorptive (i.e., dielectric) properties of Vesta's surface material, as estimated from Dawn's VIR observations, are found to be constant throughout the upper regolith in X- and S-band 30. For our sites, which have minimal topographic variability, changes in (σ/σ max) are therefore attributed to spatial variations in surface roughness at centimeter-to-decimeter scales across Vesta. Radar echoes received from smoother sites reflect strongly in the forward direction and hence exhibit the strongest measured power in the regime of forward scatter, whereas weaker echoes are observed when reflected from rougher surfaces 12. Echo sites with the highest (σ/σ max) ratios (blue in Fig. 3) therefore correspond to the smoothest observed surfaces on Vesta, while sites with the lowest ratios (red) represent the roughest observed surfaces. Table 1 contains measurements of (σ/σ max) for all surface echoes, including brief descriptions of the terrain associated with each reflection site 31. Table 1: Forward-scatter radar cross sections σ of Vesta's surface measured at each echo site from high-incidence BSR surface reflections. Discussion Vesta is presumed to have been largely depleted of volatiles during its differentiation 24, but recent observations by Dawn's GRaND and VIR instruments suggest the potential introduction of hydrated material through meteoritic impacts 17, 32. Figure 3 shows the comparison of (a) the spatial distribution of relative forward-scatter radar cross section (σ/σ max)—inversely proportional to centimeter-to-decimeter-scale surface roughness—with (b) the distribution of [H] measured by GRaND at a depth of a few centimeters to decimeters 17; (c) the distribution of hydrated surface material by VIR 18; and (d) thermal inertia and multi-meter-scale topography modeled from VIR thermal observations 19. All echo sites within ±28° latitude of the equator have (σ/σ max) > −7 dB, [H] > 0.015%, and overlap regions with hydrated surface material, suggesting that the smoothest observed terrains at centimeter and decimeter scales (relative to the smoothest reference site, which is observed northwest of Caparronia crater) are correlated with heightened [H] near Vesta's equator.
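As a worked example of the quoted dynamic range (our arithmetic, not the paper's), the absolute cross section at the weakest echo site follows directly from σ max and the −16.3 dB ratio:

```python
# Worked example (ours): converting the quoted relative radar cross
# section back to an absolute value.  sigma_max = 3588 km^2 is the
# reference site; the weakest echo is -16.3 dB relative to it.
sigma_max_km2 = 3588.0
ratio_db = -16.3
sigma_min_km2 = sigma_max_km2 * 10 ** (ratio_db / 10)
print(f"sigma at weakest echo site ~ {sigma_min_km2:.0f} km^2")
# -> roughly 84 km^2, i.e. a ~43x weaker forward-scattered echo
```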
While [H], in turn, has been correlated with the presence of low-albedo surficial deposits of hydrated material ("dark material") 17, these deposits are proposed to have the mineralogical composition of carbonaceous chondrites 32, which have dielectric properties indistinguishable from those of the surrounding lunar-like regolith—specifically, ε′ is estimated to be ~2.4 for Vesta's basaltic regolith 30, ε′ ~2.6 as measured for porous ordinary chondrites 33, and ε′ ~2.6-2.9 as measured for porous carbonaceous chondrites 34. Hence, the observed correlation of radar reflectivity with [H] is due to textural variation and not to variations in the dielectric properties of the surface. The regolith should otherwise be particularly rough at the centimeter-to-decimeter scale in these geologic units—the cratered highlands, ejecta blankets of Octavia and Marcia crater, ejecta material from Rheasilvia, and Divalia Fossae 31—in the absence of smoothing erosional processes, such as the melting, run-off, and recrystallization of water ice after an impact, suggesting the potential presence of subsurface volatiles at these echo sites. The occurrence of heightened [H] with rougher surfaces, such as in the northern cratered trough terrain northeast of Caparronia crater, potentially results from the following sequence: (1) initial smoothing due to impacts that induce ground-ice melting, run-off, and re-crystallization of buried ice; and then (2) subsequent fracturing due to thermal erosion that is caused by the expansion and contraction of remnant volatile inclusions within the breccias and impactites that constitute a large fraction of the regolith's surface and shallow subsurface (i.e., the first meter), due to significant diurnal temperature fluctuations. This hypothesized mechanism is further supported by the observation of gullies and flow features on crater walls from Dawn Framing Camera (FC) images that suggest transient melting of ground ice during the post-impact process 35. Above 30°N, roughness is observed to lessen with increasing [H], suggesting less fracturing from thermal erosion at higher latitudes, which is consistent with minimal solar illumination at northern latitudes due to seasonal shadowing during Dawn's orbit of Vesta (e.g., ref. 19). On the Moon, there is a strong correlation between the surface age of geologic units and their radar backscatter properties, implying that impact cratering is the dominant process that governs the texture of the lunar regolith at centimeter-to-decimeter scales 36. Younger units, especially crater ejecta, are rough at centimeter and decimeter scales due to the presence of shocked and fractured meter- and centimeter-sized fragments from impacts; over time, older cratered surfaces are covered with a layer of fine debris that buries rock fragments, therefore appearing smooth at the surface to radar at centimeter and decimeter scales 36. However, we do not observe the same correlation on Vesta between relative surface roughness and relative surface ages. Williams et al. 13 studied the chronology of Vesta's various geologic units through crater counting methods and developed two models for surface ages: one extrapolated from lunar-derived chronology, and the other from models of asteroid belt dynamics. In Supplementary Fig.
4 we plot the approximate surface age of each echo site's geologic unit for each chronology against its observed relative radar cross section (σ/σ max) value, but we find no observable correlation of radar scattering properties with surface age, suggesting that cratering cannot be the only process shaping Vesta's surface texture. The lack of observable correlation between cratering and radar-wavelength surface texture is further evidenced by high-resolution Dawn FC images of Vesta at the sites of strongest and weakest BSR surface-echo reflections—i.e., the smoothest and roughest sites observed by BSR, respectively, at centimeter-to-decimeter scales. Panel (a) of Supplementary Fig. 5 shows a regional view of the smoothest echo site at 66 m per pixel resolution on the left, and a subset of the echo site at 18 m per pixel resolution on the right. Panel (b) shows the roughest echo site at 62 m per pixel and 22 m per pixel. We do not observe a correlation between surface topography at the scale of meters to tens of meters and the centimeter-to-decimeter-scale surface roughness that is observed by Dawn's BSR investigation. Together, these observations suggest that, unlike the Moon, impact cratering processes cannot solely explain Vesta's surface roughness, and that fracturing arising from subsurface volatile occurrence may have contributed to the formation of the current surface texture. This is further supported by the thermal inertia map of Vesta's surface in Fig. 3d, which also shows no correlation between BSR-derived centimeter-to-decimeter-scale roughness and the multi-meter-scale topography that is derived during the process of thermal inertia modeling 19. In total, we have identified 10 probable sites for potential shallow subsurface volatile occurrence, including occultation entry during orbits 644, 719, and 720, and occultation exit during orbits 355, 406, 407, 521, 719, and 720. Each corresponds to a site with the smoothest to intermediate surface roughness on Vesta and exhibits heightened subsurface [H] between 0.025 and 0.04% as observed by GRaND 17. With regard to future landing and sample collection missions on asteroids 20, the observed variation of centimeter-to-decimeter-scale surface roughness across Vesta further emphasizes the importance of systematically carrying out these opportunistic observations to support safe landing, proper anchoring, and optimized collection of potentially volatile-enriched samples. BSR observations constrain the spatial variability of surface roughness on small bodies and can thereby support safe trafficability—especially for equatorial regions such as those observed on Vesta, which are frequently considered for landing and sample return sites 20. While topographic maps derived from orbital observations provide first-order information about large-scale obstacles, high-resolution orbital BSR observations yield information about surface roughness at centimeter-to-decimeter scales that cannot be derived from Earth-based observations. On Vesta, no correlation is observed between topographic elevation and the distribution of surface roughness. In summary, the orbital BSR experiment at asteroid Vesta emphasizes the importance of utilizing standard communications antennas aboard spacecraft to derive constraints on surface roughness at sub-topographic scales. In our future work, we will apply the same analysis to BSR observations of icy asteroid Ceres, the second target of the Dawn mission, to understand potential subsurface volatile occurrence.
Methods Experimental constraints and data selection During the BSR experiment at Vesta, the Dawn spacecraft's HGA was used to transmit telemetry data at X-band radar frequency (8.435 GHz, 3.55 cm wavelength) in RCP, while the three 70-m DSN antennas on Earth—at Goldstone (USA), Canberra (Australia), or Madrid (Spain)—were used to receive 22. The three receiving systems have the same measurement requirements in terms of noise temperature, antenna gain, pointing loss, and polarization loss, as listed in Supplementary Table 1. In two-way coherent downlink mode, Dawn's transmission frequency is driven by the DSN's uplink frequency and yields a Doppler stability (Δf/f) of 10^-12 over 60-s measurements, i.e., a frequency drift of ~0.0001 Hz s−1 1. In one-way or non-coherent downlink mode—used when the uplink signal is expected to be unavailable, such as the minutes preceding and following an occultation—the transmission frequency is driven by an onboard internal auxiliary oscillator with a maximum frequency drift of 0.05 Hz s−1 37. One-way downlink mode contains too much Doppler noise to be used for gravity science 23 but is sufficiently stable for BSR measurements, since they are integrated over a much shorter timespan—a few seconds, as opposed to one minute for gravity science measurements—and because we are measuring the relative Doppler shift between the surface echo and direct signal, which are equally affected by slow Doppler changes. Dawn's HGA was almost constantly pointed at ground stations on Earth for communication 1, so by continuously transmitting basic telemetry information, we had the opportunity to observe surface reflections of the radar signal just before and after each occultation of the Dawn spacecraft behind Vesta at highly oblique incidence angles of ~89°, as depicted in Supplementary Fig. 1, similar to the geometry of the BSR experiment at Mars by the Mars Global Surveyor 38. In order to analyze the resulting BSR data, we selected occultations that occurred specifically during Dawn's lowest-altitude mapping orbit (LAMO) around Vesta, where the HGA transmissions yield the strongest observable surface echoes. During LAMO, there were 16 unique orbits during which an occultation occurred, and therefore 16 individual entries and 16 individual exits. Out of these events, several were discarded based on the following criteria: (1) if the radar amplitude data files containing RCP and LCP were copies of each other; (2) if the direct (carrier) signal's power did not consistently exhibit 50 dB strength before or after its occultation; (3) if the surface echo was indistinguishable from the noise (i.e., < ~3 dB); (4) if the window of occultation entry or exit was so brief that the signal disappeared faster than our temporal resolution (e.g., if the carrier dropped by 30 dB within a few seconds, and usually showed no indication of a measurable surface echo); (5) if the δf was too small to distinguish the surface echo's power from that of the carrier; or (6) if the surface echo occurred in a region of high topographic variability with respect to the incident HGA beam size, which is assessed below. In all, five entries and nine exits passed our criteria for analysis and measurement of σ (km 2) and σ/σ max (dB). The following sections describe the method used to process and analyze the resulting BSR data. Supplementary Table 1 summarizes the acquisition parameters of the BSR experiment.
Processing DSN BSR data Radio waves received by the DSN 70-meter antennas are recorded as amplitude (voltage) versus time, and are collected in two channels: RCP and LCP. Within each channel, amplitude data are recorded in two components, in-phase I and quadrature Q, which correspond to the real and imaginary parts of the complex voltage, respectively. The raw modulated telemetry data collected at the DSN antenna are sampled at a rate of 16 kHz. In addition, a Doppler shift correction is applied to the X-band receiving frequency in order to counteract the calculated Doppler shift induced by Dawn's orbit around Vesta. However, the DSN utilizes a local oscillator to subtract the ephemeris-predicted Doppler shift, and what persists is a low-frequency error component, which we observe to be as much as 300 Hz. The offset frequency of the carrier signal is observed to decrease as the spacecraft moves further away from Earth during occultation entry—and, as expected, to increase as the spacecraft moves closer toward Earth after leaving occultation. Given that most of the BSR observations of occultations were conducted in one-way mode, the accuracy of the predicted receiving frequency was also diminished in the time leading up to and following these unique orbital geometries. Hence, when radar amplitude data are plotted against time, a change is observed in the overall envelope frequency of the sinusoidal amplitudes over the course of, for example, 1 min, during which the carrier signal may shift from 200 to 100 Hz. Notably, this frequency offset does not impact the relative Doppler shift between the surface echo and direct signal. To seek surface echoes, the DSN I-and-Q amplitude time series data are converted into the frequency domain by taking the complex fast Fourier transform. The power frequency spectrum is generated in voltage-squared per Hz such that: $$P_{\rm spec} = \frac{1}{t_{\rm spec}}\left|\mathrm{FFT}\left(A_I + iA_Q\right)\right|^2 \quad (1)$$ The duration t spec over which to generate each power spectrum is selected only after assessing the theoretical Doppler separation δf between the direct signal and any surface echoes, as this dictates the frequency resolution necessary to accurately distinguish each surface echo. Background noise is smoothed by averaging power spectra together, but surface-echo strength may be diminished in the process. The latter occurs when spectra are averaged over times that lack a surface-echo signal or while the echo has changed in frequency. The final parameters used to generate power spectra are outlined at the end of the following section. Calculating the differential Doppler shift In order to verify the detection of surface reflections within BSR power spectra, the theoretical differential Doppler shift δf is calculated between the direct signal and grazing surface-reflected echoes, and compared with measured values. Theoretical δf is estimated from the known positions and line-of-sight velocities between (1) the Dawn spacecraft (D) at the time of transmission; (2) the center coordinate of the radar-illuminated surface on Vesta (Vpt) when the signal reaches the surface; and (3) the receiving antenna on Earth (E) after accounting for light travel time; each of which is extracted from the reconstructed trajectory of Dawn's orbit that is provided in SPICE ephemeris data 28. For occultation entry during orbit 355, Supplementary Fig.
2 shows the instantaneous total velocities v of Dawn, the surface point of reflection on Vesta (hereafter referred to as "the echo site" or "the site"), and the receiving antenna on Earth within the bistatic plane—defined as the plane containing all three bodies at a given moment 12. All positions and velocities of D, Vpt, and E are obtained with respect to Vesta's inertial frame of reference, such that the origin of the Cartesian coordinate system is Vesta's center of gravity and Vesta's equator defines the xy-plane. The individual components of each body's instantaneous velocity are listed in Supplementary Table 2 for occultation entry during orbit 355, where each velocity \(v_{\rm A}\) is given in m s−1 along the line of sight with respect to the position \(\hat r_{\rm B}\). Velocity components are defined to be positive if body A is moving toward target B, and negative if away. For example, the notation \(v_{\rm D}\hat r_{\rm Vpt}\) = −82.2 m s−1 (column 2, row 4) indicates that Dawn is traveling away from the echo site at a speed of 82.2 m s−1. The differential Doppler shift δf between the surface echo and direct signal is calculated by: $$\delta f = \Delta f_{\rm echo} - \Delta f_{\rm direct} \quad (2)$$ where the absolute Doppler shift Δf of the direct signal or surface echo depends on the relative velocity of the transmitting and receiving bodies along their line of sight, as described by Simpson 27. Supplementary Table 3 shows the mathematical definition and calculation of individual contributions to the total theoretical differential Doppler shift (δf total) between the surface echo and direct signal, which are the result of motions along the line of sight between Dawn and Earth (column 1), Dawn's orbital motion around Vesta (column 2), and Vesta's rotation (column 3). The absolute Doppler shift of the direct signal Δf direct is first calculated from the combination of Dawn's instantaneous line-of-sight velocity toward the receiving antenna on Earth (\(v_{\rm D}\hat r_{\rm E}\)), and Earth's line-of-sight velocity toward Dawn's position (\(v_{\rm E}\hat r_{\rm D}\)). The differential Doppler shift due to Dawn's orbital motion (δf orbit) is then calculated from the Doppler shift contributed by \(v_{\rm D}\hat r_{\rm Vpt}\) and its difference from Δf direct. In turn, the differential Doppler shift contributed by the rotation of Vpt (δf rotation) on Vesta's surface is calculated from the Doppler shift contributed by \(v_{\rm Vpt}\hat r_{\rm D}\) and \(v_{\rm Vpt}\hat r_{\rm E}\) and their difference from Δf direct. The combined Doppler shift contributions of δf orbit and δf rotation yield the total theoretical δf total, which is calculated to range from ~2 Hz, as listed in Supplementary Table 3, to as much as 20 Hz when considering uncertainties in spacecraft position and Vesta's rotational velocity (detailed further below in our error analysis). Hence, the surface echo during occultation entry of orbit 355 is calculated to have a frequency shift that ranges from ~2 to 20 Hz higher or lower than that of the received direct signal. This calculation confirms that in the configuration of grazing incidence during occultation observations of Vesta by Dawn, and due to the spacecraft's low orbital velocity of 200 m s−1, the frequency separation δf between surface echoes and the direct signal will be small.
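The bookkeeping above can be made concrete with a minimal numerical sketch (ours, not the authors' code) of Eq. (2) under the sign convention just described. Only the quoted v_D·r̂_Vpt = −82.2 m s−1 and the 3.55 cm wavelength are taken from the text; the remaining line-of-sight velocities are illustrative placeholders, and Earth's motion is omitted since it contributes nearly equally to both paths.

```python
# Minimal sketch (ours) of the differential Doppler bookkeeping in
# Eq. (2).  The Doppler contribution of a body moving along a line of
# sight is v / wavelength, with velocities positive toward the target
# (the paper's convention).  Only v_D_Vpt is quoted in the text; the
# other values are illustrative placeholders.
WAVELENGTH = 0.0355            # X-band wavelength, m

def doppler(v_los):
    """Doppler shift (Hz) for a line-of-sight velocity (m/s, + toward)."""
    return v_los / WAVELENGTH

v_D_E = -82.5     # Dawn toward Earth (placeholder)
v_D_Vpt = -82.2   # Dawn toward echo site (quoted in text)
v_Vpt_D = 0.3     # echo site toward Dawn, from Vesta's rotation (placeholder)
v_Vpt_E = 0.1     # echo site toward Earth (placeholder)

df_direct = doppler(v_D_E)
df_echo = doppler(v_D_Vpt) + doppler(v_Vpt_D) + doppler(v_Vpt_E)
print(f"delta-f = {df_echo - df_direct:+.1f} Hz")  # of order Hz to tens of Hz
```

The point of the sketch is that the absolute Doppler shifts are each of order kilohertz, but at ~89° incidence the echo and direct paths are nearly parallel, so their difference collapses to a few hertz.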
Higher frequency spectral resolution is therefore necessary to distinguish the Doppler-shifted surface echoes from the direct signal at Vesta. However, this requires a longer integration time of the observation. We chose a 2.5-s integration time to obtain a frequency spectral resolution of ~0.4 Hz as a tradeoff between SNR, frequency drift, and resolution. Our final frequency spectral analysis averages two 2.5-s looks, and repeats this calculation shifting the start time of the averaging by 1 s. The resulting spectra are exemplified for occultation entry and exit during orbit 355 in Supplementary Fig. 3, which shows the power received both in RCP (reproduced from Fig. 2) and in LCP. Calculating the radar cross section at high incidence In order to assess the surface's geophysical properties that contribute to the observed surface echoes, the radar cross section σ of Vesta's surface is calculated in m 2 from the BSR equation 12. This parameter is defined as the effective surface area that isotropically scatters the same amount of power as the echo site on Vesta, such that larger values of σ are associated with stronger surface echoes. Assuming that each echo site (1) has approximately the same surface area illuminated by radar and (2) is observed at the same geometry (89° incidence), relative differences in σ imply differences in geophysical properties of the surface. The latter assumption is supported by excluding surface echoes that occurred in regions of high topographic variability with respect to the incident HGA beam diameter, where the illuminated surface area is estimated using a first-order spherical approximation to Vesta's surface. Notably, this approximation excludes the effects of shadowing and diffraction, which are difficult to quantify at grazing incidence, and does not account for deviations of Vesta's shape from the sphere within the large area illuminated by the HGA beam. Our first-order estimation yields an elongated radar footprint of ~51 km along the line of sight between Dawn's HGA and Earth's receiving antenna, and ~11 km in diameter perpendicular to the line of sight. Topographic variability is assessed by calculating the root-mean-square height h rms using elevations from Vesta's digital terrain model 39 within an 11.5° by 11.5° grid (~51 × 51 km) centered on each echo site's coordinates, and must be sufficiently smaller than the incident 11-km beam diameter. Calculated h rms values range from 0.002 to 0.13 km. All but one echo site had h rms < 0.11 km (i.e., topographic variability of 1% of the incident beam diameter). The echo site of occultation entry during orbit 377 exceeded this criterion and is therefore excluded from the following analyses. Since HGA transmissions are measured continuously throughout the minutes that precede and follow an occultation of the spacecraft behind Vesta, only engineering data are included in the raw telemetry, and DSN receiver calibration measurements are not included with the raw radar data set. Because the BSR data are not calibrated in absolute voltage, calculating σ requires calibrating the measured power to a known reference. This is made possible by calculating the theoretical received power of the direct signal P r dir|calc in watts, and comparing it with the measured received power P r dir|meas in data units.
P r dir|calc is calculated from the one-way radar equation and depends on the transmitted power P t, the gains of the transmitting (G t) and receiving (G r) antennas, the distance R DE between the transmitter aboard Dawn and the receiving antenna on Earth, and summed losses L. The one-way radar equation for P r dir|calc (W) is then as follows 12: $$P_{\rm r\,dir|calc} = \frac{P_{\rm t}\,G_{\rm t}\,G_{\rm r}\,\lambda^2}{(4\pi R_{\rm DE})^2\,L} \quad (3)$$ where the nominal range of each parameter—except for the time-dependent R DE—is provided in Supplementary Table 1. Note that losses contributed by the DSN 70-m antennas are published in the telecommunications parameters of the Deep Space 1 mission 40. P r dir|meas is evaluated by measuring the area under the curve in non-logarithmic units during a time when the direct signal is not obstructed by Vesta. Since the data are discrete, P r dir|meas is the sum of the power in each frequency bin multiplied by the width of each frequency bin, minus the noise power in the same bandwidth: $$P_{\rm r\,dir|meas} = \left(\sum_i P_i \cdot \Delta f_{\rm step}\right) - \overline{P_{\rm N}} \cdot f_{\rm BW} \quad (4)$$ where the frequency step Δf step is the spectral resolution of ~0.4 Hz, as previously determined when calculating the differential Doppler shift; P i is the non-logarithmic power in data units of each discrete point measured within a 10-Hz bandwidth (f BW) of the direct signal peak; and \(\overline{P_{\rm N}}\) is the average noise power (data units Hz−1) in the spectrum. The conversion factor between watts and power measured in BSR data units is therefore: $$C_{\rm ToWatts} = \frac{P_{\rm r\,dir|calc}\ [\text{watts}]}{P_{\rm r\,dir|meas}\ [\text{data units}]} \quad (5)$$ and the BSR equation, solved for σ, is then: $$\sigma_{(\rm m^2)} = \frac{(4\pi)^3 R_{\rm t}^2 R_{\rm r}^2 L}{G_{\rm t} G_{\rm r} \lambda^2}\left(\frac{P_{\rm r\,echo|meas}}{(1 - X_{\rm P_t})\,P_{\rm t}}\right) \quad (6)$$ where X Pt is the fraction of incident power that has been reduced due to partial obstruction of the HGA beam by Vesta's surface, and P r echo|meas (W) = P r echo|meas (data units) × C ToWatts. By measuring the received direct signal P r dir|meas at a time close to each measured surface echo—10 s before a given occultation entry, and 10 s after an occultation exit—we minimize variations in the transmitting and receiving system characteristics, including changes in the HGA's pointing accuracy, and potential differences in DSN receiver losses due to the use of different receiving stations with different system temperatures and atmospheric conditions.
Hence, the measurement of the radar cross section σ (m 2) becomes independent of P t, G t, G r, L, and λ, such that: $$\sigma_{(\rm m^2)} = \frac{(4\pi)^3 R_{\rm DVpt}^2 R_{\rm EVpt}^2}{(1 - X_{\rm P_t})\,R_{\rm DE}^2}\left(\frac{P_{\rm r\,echo|meas}}{P_{\rm r\,dir|meas}}\right) \quad (7)$$ where R DVpt is the distance between the transmitter aboard Dawn and the echo site on Vesta's surface at the time of the observed surface echo; R EVpt is the distance between the receiving antenna on Earth and the echo site at the time of the observed surface echo; and R DE is the distance between Dawn's HGA and the receiver on Earth at the time when P r dir|meas is measured (10 s before occultation entries, and 10 s after occultation exits). Typically, the radar cross section is then normalized to the areal extent illuminated by the radar (σ 0), and is reported in the regime of diffuse backscatter due to the use of Earth-based radar antennas as both transmitter and receiver for many observations (e.g., ref. 29)—where at increasingly high angles of incidence, the diffuse component of radar backscatter dominates the received signal 14. BSR observations at Vesta are conducted in the forward-scatter regime, however, whereby radar waves predominantly scatter in the forward direction and almost entirely within the plane of incidence 12. In this regime, the polarization of a circularly transmitted wave is also largely conserved even after reflection from the target's surface 25. Dawn's measurements of σ 0 on Vesta's surface are therefore not directly comparable with those observed in the backscatter regime on other planetary bodies. While the lack of comparability might be overcome by deriving surface roughness from σ 0, two sources of uncertainty remain: (1) the absolute surface area contributing to forward-scattered surface echoes is difficult to quantify due to the effects of shadowing and multiple scattering that become important at such high, grazing incidence 38; and (2) there is no appropriate scattering model to address the impact of wavelength-scale surface roughness on radar reflections at grazing angles of incidence approaching 90° 41. Instead, we calculate relative σ across Vesta's surface with respect to the strongest observed surface reflection σ max by measuring σ when the direct signal is ~25 dB above the noise level for all surface echoes, and employ the assumption that the illuminated surface area is approximately equal for each site. Since incident power is also assumed equal for all surface echoes (and therefore X Pt is assumed constant for Equations 6 and 7), we estimate that at least 50% of the HGA beam is obscured behind Vesta's surface (X Pt = 0.5) and report σ (km 2) as a lower limit for each echo site. Under the above assumptions of equal incident power and equal surface area illuminated at 89° incidence, the relative strengths of surface echo reflections (σ/σ max) can then be attributed to differences in the relative reflectivity of the surface material itself or variations in the roughness of the surface at the scale of the radar wavelength 14, 26. Potentially greater obstruction of the incident power would result in an increase of σ (km 2), but assuming equal obstruction for all surface echoes, this does not change the relative radar cross section (σ/σ max).
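As a sketch of how Eqs. (5)-(7) combine in practice (ours, not the authors' pipeline): because the unit conversion of Eq. (5) cancels in the power ratio, σ can be computed directly from powers measured in arbitrary data units. All numerical inputs below are illustrative placeholders except the quoted σ max.

```python
# Minimal sketch (ours) of the relative cross-section computation in
# Eq. (7).  Inputs other than sigma_max are illustrative placeholders.
import math

def sigma_m2(p_echo_meas, p_dir_meas, r_dvpt, r_evpt, r_de, x_pt=0.5):
    """Forward-scatter radar cross section per Eq. (7).

    p_echo_meas, p_dir_meas: received powers in the same (arbitrary)
    data units; the calibration factor of Eq. (5) cancels in the ratio.
    r_dvpt, r_evpt, r_de: Dawn-site, Earth-site, Dawn-Earth distances (m).
    x_pt: assumed fraction of the HGA beam obscured by Vesta.
    """
    geom = (4 * math.pi) ** 3 * r_dvpt**2 * r_evpt**2 / ((1 - x_pt) * r_de**2)
    return geom * (p_echo_meas / p_dir_meas)

# Illustrative placeholder geometry and powers:
sigma = sigma_m2(p_echo_meas=1e-6, p_dir_meas=1.0,
                 r_dvpt=200e3, r_evpt=2.3e11, r_de=2.3e11)
sigma_max = 3588e6  # m^2, the quoted reference value
print(f"sigma/sigma_max = {10 * math.log10(sigma / sigma_max):.1f} dB")
```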
Uncertainty in the differential Doppler shift The primary sources of uncertainty in theoretical δf include the positions and velocities of (1) the spacecraft and (2) the radar-illuminated surface echo site on Vesta, as listed in Supplementary Table 4. We assume that uncertainty in the position and velocity of Earth is negligible at such distances. For a given parameter Y ± ΔY that depends on multiple variables X i ± (ΔX) i, we calculate ΔY by summing in quadrature the partial derivative with respect to each contributing variable (∂Y/∂X i) multiplied by its uncertainty (ΔX) i; a short numerical sketch of this quadrature rule follows below. The position of the Dawn spacecraft is provided in Cartesian coordinates from SPICE ephemerides with an uncertainty of ±3 m in the radial, along-track, and cross-track directions from the reconstructed trajectory of LAMO 28, while the error in the position of the echo site on Vesta's surface is calculated from (1) uncertainty in the radius of the echo site from Vesta's center of ±~0.5 km—due to topography within the large surface area illuminated by the HGA beam at grazing incidence—and (2) uncertainty in the latitude and longitude of the echo site center of ±~0.2°. Uncertainties in the geodetic coordinates of the echo site are then converted to Cartesian coordinates at a height above or below a reference triaxial ellipsoid—which is defined using the best-fit ellipsoid derived from Hubble light-curve observations of Vesta, where R x = 289 km, R y = 280 km, and R z = 229 km 28, 42. Uncertainties in the x, y, and z coordinates of the echo site for occultation entry of orbit 355 are on the order of ~0.7 km. With regard to spacecraft velocity, the SPICE ephemerides containing Dawn's state vector were released by the optical navigation team in 2012 28, before the peer-reviewed publication of Vesta's gravitational solution in 2014 11. Hence, Dawn's reconstructed trajectory does not include variations in orbital velocity due to the heterogeneous gravity field, which exhibits accelerations between −1000 mGal and +2000 mGal (−1 cm s−2 and +2 cm s−2) relative to the homogeneous model 11. Since each frequency spectrum produced in our BSR analysis is averaged over 5-s observations, unpredicted gravitational accelerations contribute −5 to +10 cm s−1 uncertainty in Dawn's velocity vector. The rotational velocity of the echo site on Vesta's surface depends on the distance of the site from Vesta's center and the rotation period, the latter of which is known to high precision 11. Given the large extent of surface area illuminated by the HGA beam on Vesta, we use the radius at the center of the echo site to calculate a representative rotational velocity. During occultation entry of orbit 355, an uncertainty of ±0.5 km in the radius yields an uncertainty of ±16 cm s−1 in the echo site's rotational velocity. Using the above-derived uncertainties in the positions and velocities of each body, we calculate the propagation of error into each line-of-sight velocity between Dawn, the surface echo site on Vesta, and the antenna on Earth that contributes to the calculation of theoretical δf. Uncertainty in the velocity of body A projected along the line of sight with body B (\(v_{\rm A}\hat r_{\rm B}\)) is calculated from the propagation of error in (1) the velocity vector of A, (2) the position of A, and (3) the position of B. Hence, the theoretical differential Doppler shift is calculated to range between ~2 and 20 Hz, and is consistent with the observed δf of −12 Hz (Fig. 1b) for the surface echo measured during orbit 355, occultation entry.
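A minimal numerical sketch of the quadrature rule (ours, not the authors' code): the sensitivities below are hypothetical placeholders, while the ±3 m spacecraft-position error, the ±0.5 km site-radius error with its quoted ±16 cm s−1 effect, and the ~5 cm s−1 velocity error echo the values above.

```python
# Minimal sketch (ours) of the quadrature error propagation used in
# the uncertainty analysis: dY = sqrt(sum_i (dY/dX_i * dX_i)^2).
# Sensitivities marked "placeholder" are hypothetical, not the paper's.
import math

def propagate(partials_and_errors):
    """Combine (partial derivative, uncertainty) pairs in quadrature."""
    return math.sqrt(sum((p * dx) ** 2 for p, dx in partials_and_errors))

# Example: uncertainty in a line-of-sight velocity depending on three
# inputs, with the paper's quoted errors where available:
dv = propagate([
    (0.01, 3.0),      # dv/d(spacecraft position), (m/s)/m, placeholder
    (3.2e-4, 500.0),  # dv/d(site radius), (m/s)/m: 0.5 km -> ~16 cm/s (quoted)
    (1.0, 0.05),      # dv/d(orbital speed) = 1, error ~5 cm/s (quoted)
])
print(f"propagated velocity uncertainty ~ {dv:.2f} m/s")
```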
Uncertainty in the absolute and relative radar cross section The primary sources of uncertainty in the radar cross section include (1) HGA pointing error, (2) uncertainty in the measurement of received power, (3) uncertainty in the position of the Dawn spacecraft, and (4) uncertainty in the position of the echo site center—where the latter two uncertainties have been quantified above. When outside of occultation, the measured power received from the direct signal P_r dir|meas varies as a result of HGA antenna pointing inaccuracy due to unpredicted, uneven gravitational torques on the spacecraft's large solar panels in Vesta's microgravity environment 43 . The spacecraft's reaction wheels are used to counteract accumulated pointing errors, but one of the four wheels failed prior to Dawn's arrival at Vesta 43 . Furthermore, corrections occurred only once every 1–3 days during LAMO, such that antenna pointing error increased steadily from zero to ~0.4° between corrections, and even exceeded 1.0° on a few occasions 43 . To quantify fluctuations in the direct signal power on the order of tens of seconds near the time of each surface echo observation, we measure the variation of P_r dir|meas over the 30 s preceding an occultation entry (and over the 30 s following an occultation exit). We find that P_r dir|meas varies by less than ±2% for 11 of the 14 occultation observations, but by as much as ±8% before the occultation entry of orbit 719. The standard errors in the measurements of P_r echo|meas and P_r dir|meas are quantified by deviations of noise power from the mean. We compute the standard deviation of noise in a given spectrum over frequencies where no signal is present (from −6 to −1 kHz and from 1 to 6 kHz), and find that the standard deviation of a given spectrum ranges from ~0.14 \(\overline{P_{\mathrm{N}}}\) to 0.17 \(\overline{P_{\mathrm{N}}}\). We use the upper limit of 0.17 \(\overline{P_{\mathrm{N}}}\) as a conservative estimate for all uncertainties in power measurement. Together, the above errors in HGA antenna pointing, measurement of received power, spacecraft position and echo site position amount to uncertainties in σ (km²) that range from 1% to 10% depending on the surface echo—see Table 1. The resulting errors in the relative radar cross section (σ/σ_max) range from zero dB for the strongest surface echo reflection to ±0.5 dB for the weakest surface echo reflection. Data availability Raw telemetry data from Dawn's orbital BSR experiment at Vesta were generated from receiver output at stations of the NASA DSN and are managed by the NASA Jet Propulsion Laboratory of the California Institute of Technology. The unprocessed time-domain BSR amplitude data used in this study are available upon request from the corresponding author.
Research at the USC Viterbi School of Engineering has revealed new evidence for the occurrence of ground ice on the protoplanet Vesta. The work, under the sponsorship of NASA's Planetary Geology and Geophysics program, is part of ongoing efforts at USC Viterbi to improve water-detection techniques in terrestrial and planetary subsurfaces using radar and microwave imaging. The study, conducted at USC Viterbi in the Ming Hsieh Department of Electrical Engineering by research scientist Essam Heggy and graduate student Elizabeth Palmer from Western Michigan University, took over three years to complete and appeared in the journal Nature Communications upon its Sept. 12 release. Heggy is a member of the Ming Hsieh Department of Electrical Engineering's Mixil Lab, which is led by professor Mahta Moghaddam and specializes in radar and microwave imaging. Vesta is located in the asteroid belt between Mars and Jupiter and, due to its large size, is believed to be a differentiated body with a core and a mantle, just like our own planet. Collisions between asteroids in the belt enable them to leave their orbits and travel great distances in the solar system, potentially colliding with other planetary bodies. Finding ice on these bodies is of major importance to understanding the transport and evolution of water-rich materials in our solar system. The team used a special technique called "bistatic radar" on the Dawn spacecraft to explore the surface texture of Vesta at the scale of a few inches. On some orbits, when the spacecraft was about to travel behind Vesta from Earth's perspective, its radio communications waves bounced off Vesta's surface, and mission personnel on the ground at NASA's Jet Propulsion Laboratory (JPL) received the signals back on Earth. According to Heggy, this system of radar signaling was like "seeing a flame from a lighter in the middle of day from the opposite side of the United States." Despite the challenges of measuring such a weak signal from the Dawn spacecraft's communication antenna from nearly 300 million miles away, the team identified large, smooth areas on Vesta that correlate with higher concentrations of hydrogen as measured by the gamma ray and neutron detector (GRaND) instrument onboard. "I am excited that we were able to perform such an observation on Vesta. At USC we have been contributing to testing and developing several bistatic radar methods to explore water and ice on planetary surfaces and arid areas of Earth. As the largest research university located in an arid area of the planet, this effort is a natural outgrowth of our focus on understanding water evolution," Heggy said. The USC researchers hope their work will get the public excited not just about water in space, but also about the importance of understanding water evolution in arid areas under changing climatic conditions.
10.1038/s41467-017-00434-6
Biology
New life expectancy estimates for UK dogs suggest Jack Russell Terriers live longest
Kendy Tzu-yun Teng et al., Life tables of annual life expectancy and mortality for companion dogs in the United Kingdom, Scientific Reports (2022). DOI: 10.1038/s41598-022-10341-6. www.nature.com/articles/s41598-022-10341-6 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-10341-6
https://phys.org/news/2022-04-life-uk-dogs-jack-russell.html
Abstract A life table is a tabulated expression of life expectancy and mortality-related information at specified ages in a given population. This study utilised VetCompass data to develop life tables for the UK companion dog population, broken down by sex, Kennel Club breed group, and common breeds. Among 30,563 dogs that died between 1st January 2016 and 31st July 2020, life expectancy at age 0 was 11.23 [95% confidence interval (CI): 11.19–11.27] years. Female dogs (11.41 years; 95% CI: 11.35–11.47) had a greater life expectancy than males (11.07 years; 95% CI: 11.01–11.13) at age 0. Life tables varied widely between breeds. Jack Russell Terrier (12.72 years; 95% CI: 12.53–12.90) and French Bulldog (4.53 years; 95% CI: 4.14–5.01) had the longest and shortest life expectancy at age 0, respectively. Life tables generated by the current study allow a deeper understanding of the varied life trajectories across many types of dogs and offer novel insights and applications to improve canine health and welfare. The current study helps promote further understanding of life expectancy, which will benefit pet owners and the veterinary profession, along with many other sectors. Introduction A deeper understanding of life expectancies at different ages within the United Kingdom (UK) companion dog population, further categorised by sex and breed, is critical to the improvement of canine welfare and health management 1 , 2 . For example, existing and potential dog owners can develop realistic expectations for the typical remaining life period of their dogs through knowledge of life expectancy. To date, much of the research on dog life expectancy has focused on reporting average overall ages at death in dogs that have been selected using referral or first-opinion veterinary caseloads 1 , 3 , insurance databases 4 or owner questionnaires 5 , 6 . Among companion dogs that died between 2009 and 2011 in the UK, the median age at death was estimated to be 12.0 years [interquartile range (IQR): 8.9–14.2], and the median age at death for various breeds ranged from the Dogue de Bordeaux at 5.5 years (IQR: 3.3–6.1; n = 21) to the Miniature Poodle at 14.2 years (IQR: 11.1–15.6; n = 20) 1 . Instead of offering a single value for the average age at death, a life table is a tabulated expression of life expectancy and probability of death at different age groups of a given population. A life table provides much more detailed information and inference than a single summary average age at death across all ages 7 . There are two main types of life tables: (a) a cohort life table, which summarises the actual mortality experience of a group of individuals (the cohort) from the birth of the first to the death of the last member of the cohort, and (b) a current life table, which provides the cross-sectional mortality and survival experience of a population during a single year or a few current years 8 . Cohort life tables have also been constructed using a hypothetical (i.e., not pre-determined) cohort 9 , 10 . Both types of life tables have their importance. Cohort life tables can inform on the mortality situation of cohorts, but the data for current life tables are generally easier to collect 8 . Human life tables are routinely constructed for countries, or sub-populations within a country, as a proxy indicator of the general health of the population. A decrease in life expectancy implies that events leading to mortality occur, on average, earlier, and is therefore suggestive of a generally less healthy population 11 .
Thus, life tables can be used to monitor changes in the general health of a population over time, as well as to identify vulnerable (sub-)populations, promoting targeted investigation into the reasons for an observed reduction in life expectancy 11 , 12 . Human life tables are considered an essential tool for effective public planning and policy-making 13 , e.g., estimating the future costs of the Old-Age, Survivors, and Disability Insurance federal programmes within the United States 14 . In the UK, a national current life table for humans is generated every three years, whereas there is an update every year in the United States 12 , 15 . National life tables are usually constructed for the population overall and per sex, and may include ethnic groups in some countries 16 . Despite their usefulness for the management of human populations, life tables are rarely built for companion animals. Two life table studies for dogs were recently conducted in Japan 10 , 17 ; the first created current life tables for dogs in general, along with estimates for differing sizes, using pet insurance data, whilst the other created a hypothetical cohort life table using pet cemetery data. These life tables have advanced the knowledge of dog life trajectory 18 and have been applied in studies that required information on the life expectancy of dogs of different ages, such as a quantitative risk assessment of the introduction of rabies 19 and the quantification of welfare impact caused by diseases 2 . However, given that the breed structure of dog populations can vary widely between countries, the international generalisability of life tables needs to be considered carefully. In addition, the average lifespan and mortality profiles of individual breeds may differ among national dog populations for a wide range of genetic and healthcare reasons. For instance, on average, Labrador Retrievers lived 14.1 years (mean) in Japan 10 , 12.5 years (median) in the UK 1 , and 10.5 years (median) in Denmark 20 . The construction of a life table for companion dogs in the UK could facilitate understanding of the life expectancy and health of the UK companion dog population in a similar way to the application of such life tables in human populations 12 , 15 . A reliable canine life table can enhance our understanding of life expectancy at different ages, as it demonstrates that life expectancy at each age is not the same as the average lifespan minus that age. There are practical implications when life expectancy is not understood correctly. For example, canine adoption centres may underestimate the typical remaining lifespan of adult dogs being rehomed if predictions about their age at death are based on their current age and the average lifespan. This could lead to a longer length of ownership than the adopting family had originally expected. Life tables for individual breeds could be particularly useful for informing decision-making for existing and potential dog owners when deciding between candidate breeds and/or individuals of different ages. Moreover, more complex forms of life table modelling can be used to support studies that quantify the burden of diseases on dog health and welfare 2 , 21 . When a disease leads to the death of a dog, that dog foregoes the potential remaining lifespan that it would have lived without the disease.
Thus, the burden of a disease increases as the period of remaining life lost to that disease lengthens, and this information about the life lost can be supplied by a dog life table. The current study aimed to develop the first life tables for the UK companion dog population and for dogs of different traits, including sex and some breeds. The study aimed to use a large data resource provided by the VetCompass™ Programme 22 to access death information from the records of dogs under veterinary care in the UK. The resulting life tables could improve our understanding of the longevity-related demographics of the dog population in the UK, whilst ultimately contributing to the improved health and welfare of dogs worldwide. Materials and methods The sampling frame of the current study included all dogs under primary veterinary care at clinics participating in the VetCompass™ Programme during 2016 (i.e., dogs with at least one clinical record in 2016). The VetCompass™ Programme collates de-identified electronic patient record (EPR) data from primary-care veterinary practices in the UK for epidemiological research (VetCompass, 2019). Data fields available to VetCompass™ researchers include a unique animal identifier along with breed, date of birth, sex, neuter status and bodyweight, as well as clinical information from free-form text clinical notes, summary diagnosis terms 23 and treatment, with relevant dates. Dog breeds recognised by any of the Kennel Club (KC), the American Kennel Club and the Australian National Kennel Council were considered 'purebred', while all others (apart from those without any breed information) were considered 'crossbred' 24 , 25 . Based on their breed, purebred dogs were classified into one of the KC breed groups (Gundog, Hound, Pastoral, Terrier, Toy, Utility and Working) or as non-KC recognised (The Kennel Club, 2019). To identify the analytic dataset of deceased dogs for the current study, the EPRs were initially screened for candidate death cases—including dogs that were euthanased, that died unassisted, or for which the mechanism of death was unrecorded—using a range of search terms in the clinical note field (search terms: euth, pts*, crem*, ashes, pento*, casket, beech, decease*, death, "put to sleep", doa, died, killed, "home bury" ~ 1, ["bury" and "home"]) and the treatment field (search terms: euth*, pento*, crem*, casket, scatter, beech). The candidate cases were randomly ordered and the clinical notes of a subset of candidates were manually reviewed in detail to evaluate case inclusion. Case inclusion as a confirmed death required evidence in the EPR that the dog had died at any date from January 1st 2016 to July 31st 2020. Animals without information about sex were excluded from the final sample. After descriptive statistics summarising the demographics of the sample were generated, a hypothetical cohort life table for the UK companion dog population was constructed using all dogs in the dataset 8 . Life tables for the subpopulations mentioned below were also built: all life tables needed to have a minimum of 3 dogs in each given year interval and 11 dogs at the last year interval. A minimum of 3 dogs in each given year interval was chosen to ensure a sample variance that takes advantage of averaging (i.e., the denominator will be > 1) for the "mean fraction of the last year of life lived by dogs that died in [x, x + 1)" ( \({\widehat{a}}_{x}\) ).
Because the estimation of life expectancy at each year interval takes into account all dogs that died at that age and after, the number of dogs in a single year interval does not play a major role in the estimation of the 95% CI of the life expectancy. Thus, we set a minimum number of dogs only for the last year interval. Based on these criteria, life tables were also constructed for (a) male and female dogs, (b) neutered and entire dogs of both sexes, (c) dogs of different KC breed groups (Gundog, Hound, Pastoral, Terrier, Toy, Utility, Working, and non-KC recognised), and (d) crossbred dogs and 18 breeds of dogs. All the life tables were complete life tables (i.e., life tables with an age interval of 1 year), except for the final interval, which could extend beyond one year. Table 1 presents the parameters in the life table and their definitions and equations. The life expectancy at age 0 equates to the mean age at death of dogs across all ages. Table 1 Parameters used in a life table. Full size table Data cleaning (including removal of dogs that (a) died before January 1st 2016 or after July 31st 2020, (b) had a negative lifespan, (c) lacked birth or mortality information, or (d) lacked sex information) and data management were performed in Microsoft Excel 2013 (Microsoft Corp.) and in R version 4.0.2 in the RStudio interface version 1.3.1073 26 , 27 . Descriptive analyses were facilitated by the "tidyverse" package 28 . Life table construction was performed in R, and the 95% confidence interval for life expectancy at different years was generated using empirical bootstrapping with 10,000 iterations 29 . An iteration of the life table was taken into the estimation of the 95% confidence interval only if it met the criteria for a life table stated above. R codes to generate a complete cohort life table and the confidence interval can be found online. Ethics approval and consent to participate Ethics approval was obtained from the RVC Ethics and Welfare Committee (SR2018-1652). No human data were included in the current study. Results Demography The sampling frame included 876,039 dogs with at least one clinical record during 2016 from 886 clinics in the VetCompass™ database. The geographic spread of the clinics with available postcode data included England (90.4%), Scotland (3.9%), Wales (3.7%), Northern Ireland (1.7%) and the Channel Islands (0.3%). Initial screening identified 97,860 candidate death cases at any date from 1st January 2016 to 31st July 2020. Following a manual review of 32,390 (33.1%) of the candidate cases and the exclusion of 72 dogs without a record of sex, the current study analysis included 30,563 (94.4% of candidates) confirmed deceased dogs. The success rate of the combined search terms in correctly identifying deceased dogs was 94.6% (30,635/32,390). Among the 30,563 dogs, 14,574 (47.7%) were female. There were 17,546 (57.4%) neutered dogs, of which 61.4% (n = 8951) were female. There were 23,963 (78.4%) purebred dogs, 6511 (21.3%) crossbred dogs, and 89 (0.3%) dogs without recorded breed information. Among 23,414 dogs of KC-recognised breeds, there were 5354 (22.9%) Gundogs, 1329 (5.7%) Hounds, 2451 (10.5%) Pastorals, 6055 (25.9%) Terriers, 3334 (14.2%) Toys, 2707 (11.6%) Utilities and 2184 (9.3%) Workings. In total, the analytic dataset included 263 'purebred' breeds along with a 'crossbred' grouping. The number and percentage of each breed are given in Supplementary File 1 . There were 18 breeds included in the life table analyses, accounting for 50.6% of the population.
The breeds were: American Bulldog (n = 126), Beagle (n = 171), Border Collie (n = 938), Boxer (n = 831), Cavalier King Charles Spaniel (n = 861), Chihuahua (n = 453), Cocker Spaniel (n = 1063), English Bulldog (n = 476), French Bulldog (n = 229), German Shepherd Dog (n = 1097), Husky (n = 153), Jack Russell Terrier (n = 1614), Labrador Retriever (n = 2481), Pug (n = 196), Shih-tzu (n = 635), Springer Spaniel (n = 785), Staffordshire Bull Terrier (n = 2347), and Yorkshire Terrier (n = 1039). There were 5188 (17.0%) dogs recorded with health insurance. Life table Table 2 shows the overall life table for the UK companion dog population. The life expectancy at age 0 for UK companion dogs was 11.23 (95% CI: 11.19–11.27) years, with life expectancy decreasing with age. The probability of death at each year interval increased with age, with the exception of year interval 1–2 (0.017) to 2–3 (0.016). The probability of death within each year interval remained at or below 0.02 before year 5, and the increase in the probability became prominent after around year 6–7. Table 2 Cohort life table of dogs under primary veterinary care in the UK. Full size table Female dogs (11.41; 95% CI: 11.35–11.47) had a longer life expectancy than male dogs (11.07; 95% CI: 11.01–11.13) at age 0 (Tables S1 and S2 in Supplementary File 2). This trend towards greater annual female life expectancy persisted until individuals were 12 years of age, after which the life expectancy of both sexes became similar. When neuter status was taken into consideration, a substantially higher probability of death was observed in entire females (year 10 and before) and males (year 4 and before) than in their neutered counterparts (Tables S3 to S6 in Supplementary File 2). Entire females (10.50; 95% CI: 10.38–10.61) and males (10.58; 95% CI: 10.48–10.68) had a similar life expectancy at age 0 and a similar life expectancy trajectory (Fig. 1 ), whereas both neutered females (11.98; 95% CI: 11.91–12.04) and males (11.49; 95% CI: 11.42–11.57) had an elevated life expectancy at age 0 when compared to their non-neutered counterparts, especially females. Figure 1 Life expectancy and the 95% confidence interval for female and male dogs at different ages (year) under primary veterinary care in the UK. Full size image Among the KC breed groups and dogs of breeds not recognised by the KC, Terrier had the longest life expectancy at age 0 at 12.03 (95% CI: 11.94–12.2) years, followed by Gundog (11.67 years; 95% CI: 11.59–11.76), non-KC recognised dogs (11.66 years; 95% CI: 11.56–11.76), Pastoral (11.20 years; 95% CI: 11.06–11.35), Hound (10.71 years; 95% CI: 10.53–10.89), Toy (10.67 years; 95% CI: 10.54–10.81), and Utility (10.06 years; 95% CI: 9.89–10.23) (Fig. 2 ). Working dogs' life expectancy was shorter than that of all the other groups at all ages, with a life expectancy of 9.14 (95% CI: 9.01–9.27) years at age 0. However, comparative patterns of life expectancy between the breed groups at age 0 were not necessarily maintained later into life. For instance, the Hound and Toy groups had a similar life expectancy at age 0 but diverged soon after to reach a difference of 0.76 years (higher in Toy) at year 12. The life table of Hound ended at year 17 with a life expectancy of 0.52 (95% CI: 0.27–0.77) years, whereas Toy dogs at year 19 could still be expected to live for 0.66 (95% CI: 0.44–0.87) years. Supplementary File 2 (Tables S7 to S14 ) contains the life tables for the breed groups.
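The life-table quantities reported above (the annual probability of death \({\widehat{q}}_{x}\) and the life expectancy e_x) can be made concrete with a minimal base-R sketch that builds a complete cohort life table from ages at death. The synthetic data and the fixed a_x = 0.5 are our simplifying assumptions, not the study's estimates; the study's own R code is linked from the original article.

```r
# Minimal complete cohort life table from exact ages at death (in years).
# Assumes the whole cohort has died, and fixes a_x (the mean fraction of
# the final year of life lived) at 0.5; the study estimated a_x from data.
cohort_life_table <- function(age_at_death, a_x = 0.5) {
  max_age <- floor(max(age_at_death))
  x  <- 0:max_age
  dx <- tabulate(floor(age_at_death) + 1, nbins = max_age + 1)  # deaths in [x, x+1)
  lx <- rev(cumsum(rev(dx)))        # dogs alive at the start of [x, x+1)
  qx <- dx / lx                     # probability of death within [x, x+1)
  Lx <- lx - dx * (1 - a_x)         # dog-years lived within [x, x+1)
  ex <- rev(cumsum(rev(Lx))) / lx   # remaining life expectancy at age x
  data.frame(x, dx, lx, qx, ex)
}

set.seed(1)
ages <- rgamma(30000, shape = 6, scale = 2)   # synthetic ages at death
lt <- cohort_life_table(ages)
head(lt)   # ex at x = 0 approximates the cohort's mean age at death
```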
Figure 2 Life expectancy and the 95% confidence interval for dogs of different Kennel Club breed groups under primary veterinary care in the UK. Full size image Life tables for the 18 breeds and crossbred dogs varied widely (Table 3 ) and can be found in Supplementary File 2 (Tables S15 to S33 ). The last age of the life tables ranged from 11 in French Bulldogs to 19 in Jack Russell Terriers. Jack Russell Terrier had the greatest life expectancy at age 0 at 12.72 (95% CI: 12.53–12.90) years, followed by Yorkshire Terrier (12.54 years; 95% CI: 12.30–12.77), Border Collie (12.10 years; 95% CI: 11.85–12.33) and Springer Spaniel (11.92 years; 95% CI: 11.69–12.13). Compared to other breeds, many brachycephalic breeds (i.e., breeds of dogs with a short, flat face) had a relatively short life expectancy at age 0, with French Bulldog having the shortest at 4.53 (95% CI: 4.14–5.01) years, 2.86 years less than the value for English Bulldog (7.39 years; 95% CI: 7.08–7.69). To explore the longevity of dogs of different breeds, we examined the earliest age at which the life expectancy dropped below 1.5 years (1.5 years was chosen because the life expectancy at the last year of all breeds was less than this value; Fig. 3 ). The life expectancy dropped below 1.5 years latest in Chihuahuas, at year 15–16, followed by Jack Russell Terriers, crossbred dogs and Yorkshire Terriers at year 14–15. English Bulldog was the earliest to reach a life expectancy of 1.5 years (year 9–10), followed by Boxer, French Bulldog and American Bulldog at year 10–11. Table 3 Key statistics extracted from the life tables of 18 individual dog breeds and of crossbred dogs, including the life expectancy at age 0 and at the last age, using the data of dogs under primary veterinary care in the UK. Full size table Figure 3 Relation between the life expectancy at year 0 and the year interval in which the life expectancy became 1.5 years, in 18 breeds and crossbred dogs under primary veterinary care in the UK. Full size image The probability of death was lower in year 0–1 than in year 1–2 in most breeds [American Bulldog (0.024; 0.065), Border Collie (0.012; 0.020), Boxer (0.005; 0.016), English Bulldog (0.040; 0.055), Cocker Spaniel (0.012; 0.013), French Bulldog (0.131; 0.136), German Shepherd Dog (0.013; 0.020), Husky (0.026; 0.047), Jack Russell Terrier (0.009; 0.009), Labrador Retriever (0.007; 0.009), Springer Spaniel (0.006; 0.006) and Staffordshire Bull Terrier (0.011; 0.012)]. Some breeds, including American Bulldog, Chihuahua, English Bulldog, French Bulldog, Husky and Pug, had a probability of death before reaching adulthood (before year 2 30 ) much higher than that of dogs overall (0.017 in year 0–1 and 0.016 in year 1–2). Discussion This study presents the first cohort life tables for the UK dog population and for dogs of different characteristics, including sex, neuter status, KC grouping, and 18 breeds and crossbred dogs. These tables offer information about annual life expectancy and annual probability of death that has been unavailable from conventional longevity studies in UK companion dogs to date. With the ongoing accumulation and accessibility of death information from Big Data resources such as VetCompass in the future, the construction of life tables for increasing numbers of dog breeds and for other companion animal species should expand.
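A matching sketch of the empirical bootstrap used for the 95% confidence intervals (described in Materials and methods; the study used 10,000 iterations, fewer are used here for speed) reuses cohort_life_table() and the synthetic ages from the sketch above.

```r
# Percentile bootstrap for the 95% CI of life expectancy at age 0,
# resampling dogs with replacement and rebuilding the life table each time.
bootstrap_e0_ci <- function(age_at_death, n_iter = 2000) {
  e0 <- replicate(n_iter, {
    cohort_life_table(sample(age_at_death, replace = TRUE))$ex[1]
  })
  quantile(e0, c(0.025, 0.975))
}

bootstrap_e0_ci(ages)   # CI for the synthetic cohort generated above
```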
The current study provides proof of concept for the development of (hypothetical) cohort life table construction in companion animals and also shares open-access R codes to contribute to these wider purposes (see " Materials and methods "). On grounds of mathematical and biological plausibility, a valid life table should exhibit its highest life expectancy at age 0, decreasing thereafter with age 8 , 15 , 31 . The probability of death may be higher in infancy as the immune system continues to mature in the postnatal period, for both humans and dogs 32 . In dogs, the immune system takes approximately one year to reach full maturity 32 . In the most recent life tables of humans in the UK, Australia and the US, the probability of death appeared lowest at ages of 7–11 years (i.e., the onset of senescence) before showing an increasing trend until the end of the table (i.e., life) 15 , 33 , 34 . Our overall life table in dogs satisfied the condition of decreasing life expectancy and had the lowest annual probability of death ( \({\widehat{q}}_{x}\) ) before reaching adulthood at year 2 30 , similar to the life tables in humans. Some life tables constructed in the current study did not follow this trend in the probability of death, for example, the life tables for neutered males and females. These life tables are discussed below. The life expectancy at age 0 reported in the current study for dogs under primary veterinary care in the UK in 2016 was 11.23 years (11.19–11.27), 2.47 years shorter than the life expectancy at age 0 (both 13.7 years) in the two life tables of Japanese dogs constructed using pet insurance and pet cemetery data, discussed above 10 , 17 . Differing data sources for the study populations might partially contribute to this substantial variation. Insured dogs may represent a subset of dogs that live longer than those without insurance, as they may receive more or additional veterinary care due to alleviated economic constraints on the owners 35 , 36 . Moreover, dog breeds at high disease risk, such as brachycephalic breeds, might be under-represented in some insurance data relative to the general population, owing to the elevated cost to insure these breeds and the special rules of reimbursement applied to them by some insurance companies 37 , 38 . Breed demographics are also likely to differ between these two countries. Breeds of toy or small size have a longer life expectancy than larger-sized dogs and are more common in Japan than in the UK 1 , 5 , 10 , 17 . In contrast, breeds of large and medium sizes, as well as brachycephalic dogs, are more popular in the UK 39 , and these present shorter life expectancies. These demographic differences will influence country-level life expectancy estimates at age 0. However, it appeared that even within the same breeds, the life expectancy of dogs in Japan was considerably higher than that of dogs in the UK 10 , such as Labrador Retriever (UK = 11.77 and Japan = 14.1 years), Shih Tzu (UK = 11.05 and Japan = 15.0 years), Beagle (UK = 9.85 and Japan = 14.8 years), Pug (UK = 7.65 and Japan = 12.8 years), and French Bulldog (UK = 4.53 and Japan = 10.2 years). Variation in estimates may also be partly due to sampling effort, as the life expectancy tables from Japan were created using a smaller sample size. While female dogs (11.41; 95% CI 11.35–11.47) showed a longer life expectancy at age 0 than male dogs (11.07; 95% CI 11.01–11.13), this phenomenon was moderated by neuter status. Entire animals of both sexes showed similar trajectories of life expectancy from age 0 onwards.
However, neutering was associated with an elevated life expectancy at age 0 for both sexes compared to their entire counterparts, and this longevity advantage from neutering was greater in female dogs than in male dogs. A similar survival advantage for neutered animals has been reported in several studies 1 , 40 , 41 , but most studies, including the current one, generated these results by dichotomising dogs into neutered or entire without taking into account the duration of gonadal hormone exposure before neutering. Neutered animals in these cited studies would have already lived to the age of neutering, biasing their life expectancy towards greater length, as highlighted by the lowered probability of death at year 0–1 in neutered dogs. As veterinarians may often recommend early neutering for female dogs, sometimes before the start of the oestrus cycle 42 or soon after the first cycle 43 , neutering of females may occur earlier in life than neutering of males 43 . Therefore, the gap in true life expectancy between the sexes due to neutering might be even wider than reported here. Neutering may also act as a proxy for stronger owner responsibility and better care, as it is often considered a mark of responsible dog ownership. Thus, neutered animals may benefit from additional survival advantages related to enhanced owner care 43 . Neutering may also directly affect the risks of various health conditions and therefore shift life expectancy as a result 41 . In female dogs, neutering reduces or eliminates the risk of pyometra, a potentially life-threatening condition that occurs in 2% of entire female dogs under 10 years 44 . Neutering is linked to a reduced risk of tumours within reproductive organs and of various cardiovascular diseases, but an increased risk of joint disorders and of several types of tumours such as lymphoma and hemangiosarcoma, especially in females 45 . Given the complexities stated above, our life tables for neutered dogs should be interpreted with great caution. For the 18 breeds and the crossbred category, the number of years contained in the tables, the life expectancy and the probability of death at different ages varied widely. Relatively shorter life expectancy at specific year points can be taken as evidence that events and processes eventually leading to mortality are occurring earlier in life in these populations than in others, so these populations may have generally poorer health 30 , 46 . If the distribution of external factors that may lead to differences in the life expectancy and the probability of death (e.g. severe disease epidemics and/or substantial differences in the level of veterinary and owner care) does not depend on breed, it may be safe to assume that part of the life expectancy difference is contributed by internal factors driven by the genetic make-up of the breeds. Breed predisposition to particular disorders is a well-identified phenomenon 47 . Breeds that show high levels of potentially life-threatening predispositions that start early in life are likely to have a higher probability of death at younger ages and therefore a decreased life expectancy. Indeed, the four brachycephalic breeds (French Bulldog, English Bulldog, Pug and American Bulldog) that showed the shortest life expectancy at year 0 of all 18 breeds in our results are also reported to have several predispositions to life-limiting disorders that occur early in life, such as brachycephalic obstructive airway syndrome, spinal disease and dystocia 48 , 49 , 50 , 51 .
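The threshold metric analysed in the next paragraph, the first year interval in which remaining life expectancy falls below 1.5 years, can be read directly off a life table; a minimal helper, again using the synthetic lt from the sketch above:

```r
# First year interval [x, x+1) in which remaining life expectancy falls
# below a threshold (1.5 years in the study); NA if never crossed.
first_age_below <- function(lt, threshold = 1.5) {
  hit <- which(lt$ex < threshold)
  if (length(hit) == 0) NA_integer_ else lt$x[hit[1]]
}
first_age_below(lt)
```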
Lifespan variations between breeds were explored by examining the association between the breed life expectancy at age 0 and the year interval at which the life expectancy reached 1.5 years. Generally, life expectancy at age 0 and the year interval in which the life expectancy became 1.5 years were positively associated, as would be expected: breeds that lived longer were also older when they reached an age with 1.5 years of life expectancy. However, although Chihuahuas showed a life expectancy at age 0 of only 7.91 years, the year interval in which their life expectancy became 1.5 years was year 15–16, the highest of all the breeds, indicating a high variation of lifespans among Chihuahuas. A lowered life expectancy at age 0 suggests an increase in the mortality of younger-aged dogs (whose mortality is usually low), and the life expectancy becoming 1.5 years at a later age implies more dogs also living to an advanced age; both increase the variation of lifespans 11 . In our results, the probability of death before year 13–14 was higher (and much higher before year 4) in Chihuahuas than in dogs overall, but became lower thereafter. French Bulldogs similarly combined a low life expectancy at age 0 (4.55 years) with a relatively old year interval at which the life expectancy became 1.5 years (year 10–11, more than twice the life expectancy at age 0), and their probability of death was rather uniform across all ages. However, although the high variation of lifespans of French Bulldogs could be due to high health risks in early life 49 , 50 , 51 , 52 and a relatively small sample size (n = 232), it may partly be attributed to their recent soaring popularity 53 . The number of KC-registered French Bulldogs in the UK rose steeply from 2771 in 2011 to 39,266 in 2020 39 , suggesting that the population of French Bulldogs (and other breeds sharing a similar rising trend in popularity) in our dataset is biased towards younger dogs, which contribute proportionately more deaths at younger ages in the life table. In contrast, breeds with a decreasing trend in popularity may have an underestimated probability of death at younger ages, resulting in overestimated life expectancy. Previous studies have also shown the rising popularity of certain breeds and the association of rising popularity with lower median age 54 , 55 , 56 , 57 , 58 , 59 , 60 . Hypothetical cohort life tables are more susceptible to the influence of population instability, which is common in dogs due to sudden and dramatic fad-like changes in breed popularity 61 . This could be avoided by implementing current or real cohort life tables instead, if such data were available. Thirteen of the 18 breeds had a lower probability of death in year 0–1 than in year 1–2 in the life tables, some slightly and others substantially. This finding goes against the evidence that mortality is higher in the puppy (0–26 weeks) and juvenile (27–52 weeks) periods than in the young adult period (1–2 years), and against empirical results in human life tables 15 , 33 , 34 . This may, in part, be due to substantial puppy mortality occurring before individuals can be registered with a primary veterinary clinic, resulting in these deaths not appearing in the current dataset. A more accurate estimation of life expectancy from birth would be possible if all the currently unavailable puppy mortality information could be recovered. Life tables in companion animals offer extensive applications.
Similar to a common application of human life table studies, comparison between life tables can support deeper insight into the health of dogs of differing demographics, such as sex and breed, over time and space 62 , 63 . When life tables are generated periodically for a specific population, changes in the life expectancy and the probability of death at specific ages can indicate changes in the general health and welfare of the population. Comparison of life tables among populations with different traits, such as breed or conformation, can also identify less healthy or more vulnerable populations 62 , as demonstrated in our study (especially for breeds). Advanced life table modelling can offer useful information allowing the quantification of disease burden on health and welfare in humans and companion animals 2 , 64 , 65 . Quantification of disease burden is important because it can assist with the prioritisation of health conditions for targeted reform 66 , 67 , 68 . Coupling these findings with cost-effectiveness analysis of disease prevention and control can help allocate resources to priority health conditions and achieve efficient improvements in the health and welfare of the overall population 69 . The value of disease burden quantification has been demonstrated by the Global Burden of Disease project of the World Health Organization to improve human health 70 . The Global Burden of Disease uses the Disability-Adjusted Life Year framework, which incorporates life tables as part of the methodology to quantify the burden of many diseases (369 diseases and injuries in 2019) 64 , 65 . The Disability-Adjusted Life Year has been adapted into the Welfare-Adjusted Life Years (WALY) to quantify the burden of common diseases on dogs' welfare 2 . The WALY comprises two elements: (a) years lived with impaired welfare, i.e., the years spent with a given disease weighted by its severity, and (b) years of life lost due to the premature death caused by the disease or the resulting assisted death. Future life table modelling that accounts for comorbidity and the demographics of dogs can offer information about years of life lost, based upon the life expectancy at the age of death for the individual animal affected. Life tables highlight the value of interpreting life expectancy annually, especially at older ages, where differences in life expectancy between ages become narrower. Thus, the current authors propose that life table literacy is important for veterinary professionals, shelter staff and dog owners because it can optimise decision-making and can subsequently have a positive impact on dog welfare. Life table literacy will promote realistic expectations for the life expectancy of dogs at different ages, helping to shape treatment plans for illness and end-of-life decisions. Shelters and charities can also incorporate this information into the adoption process, ensuring that potential dog owners understand the expected length of ownership commitment required for dogs of different breeds, ages and neuter statuses. With the foundations for canine life table science built by the current work, we hope to generate further examples of life tables for both dogs and cats using VetCompass data in the future. The current study provides a proof of concept that can support future research looking to construct life tables for dogs and cats as a periodic, recurring endeavour.
Consequently, changes in the life expectancy, mortality and health of companion dogs and cats can be tracked similarly to how they are in human demography 15 , 16 . For future life table construction, we hope to incorporate other sources of information, such as KC annual registry data and dog insurance data, with further modelling, which will help to produce even more accurate life tables 71 . This study had some additional limitations to those discussed above. Firstly, the high frequency of euthanasia in companion dogs means that life expectancy is potentially underestimated compared with what would be observed under unassisted death alone 72 . Estimates are especially biased by euthanasia undertaken for non-life-threatening reasons such as undesirable behaviours, economic reasons or convenience 35 , 73 . Consequently, differing cultures of euthanasia in dogs between countries might substantially influence national life tables. Another limitation is the sole inclusion of dogs attending primary veterinary practices. Thus, our results might be less representative of unowned dogs or dogs not attending veterinary clinics. Also, some dogs that died at home or in emergency out-of-hours clinics might be excluded from the current data, although the data did capture all deaths away from the clinics that were reported by owners to the veterinary clinics at any time. Lastly, the sample sizes for some of the 18 breeds (e.g. American Bulldog, Beagle, English Bulldog, French Bulldog, Husky and Pug) were relatively small, resulting in life tables that warrant cautious interpretation. Conclusion The current study has produced the first life tables for dogs in the UK, reporting annual life expectancy and probability of death for the UK companion dog population, for dogs of different sexes and neuter statuses, for breed groups, and for 18 breeds and crossbred dogs. We report an elevated life expectancy in neutered dogs compared to entire dogs and wide variation in life expectancy between breeds, with Jack Russell Terrier and Yorkshire Terrier having the highest and some brachycephalic breeds showing the lowest life expectancy at age 0. The construction and application of life tables offer great potential for companion animal health and welfare sciences but are still in their infancy. Life tables generated in the current study not only promote a better understanding of the life trajectory of dogs but also offer several applications for the veterinary profession and for research to improve the health and welfare of dogs. Data availability The datasets generated and analysed during the current study are publicly available on the RVC Data Repository. R codes for cohort life table construction and all life tables generated by the current study are available online. Abbreviations UK: United Kingdom CI: Confidence interval EPR: Electronic patient record IQR: Interquartile range KC: Kennel Club WALY: Welfare-Adjusted Life Years
Jack Russell Terriers and Yorkshire Terriers have the highest life expectancies of dog breeds in the UK, according to a new study published in the journal Scientific Reports. However, flat-faced breeds such as French Bulldogs and Pugs have some of the lowest life expectancies. Kendy Tzu-yun Teng, Dan O'Neill and colleagues analyzed 30,563 records of dog deaths from veterinary practices across the UK between 2016 and 2020 using the VetCompass database, categorized into 18 dog breeds recognized by the Kennel Club and also a group of crossbreed dogs. They created life tables which calculate life expectancy throughout the life cycle, starting at birth (0 years). Jack Russell Terriers had the highest life expectancy at birth (12.72 years), followed by Yorkshire Terriers (12.54 years), Border Collies (12.10 years), and Springer Spaniels (11.92 years). In contrast, French Bulldogs had the lowest life expectancy at birth (4.53 years). This is approximately three years less than other flat-faced breeds that showed low life expectancies at birth including English Bulldogs (7.39 years) and Pugs (7.65 years). The authors propose that these short life expectancies could result from the high health risks known to occur in these flat-faced breeds. Across all dog breeds, the average life expectancy at age 0 for male dogs was 11.1 years, four months shorter than the estimate for female dogs. Dogs that had been neutered had a higher life expectancy (11.98 years for females and 11.49 years for males) than those that were not neutered (10.50 years for females and 10.58 years for males). The authors discuss the potential benefits of neutering and associated increased life expectancy and whether neutering could possibly reflect more responsible dog owners and better care. The authors conclude their work now enables dog life expectancies to be tracked at different ages, similarly to humans, and may improve predictions for different breeds in the UK. There could also be other practical benefits such as helping dog shelters to provide accurate estimates of a dog's remaining life expectancy during rehoming.
10.1038/s41598-022-10341-6
Nano
The key to mass-producing nanomaterials
Carson T. Riche et al. Flow invariant droplet formation for stable parallel microreactors, Nature Communications (2016). DOI: 10.1038/ncomms10780 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms10780
https://phys.org/news/2016-02-key-mass-producing-nanomaterials.html
Abstract The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis. Introduction Continuous flow microfluidic reactors are powerful tools for synthesizing chemicals and materials 1 , 2 , 3 , 4 , 5 , 6 . Microreactors allow for efficient heat transfer, excellent control of local mixing conditions and provide an ideal format for studying reaction kinetics on a small scale 7 . Microfluidic systems also offer a clear and appealing route to scale-up via massively parallel operation. Scaling by parallelization has clear advantages over traditional scale-up approaches. In contrast to a scale-up approach that relies on increasing the size of a single batch reactor, scale-up by parallelization does not change the local reaction conditions in terms of mixing uniformity and temperature distribution, which are critically sensitive variables for certain chemistries. For example, the scale-up of colloidal inorganic nanoparticle syntheses to yield kg quantities is difficult to execute in conventional batch reactors because higher reagent concentrations or increased reaction volumes affect mass and thermal transport, which in turn affect nucleation and growth, leading to loss of particle quality and poor process reproducibility 8 . Microfluidic parallelization can circumvent these issues and enable a simplified and more predictable scale-up route, as demonstrated in a parallel network of planar droplet formation devices 9 . One major challenge in implementing parallel microreactor systems is developing control and design strategies that guarantee uniform fluidic behaviour across an ensemble of reactors 10 , 11 . Here, we address this issue by presenting a fluidic design based on a three-dimensional (3D)-printed channel junction that allows for geometrically controlled two-phase liquid-in-liquid droplet formation that is robust to fluctuations in driving pressure and flow rate. Microfluidic liquid-in-liquid droplet flows facilitate rapid homogenization of reactants 12 . They are, therefore, ideal for reactions that are sensitive to concentration gradients and local mixing conditions. Droplets are isolated from each other and the channel walls to eliminate dispersion effects and prevent device fouling 13 . The design presented here enables an ensemble of parallel reactors with consistent droplet formation behaviour regardless of inconsistencies in the feed pressure or flow rate across the reactor bank. In contrast, prior attempts at nanoparticle synthesis scale-up using continuous flow have focused on modifying single-channel devices to employ larger droplets and increased operating flow rates 14 , 15 . 
Although these approaches produce good-quality nanoparticles, the strategy cannot be scaled indefinitely. An additional advantage of the droplet formation geometry we present here is that it can be rapidly reconfigured to produce a variety of droplet volumes spanning four orders of magnitude. This range of droplet sizes is used in many applications, including biomimetic vesicle formation, cell encapsulation and millifluidic reactor platforms 16 , 17 . In traditional T-junction and flow-focusing droplet formation devices, the operating parameters (that is, flow rates) allow for a relatively narrow range of droplet sizes to be accessed by a single device geometry 18 . Switching to a different droplet size regime requires redesigning (and refabricating) the device. In the device geometry we present here, droplet size is set by the diameter of an easily interchangeable outlet component. Different droplet sizes are accessible by swapping out this modular component. This control mechanism is in contrast to the upstream geometrical control exhibited in planar droplet formation devices, where the inlet geometry governs droplet size. Coupled with the relative insensitivity of droplet formation to flow rates, this design represents an important innovation in microfluidic droplet formation. In addition to demonstrating the robust operation of this droplet formation geometry in a parallel system, we show it operating as the key element of a droplet microreactor applied to the synthesis of metal nanoparticles. In this paper, we demonstrate the first platinum nanoparticle (PtNP) synthesis using a continuous flow droplet microreactor. A key aspect of our continuous flow synthesis is the use of ionic liquid droplets as the dispersed phase and reaction medium. Ionic liquids are gaining interest as solvents for precious metal nanoparticle synthesis because of their ability to colloidally stabilize nanoparticles and induce high nucleation rates, resulting in more monodisperse nanoparticle ensembles 19 . This, coupled with their environmental health, safety and sustainability advantages over volatile and flammable organic solvents 20 , makes ionic liquids promising solvents for large-scale nanofabrication reactions. Herein, the synthesis of PtNPs is performed in ionic liquid droplets that are successfully recycled and reused to produce PtNPs over multiple runs with high fidelity. Results 3D-printed microfluidic droplet generators This microfluidic droplet generator is designed to be an easy-to-operate device that forms consistent droplet sizes across a broad range of inlet pressures or flow rates. The 3D geometry is manufactured using stereolithographic (SLA) printing technologies. This geometry is designed to interface with commercially available tubing (outer diameter (OD) = 1/16 inch) to create a droplet generating chip that does not require any fabrication steps in a clean room facility and can passively form droplet volumes spanning four orders of magnitude. Tubing connections form an elastomeric seal that resists leaking for water flow rates beyond 8 l h⁻¹. Although 3D printing technology is quickly advancing to create higher-resolution features, a major concern is that the smallest channel feature that has been produced in the commonly used Watershed material is 400 μm. Herein, we overcome this barrier by (i) fabricating a device with a 250-μm feature and (ii) using the printed device as a fluidic manifold while relying on higher-resolution extruded tubing to control droplet size.
The channels deliver the laminated dispersed and continuous phases to the outlet in a perpendicular orientation (Fig. 1a,b). Droplets pinch off as the fluids enter the vertical outlet tubing (Fig. 1c,d). The entire droplet formation process can be seen in Supplementary Movie 1. Figure 1: Renderings of 3D droplet generators and images of the droplet formation process. (a) Computer-aided design (CAD) rendering of a droplet generator with two inlets for the dispersed and continuous phases and a single outlet that accepts tubing (OD = 1/16 inch) with various IDs to control the droplet size. (b) CAD rendering of a droplet generator in which the vertical segment is fully constructed by stereolithography (SLA) rather than being formed by external tubing. (c) Micrographs depicting different views of the device during the droplet breakup process. (d) Micrographs of the droplet breakup process in full SLA droplet generators with an outlet size of 250 or 500 μm. Full size image The basic 3D-printed device shown here can be used to form a broad range of droplet sizes by interfacing it with outlet tubing of various inner diameters. The size of the outlet tubing determines the droplet size, such that the droplet sizes are similar to the inner diameter of the outlet tubing (Fig. 2a). In comparison, a planar T-junction geometry produces droplets with a size governed by the geometry of the dispersed phase inlet 21 . Regardless of the outlet tubing size, the droplet population is monodisperse, which allows the collected droplets to self-assemble into hexagonally close-packed arrays (Supplementary Figs 1–3). Figure 2: Images of droplets and statistics on droplet sizes using various outlet tubing sizes. (a) Micrographs of the droplets formed using the six different sizes of outlet tubing listed. (b) Droplet diameter versus outlet tubing inner diameter for flow rate ratios of 0.05 (left, diamond) and 0.5 (right, circle). Error bars represent the s.d.; some error bars are obscured by the symbols. (c) Plot of the droplet diameter versus fractional droplet number for various outlet tubing sizes. The solid lines represent the average droplet sizes for a single outlet size; each line includes values for flow rate ratios of 1:2 and 1:20 (dispersed to continuous phase). (d) Boxplot of the droplet size produced by droplet generators with the same outlet tubing (ID = 254 μm) and different surface chemistries (that is, hydrophilic, native and hydrophobic) on the channels, as modified by initiated chemical vapour deposition. Full size image We also present a droplet generator where the vertical cavity is integrated into the SLA-manufactured device rather than being provided by external tubing (Fig. 1b,d). The breakup process is imaged in fully printed devices with cylindrical cavity sizes of inner diameter (ID) = 250 and 500 μm (Supplementary Movie 2). We observe droplets forming at the point where the horizontal flow turns vertical, as in the droplet generators with externally connected tubing acting as the vertical cavity. As the dispersed and continuous phases enter the vertically oriented cylinder, the dispersed phase segments into droplets. This 250-μm cylinder is the smallest channel reported using the SLA process with transparent Watershed resin (FineLine Prototyping) 22 . Post printing, FineLine Prototyping clears unreacted resin from the vertical cavity and the devices are used as received.
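Because droplet diameter tracks the outlet tubing inner diameter, the four-orders-of-magnitude span in droplet volume claimed earlier follows directly from the tubing sizes. A quick check in R, assuming spherical droplets with diameter equal to the tubing ID (the six IDs are those reported in the following subsection):

```r
# Spherical droplet volume when droplet diameter equals the outlet tubing
# inner diameter (the flow-invariant result). Note 1 um^3 = 1e-3 pL.
id_um  <- c(25, 127, 178, 254, 508, 762)   # the six outlet tubing IDs, um
vol_pL <- pi / 6 * id_um^3 * 1e-3          # droplet volume, picolitres
setNames(signif(vol_pL, 3), paste0(id_um, " um"))
log10(max(vol_pL) / min(vol_pL))           # ~4.5, i.e. four orders of magnitude
```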
Flow invariant droplet formation The defining characteristic of this droplet-forming device is that uniform droplets can be formed while varying the flow rate ratio. We demonstrate flow invariant droplet formation for six different commercially available sizes of outlet tubing. The smallest and largest inner diameters are 25 and 762 μm, respectively. For each tubing size, flow invariant droplet formation is observed up to an upper limit flow rate ( Fig. 2b ). By analysing the sizes of collected droplets, we determine the upper limit of the flow invariant regime for each tubing size. The data presented are collected at this upper limit, at continuous phase flow rates of 0.2, 5, 10, 20, 80 and 180 ml h −1 corresponding to outlet tubing with inner diameters of 25, 127, 178, 254, 508 and 762 μm, respectively. Below this upper limit, the droplet size is approximately the same as the diameter of the outlet tubing ( Fig. 2b ). The upper limit of the flow invariant regime can be expressed in terms of the capillary number ( Ca ) of the system. For all outlet tubing sizes, this value (calculated using the inner diameter of the outlet as the characteristic length and the continuous phase flow rate as characteristic velocity) is about 10 −3 . Below this capillary number, the droplet size is independent of the flow rate ratio for values of 1:20 and 1:2 ( Fig. 2c ). The droplet diameters are plotted versus the fractional droplet number. These data show the entire population of droplets. The total number of droplets is normalized to one and represented as a fractional droplet number. We also observe a consistent droplet size for intermediate flow rate ratios ( Supplementary Fig. 4 ). At higher capillary numbers, in the flow-dependent regime, the droplet formation process transitions to a jetting mechanism ( Supplementary Movie 3 ). The same flow invariant behaviour is observed for a more viscous dispersed phase of 70 wt% glycerol in water ( Supplementary Fig. 5 ), although the threshold for invariance shifts to a lower capillary number. In the flow invariant regime, the median droplet sizes have a coefficient of variation <3%. The output of the droplet generator is dependent on the geometry of the outlet tubing and not on the surface chemistry of the channel before the outlet. We modify the internal channel surfaces by depositing a poly(ethylene glycol diacrylate) or poly(1 H ,1 H ,2 H ,2 H -perfluorodecyl acrylate-co-ethylene glycol diacrylate) polymeric film via initiated chemical vapour deposition (iCVD). The two coatings have water contact angles of 60° and 120°, respectively. The native (uncoated) material has a contact angle of 100°. With the same outlet tubing (ID=254 μm), the coated devices produce the same size droplets as the unmodified device ( Fig. 2d ). The roughness of the channel surfaces was the same before and after coating, as determined by scanning electron microscopy ( Supplementary Fig. 6 ). Device parallelization A single-droplet formation device can be used as a single unit in a highly parallelized system of n units to linearly create n -fold droplets and n -fold throughput. Ideally, a parallelized system of single-droplet generators is designed to have an equal pressure drop and resistance over each channel to create identical flow conditions in each droplet generator. However, feedback between channels arises because of unequal numbers of droplets flowing in each channel 23 .
This imbalance leads to flow rate fluctuations that alter the final droplet size, but such fluctuations are irrelevant in the 3D droplet generators because of their insensitivity to flow rate. For this reason, our droplet generator is uniquely suited for use in a parallelized network. To demonstrate this, we construct a parallel network ( n =4) of droplet generators and deliver different flow rates to each device ( Fig. 3a ). We print a manifold to distribute the dispersed and continuous phases to four droplet generators. The manifold connects to four independent droplet generators by jumper cables (that is, sections of poly(ether ether ketone) (PEEK) tubing). The jumper cables connecting the dispersed phases have lengths of 10, 12.5, 15 and 17.5 mm, resulting in relative resistances of 1x, 1.25x, 1.5x and 1.75x, respectively, because the resistance is linearly proportional to the length of cylindrical tubing. The network assembly creates the largest pressure drop over the dispersed phase jumper cables so there is minimal feedback from the continuous phase jumper cables or the outlets. The IDs of the jumper cables are 127, 762 and 254 μm for the dispersed phase, continuous phase and outlets, respectively. Figure 3: Setup and performance of unbalanced parallel network. ( a ) Schematic of the parallel network assembled by connecting a distribution manifold to four droplet generators. The continuous phase was linked using low resistance jumper tubing (ID=762 μm) and the dispersed phase was linked using various lengths of tubing (ID=127 μm) to create a gradient of resistances across the four branches. ( b ) Droplet diameters ( n >1,000) produced by the four branches of the parallel network (left) at dispersed and continuous phase flow rates of 10 and 70 ml h −1 (purple circles) and 30 and 210 ml h −1 (black triangles) while operating in and beyond the flow invariant regime, respectively. Error bars represent the s.d. Full size image The network successfully delivers the same continuous phase flow rate and different dispersed phase flow rates to each droplet generator. When operating outside the flow invariant regime (that is, at high capillary number), the droplet size produced is dependent on the branch location and therefore the dispersed phase flow rate ( Fig. 3b ). As expected, the droplet size is smaller in branches with a lower dispersed phase flow rate. In contrast, when operating within the flow invariant regime (that is, at low capillary number), the droplet size is independent of the branch location despite a different dispersed phase flow rate being delivered to each channel ( Fig. 3b ). As expected, the balanced parallel network delivers the same flow rates to each droplet generator and produces a constant droplet size across each of the devices, in both flow regimes ( Supplementary Fig. 7 ). Figure 4: Micrographs and characterization of PtNPs synthesized in ionic liquid solvent using 3D droplet generator. ( a ) Transmission electron microscopy (TEM) images of PtNPs produced using the microfluidic droplet generator using 1x recycled, 2x recycled and 3x recycled BMIM-Tf 2 N ionic liquid. Scale bars represent 50 nm and histograms represent the NP diameters ( n =500) from multiple TEM images. ( b ) X-ray diffraction (XRD) of the PtNPs. ( c ) 1 H NMR spectra of the BMIM-Tf 2 N ionic liquid (*residual solvent peak), and ( d ) 19 F NMR spectra of the BMIM-Tf 2 N ionic liquid. Both spectra in black are of the ionic liquid as received.
( e ) Comparison of the overall yield from various reactions. Full size image PtNP synthesis The droplet generator presented here is suitable for chemical synthesis in dispersed phases with a wide range of solvent properties. As a proof-of-concept, we demonstrate the synthesis of PtNPs by a polyol reduction in droplet flows of 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMIM-Tf 2 N) ionic liquid solvent. There are few examples of droplet flows of ionic liquids because they represent an exceptional case of droplet flow behaviour as a result of their complex interfacial properties and high viscosity 1 , 24 , 25 . Here, BMIM-Tf 2 N ionic liquid droplets are formed readily in a modified droplet generator with three inlets to accommodate two reagent/dispersed phase streams ( Supplementary Fig. 8 ). The two reagent inlets supply (i) the potassium tetrachloroplatinate(II) (K 2 PtCl 4 ) precursor and the reducing agent (that is, ethylene glycol) and (ii) the poly(vinylpyrrolidone) (PVP) in BMIM-Tf 2 N. Droplets of the combined reagents are formed using PEEK tubing (ID=762 μm) in the outlet. The reaction is initiated by flowing the droplets into a convection oven at 150 °C to quickly nucleate the PtNPs. The temperature in the tubing equilibrates in less than a second to trigger the nucleation event. Likewise, an abrupt cooling step quickly quenches the reaction and arrests nanoparticle growth. There is no observable clogging of or deposition on the channel surfaces after a 2-h reaction ( Supplementary Fig. 9 ). Powder X-ray diffraction analysis confirms that the resulting nanoparticles crystallize in the face-centred cubic structure expected for Pt metal. An average lattice parameter of a =3.87 Å is calculated for the PtNPs, which is in close agreement with bulk Pt metal (PDF# 00-004-0802). Moreover, the diffraction peaks are broadened, suggesting the presence of small nanoparticles on the order of ∼ 6 nm by the Scherrer equation ( Fig. 4b ). We synthesize three rounds of PtNPs and recycle the BMIM-Tf 2 N ionic liquid solvent between rounds. To recycle the ionic liquid solvent, the PtNPs are harvested from the ionic liquid, which is then washed to remove excess ethylene glycol and PVP. The purity of the BMIM-Tf 2 N is unchanged upon recycling, as evidenced by 1 H and 19 F nuclear magnetic resonance (NMR) ( Fig. 4c,d and Supplementary Note 1 ). Transmission electron microscopy images reveal that the PtNPs appear spherical and uniform in morphology across each reuse of the ionic liquid ( Fig. 4a ). For each condition, 500 particles are analysed; their average sizes are 5.65±0.76, 5.96±0.90 and 6.56±0.92 nm for 1x, 2x and 3x recycled ionic liquid, respectively. The mean sizes are all within one standard deviation of each other. We provide a general device and platform for running reactions in parallelized droplet reactors. When PtNPs are synthesized in an n =4 parallel network, all four branches produce high-quality particles, as evidenced by consistent size distributions that are within error of each other ( Supplementary Fig. 10 ). In addition, translation of the PtNP synthesis from batch scale to a continuous flow droplet reactor results in significantly higher yields ( Fig. 4e ). For each run with the recycled ionic liquid and for the overall parallel reaction, the continuous flow yield is about twice the batch yield. We attribute the greater yield to a more rapid and uniform heating profile, as well as improved mixing, as compared with the batch reaction.
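For reference, the Scherrer estimate quoted above can be reproduced with a short sketch; the peak position corresponds to the Pt(111) reflection of fcc Pt for Cu K α radiation, while the peak width is an assumed, illustrative value chosen to land near the ~6 nm size stated in the text:

```python
# Sketch of the Scherrer crystallite-size estimate, D = K * lambda / (beta * cos(theta)),
# using the Cu K-alpha wavelength of the diffractometer described in the Methods.
# The FWHM below is an assumed value for illustration, not a measured one.
import math

K = 0.9                   # dimensionless shape factor
WAVELENGTH_NM = 0.15406   # Cu K-alpha wavelength, nm

def scherrer_size_nm(fwhm_deg, two_theta_deg):
    beta = math.radians(fwhm_deg)                 # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)     # Bragg angle
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# The Pt(111) reflection of fcc Pt sits near 2-theta = 39.8 deg for Cu K-alpha;
# an assumed FWHM of 1.5 deg gives a crystallite size of roughly 5.6 nm:
print(f"D ~ {scherrer_size_nm(1.5, 39.8):.1f} nm")
```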
The combined use of our droplet generator in a parallel network and a reusable solvent system provides an ideal platform for the efficient manufacturing of large quantities of nanomaterial product. Discussion We introduce a novel droplet generator that uses a 3D geometry to form droplets of controlled size. A key feature of this droplet formation geometry is that there is a broad regime of inlet flow rate ratios over which the resulting droplet size is invariant to flow rate. Another advantage of this droplet formation format is that its inherent modularity makes it simple to select the size of droplets that will be formed. The size can easily be tuned by changing the ID of the outlet tubing. The ease of device fabrication and operation lowers the barrier-to-entry for first time users of microfluidic devices. An end user need not have an extensive understanding of fluid mechanics to operate the device and achieve the desired, well-controlled droplet sizes. The relationship between outlet tubing size and droplet size seems to underlie droplet size stability. We speculate that this relationship is due to the mode of droplet formation. Droplet breakup occurs as the two flows alternate to enter and fill the opening of the vertical cavity created by the outlet tubing. Droplets form at the interface between the outlet tubing and the horizontal flow, where the dispersed phase is pinched off ( Fig. 1c,d and Supplementary Movie 1 ). When the dispersed phase fills the outlet it causes an upstream buildup of continuous phase pressure. Once the outlet is completely occluded by the dispersed phase, the droplets shear off, releasing the continuous phase pressure. As the outlet tubing size increases, more dispersed phase accumulates before the outlet is completely occluded, explaining the correlation between outlet tubing size and final droplet size. These droplet-forming devices are uniquely suited to high-throughput processing using microfluidics. When assembled in a parallel configuration, they are insensitive to small changes in flow that could arise because of feedback between channels. This is demonstrated by building a four-branched parallel network that produces droplets of similar size across the network despite an intentional gradient of dispersed phase flow rates being delivered to the branches. The devices also resist clogging that could affect droplet formation. We synthesize monodisperse PtNPs over multiple runs while using the same recycled ionic liquid solvent. The yield in the continuous flow reactions is ∼ 60% and nearly twice the yield of an analogous batch reaction. Using this device infrastructure along with more sustainable chemistry provides an ideal platform for producing large quantities of precious metal nanoparticles. This can be easily extended to other applications requiring high-throughput synthesis in microfluidic droplets. Methods Device fabrication Microfluidic chips were designed in ProEngineer, exported as stereolithography files and printed in Somos Watershed XC 11122 by FineLine Prototyping using high-resolution SLA printing technology. Devices were used as received. Inlets and outlets interfaced with OD=1/16 inch tubing. The device shown in Fig. 1a has a channel height of 1 mm, inlet and outlet holes 1.59 mm in diameter, a main channel length of 5 mm and a main channel width of 4 mm. The fully 3D printed droplet generator in Fig. 1b has the same dimensions as the device in Fig.
1a and the additional vertical cavity was printed with an internal diameter of 250 or 500 μm. Droplet visualization Water-in-oil droplets were formed using an aqueous phase of Fe(SCN) x (3-x)+ complex (for visualization) in deionized water, prepared by mixing 0.2 M KSCN (Sigma) with 0.067 M Fe(NO 3 ) 3 ·9H 2 O (Sigma) in a 1:1 volumetric ratio. The oil phase was 1% (w/v) Span80 (Sigma) in hexanes (Sigma). Fluids were driven by syringe pumps (Harvard Apparatus). Droplets were collected in a glass bottom Petri dish (MatTek) containing 1 ml of the 1% (w/v) Span80 in hexanes solution and imaged on an inverted Zeiss microscope using a × 20 objective. At least 60 images of the collected droplets were captured for each flow condition. Droplet sizes were analysed using custom image processing code in Matlab, primarily using the imfindcircles function. Droplet formation was monitored in situ using a Phantom V711 camera (Vision Research). Images were captured at 4,000 frames per second with a 240-μs exposure. PtNP synthesis in microfluidic device K 2 PtCl 4 (99.9%; Strem), PVP (MW=55,000; Aldrich), ethylene glycol (99.8%; Sigma-Aldrich) and BMIM-Tf 2 N (99%; IoLiTec Lot # K00219.1.4.) were used as received. In a typical procedure, K 2 PtCl 4 (156.0 mg) was added to ethylene glycol (10.0 ml) and bath sonicated until dissolved, affording a brown-red mixture. Separately, PVP (852.4 mg) was dissolved in BMIM-Tf 2 N (30.0 ml) by heating at 130 °C for 10 min to give a clear solution. PtNPs were synthesized using our 3D droplet-forming device by modifying the original design to incorporate three inlets ( Supplementary Fig. 8 ). Two inlets supplied the K 2 PtCl 4 in ethylene glycol at 10 ml h −1 and the PVP in BMIM-Tf 2 N at 30 ml h −1 . The third inlet supplied a continuous phase of FC-40 (Sigma) at 90 ml h −1 . The outlet PEEK tubing (ID=0.03 inch) fed into 200 feet of perfluoroalkoxy tubing (McMaster-Carr) that was placed in a convection oven set to 150 °C. The effluent was collected into a receiving flask cooled in an ice bath to quench the reaction. The PtNPs were transferred to two 50-ml centrifuge tubes and precipitated with acetone (30 ml) to afford a black suspension, which was briefly vortex stirred ( ∼ 1 min), bath sonicated (3 min) and collected by centrifugation (6,000 r.p.m.; 5 min). The colourless supernatant was decanted and saved in order to wash the ionic liquid for further syntheses. The black nanoparticulate solid was redispersed in ethanol (10 ml), bath sonicated (3 min), precipitated with hexanes (30 ml) and centrifuged (6,000 r.p.m.; 5 min). This process was repeated 3 × in order to remove any residual organics (that is, PVP, BMIM-Tf 2 N and ethylene glycol). The purified PtNPs were re-dispersed in ethanol (5 ml) and remained colloidally stable for at least 5 months. Recycling the ionic liquid Equal volumes of hexanes, with respect to the ionic liquid, were added and vortex stirred before centrifugation (6,000 r.p.m.; 10 min). Upon phase separation, the hexanes layer was removed and additional hexanes were added; this extraction procedure was repeated 3 × . The ionic liquid was dried in a rotary evaporator at 80 °C for ca 2 h. The purity of the washed ionic liquid was confirmed by 1 H and 19 F NMR spectroscopy. Initiated chemical vapour deposition Devices were coated as received with poly(1 H ,1 H ,2 H ,2 H -perfluorodecyl acrylate-co-ethylene glycol diacrylate) (poly(PFDA-co-EGDA)) or poly(ethylene glycol diacrylate) (poly(EGDA)) 1 , 26 , 27 .
Briefly, the devices were coated using the iCVD process. Microfluidic reactors were placed on a temperature-controlled stage maintained at 30 °C within a pancake-shaped vacuum chamber (GVD Corp., 250 mm diameter, 48 mm height). The polymer coating was formed via a free-radical chain mechanism from the reaction of vapour phase precursors. Di- tert -butyl peroxide (Sigma) and PFDA (SynQuest) and/or EGDA (Monomer-Polymer) were introduced into the vacuum chamber at a pressure of 100 mTorr, and the initiator molecules were thermally decomposed into free radicals at 250 °C by a resistively heated array of nichrome wire. The radicals and monomer molecules diffused into the channels via the inlets and outlet, adsorbed to the surface of the material, and polymerized to form a thin, conformal coating. PtNP synthesis in batch In a typical procedure, K 2 PtCl 4 (39.0 mg) was added to ethylene glycol (2.5 ml) in a 23-ml vial and bath sonicated until dissolved, affording a brown-red mixture. Separately, PVP (213.1 mg) was dissolved in BMIM-Tf 2 N (7.5 ml) by heating at 130 °C for 10 min to give a clear solution. The two solutions were then combined in a 25-ml round bottom flask and placed in a silicone oil bath preheated to 150 °C while stirring at 1,000 r.p.m. The reaction was quenched with an ice bath after 15 min. The washing of the particles and the recycling of the ionic liquid followed the same procedure as in the microfluidic device reaction. Characterization Transmission electron microscopy images were obtained using a JEOL JEM2100F (JEOL Ltd.) microscope operating at 200 kV. Samples were prepared on a 400 mesh Cu grid coated with a lacey carbon film (Ted Pella, Inc.) by drop-casting a dilute suspension of PtNPs in ethanol. The size distribution of the PtNPs was determined by analysing 500 unique nanoparticles. Powder X-ray diffraction patterns were collected on a Rigaku Ultima IV diffractometer operating at 40 mA and 40 kV with a Cu K α X-ray source ( λ =1.5406 Å). The step size and collection time were 0.015° and 10 s per step, respectively. 1 H and 19 F NMR spectra were obtained on a Varian 600 spectrometer (600 and 564 MHz, respectively) with chemical shifts reported in p.p.m. Additional information How to cite this article: Riche, C. T. et al . Flow invariant droplet formation for stable parallel microreactors. Nat. Commun. 7:10780 doi: 10.1038/ncomms10780 (2016).
Nanoparticles - tiny particles 100,000 times smaller than the width of a strand of hair - can be found in everything from drug delivery formulations to pollution controls on cars to HD TV sets. With special properties derived from their tiny size and consequently increased surface area, they're critical to industry and scientific research. They're also expensive and tricky to make. Now, researchers at USC have created a new way to manufacture nanoparticles that will transform the process from a painstaking, batch-by-batch drudgery into a large-scale, automated assembly line. The method, developed by a team led by Noah Malmstadt of the USC Viterbi School of Engineering and Richard Brutchey of the USC Dornsife College of Letters, Arts and Sciences, was published in Nature Communications on Feb. 23. Consider, for example, gold nanoparticles. They have been shown to be able to easily penetrate cell membranes without causing any damage - an unusual feat, given that most penetrations of cell membranes by foreign objects can damage or kill the cell. Their ability to slip through the cell's membrane makes gold nanoparticles ideal delivery devices for medications to healthy cells, or fatal doses of radiation to cancer cells. However, a single milligram of gold nanoparticles currently costs about $80 (depending on the size of the nanoparticles). That places the price of gold nanoparticles at $80,000 per gram - while a gram of pure, raw gold goes for about $50. "It's not the gold that's making it expensive," Malmstadt said. "We can make them, but it's not like we can cheaply make a 50 gallon drum full of them." Right now, the process of manufacturing a nanoparticle typically involves a technician in a chemistry lab mixing up a batch of chemicals by hand in traditional lab flasks and beakers. Brutchey and Malmstadt's new technique instead relies on microfluidics - technology that manipulates tiny droplets of fluid in narrow channels. "In order to go large scale, we have to go small," Brutchey said. Really small. The team 3D printed tubes about 250 micrometers in diameter - which they believe to be the smallest, fully enclosed 3D printed tubes anywhere. For reference, your average-sized speck of dust is 50 micrometers wide. They then built a parallel network of four of these tubes, side-by-side, and ran a combination of two non-mixing fluids (like oil and water) through them. As the two fluids fought to get out through the openings, they squeezed off tiny droplets. Each of these droplets acted as a micro-scale chemical reactor in which materials were mixed and nanoparticles were generated. Each microfluidic tube can create millions of identical droplets that perform the same reaction. This sort of system has been envisioned in the past, but it hasn't been possible to scale it up because the parallel structure meant that if one tube got jammed, it would cause a ripple effect of changing pressures along its neighbors, knocking out the entire system. Think of it like losing a single Christmas light in one of the old-style strands - lose one, and you lose them all. Brutchey and Malmstadt bypassed this problem by altering the geometry of the tubes themselves, shaping the junction between the tubes such that the particles come out a uniform size and the system is immune to pressure changes. Malmstadt and Brutchey collaborated with Malancha Gupta of USC Viterbi and USC graduate students Carson Riche and Emily Roberts.
10.1038/ncomms10780
Biology
Evolutionarily novel genes at work in tumors
A. A. Makashov et al. Oncogenes, tumor suppressor and differentiation genes represent the oldest human gene classes and evolve concurrently, Scientific Reports (2019). DOI: 10.1038/s41598-019-52835-w E. A. Matyunina et al. Evolutionarily novel genes are expressed in transgenic fish tumors and their orthologs are involved in development of progressive traits in humans, Infectious Agents and Cancer (2019). DOI: 10.1186/s13027-019-0262-5 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-52835-w
https://phys.org/news/2019-12-evolutionarily-genes-tumors.html
Abstract Earlier we showed that the human genome contains many evolutionarily young or novel genes with tumor-specific or tumor-predominant expression. We suggest calling such genes Tumor Specifically Expressed, Evolutionarily New ( TSEEN ) genes. In this paper we performed a study of the evolutionary ages of different classes of human genes, using homology searches in genomes of different taxa in the human lineage. We discovered that different classes of human genes have different evolutionary ages and confirmed the existence of TSEEN gene classes. On the other hand, we found that oncogenes, tumor-suppressor genes and differentiation genes are among the oldest gene classes in humans and that their evolution occurs concurrently. These findings confirm non-trivial predictions made by our hypothesis of the possible evolutionary role of hereditary tumors. The results may be important for better understanding of tumor biology. TSEEN genes may become the best tumor markers. Introduction We are interested in the possible evolutionary role of tumors. In previous publications 1 , 2 , 3 , 4 , 5 we formulated the hypothesis of the possible evolutionary role of hereditary tumors, i.e. tumors that can be passed from parent to offspring. According to this hypothesis, hereditary tumors were the source of extra cell masses which could be used in the evolution of multicellular organisms for the expression of evolutionarily novel genes, for the origin of new differentiated cell types with novel functions and for building new structures which constitute evolutionary innovations and morphological novelties. Hereditary tumors could play an evolutionary role by providing conditions (space and resources) for the expression of genes newly evolving in the DNA of germ cells. As a result of the expression of novel genes, tumor cells acquired new functions and differentiated in new directions, which might lead to the origin of new cell types, tissues and organs 5 . The new cell type was inherited in progeny generations due to genetic and transgenerational epigenetic mechanisms similar to those for pre-existing cell types 5 , 6 , 7 . Our hypothesis makes several nontrivial predictions. One of the predictions is that tumors could be selected for new functional roles beneficial to the organism. This prediction was addressed in a special work 5 , 8 , in which it was shown that the “hoods” of some goldfish varieties, such as Lionhead, Oranda, etc., are benign tumors. These tumors have been selected by breeders for hundreds of years and eventually formed a new organ, the “hood”. The other prediction of the hypothesis is that evolutionarily young and novel genes should be specifically expressed in tumors. This prediction was verified in a number of papers from our laboratory 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 . We have described several evolutionarily young or novel genes with tumor-predominant or tumor-specific expression, and even the evolutionary novelty of a whole class of genes – cancer/testis genes – which consists of evolutionarily young and novel genes expressed predominantly in tumors (reviewed in 9 ). We suggest calling such genes Tumor Specifically Expressed, Evolutionarily New ( TSEEN ) genes 5 , 9 . TSEEN genes may become the best tumor markers 9 , 10 . In this paper, we performed a systematic study of the evolutionary ages of different functional classes of human genes in order to verify one more nontrivial prediction of the hypothesis of the possible evolutionary role of hereditary tumors, i.e.
the prediction of concurrent evolution of oncogenes, tumor suppressor genes and differentiation genes 2 , 3 , 5 . Results The curves of gene age distribution for different classes of human genes obtained by the ProteinHistorian tool are represented in Figs 1 – 7 . Figure 1 Distribution of human housekeeping genes and all protein coding genes according to their evolutionary ages. The evolutionary ages of the gene classes are measured numerically in million years at the median of distribution, i.e. at the time point on the human evolutionary timeline that corresponds to the origin of 50% of genes in this class. Full size image Figure 2 Gene age distributions of different classes of human genes. Full size image Figure 3 Cluster I of gene age distribution and control curves. Full size image Figure 4 Cluster II of gene age distribution and control curve. Full size image Figure 5 Cluster III of gene age distribution and control curve. Full size image Figure 6 Gene age distribution for different classes of human genes between Euarchontoglires and H. sapiens. Full size image Figure 7 The proportion of different classes of human genes originated between Homininae and H. sapiens. Full size image These figures show curves sloping upward from left to right. The uppermost curve describes the gene age distribution of human housekeeping genes. The evolutionary age of this gene class, defined by the median position of the curve, is 894 million years (Ma) (Fig. 1 ). The curve of all human protein-coding genes has an evolutionary age of 600 Ma (Figs 1 and 3–5 ). These curves were used as control curves in our study. Some curves are located mainly between the control curves (Fig. 3 ), others are located below the second control curve (Figs 4 and 5 ). The median ages of the other groups of genes are the following: oncogenes (750 Ma), tumor suppressor genes (750 Ma), differentiation genes (693 Ma), homeobox genes (450 Ma), apoptosis genes (360 Ma), cancer/testis (CT) antigen genes (autosomal) (324 Ma), Biomedical Center globally subtracted, tumor-specifically expressed (BMC GSTSE) protein-coding genes (220 Ma), BMC GSTSE non-coding sequences (130 Ma), CT antigen genes located on the X chromosome (CT-X) (60 Ma) and BMC GSTSE non-coding sequences located on the X chromosome (BMC GSTSE-X non-coding sequences) (50 Ma) (Fig. 2 ). In most of the cases the pairwise differences in the age distributions of genes belonging to different classes are statistically significant (see Supplementary Dataset 1 ). As follows from Figs 2 – 5 , the curves are organized in clusters. The existence of the clusters is supported by hierarchical cluster analysis (Fig. 8 ). The Kolmogorov-Smirnov distance classification demonstrates moderate bootstrap reliability. If we remove housekeeping genes (control) from consideration, and replace BMC GSTSE non-coding sequences with GSTSE-X non-coding sequences, the Kolmogorov-Smirnov distance classification demonstrates perfect bootstrap reliability (Fig. 9 ). The difference between the three clusters’ evolutionary ages is statistically significant (chi-square P-value does not exceed 1 * 10 −300 ; χ 2 = 1756 with 30 df), as is the pairwise difference of the ages of each pair of clusters (see Supplementary Dataset 2 ). Figure 8 Hierarchical classification of 10 classes of human genes (Kolmogorov-Smirnov, complete linkage). Full size image Figure 9 Hierarchical classification of 9 classes of human genes (Kolmogorov-Smirnov, complete linkage).
Full size image Cluster I includes the gene age distribution curves of human housekeeping genes, oncogenes, tumor suppressor genes and differentiation genes. It is located mainly between the control curves (Fig. 3 ). Below the all protein-coding genes curve is the larger part of cluster II (including the following: homeobox genes, apoptosis genes, autosomal CT antigen genes and BMC GSTSE protein-coding sequences, Fig. 4 ). The lowest position is occupied by cluster III, which includes the gene age distribution curves of the BMC GSTSE and BMC GSTSE-X non-coding sequences, and of CT-X antigen gene orthologs (Fig. 5 ). The curves which belong to cluster I demonstrate growth starting from > 4000 Ma. In Bilateria (910 Ma) they reach a proportion of 30%. The oncogene age distribution curve stays almost flat until Opisthokonta (1368 Ma), but after Opisthokonta goes upward and in Bilateria reaches 30% like the other curves of cluster I. Between Bilateria and Chordata all curves of cluster I show a steep increase to 50%, and after Chordata (797 Ma) keep an almost constant slope up to 100% (Figs 2 and 3 ). The curve of housekeeping gene ages reaches 23% in Eukaryota , 29% in Opisthokonta , 47% in Bilateria , and makes a similar jump of 15% between Bilateria and Chordata (Fig. 1 ). The curves of cluster II are slightly sloping until Opisthokonta , then slowly grow between Opisthokonta and Bilateria , and then demonstrate a 20% jump between Bilateria and Chordata , similarly to the curves of cluster I. The curve of homeobox gene ages, which belongs to cluster II, demonstrates an almost constant slope between Bilateria and Eutheria (Figs 2 , 4 ). The curves of CT-X antigen genes and BMC GSTSE-X non-coding sequences are characterized by the highest growth (as compared to other curves) of 78% and 67%, respectively, during the last 90 million years (Figs 2 and 5 – 6 ). The gene ages curve of BMC GSTSE-X non-coding sequences occupies the lowest position during the period of the last 67 Ma (53%) and shows the maximum slope during the period of the last 6 Ma (25%), when the majority of other curves stop increasing (Figs 6 and 7 ). The CT-X antigen gene class was stochastically younger than the housekeeping gene class (two-sided test P-value 0.027) and the tumor suppressor gene class (two-sided test P-value 0.049), but after correction for simultaneous multiple testing these results are not significant (see Supplementary Dataset 3 for the complete pairwise relative evolutionary novelty analysis for different gene classes). Moreover, we discovered that the class of BMC GSTSE non-coding sequences was stochastically younger than the class of housekeeping genes (two-sided test P-value 2.6 * 10 −4 ) and that the differentiation gene class was stochastically younger than the housekeeping gene class (two-sided test P-value 5 * 10 −6 ). The bootstrap rates of the stochastically younger cases agree with these hypotheses (see Supplementary Dataset 4 ). We also found that cluster III was stochastically younger than cluster I (two-sided test P-value 1.7 * 10 −5 ) and than the combination of clusters I and II (two-sided test P-value 1.9 * 10 −5 ). Moreover, cluster III was stochastically younger than all protein-coding genes (P-value 0.0015) (Supplementary Dataset 5 , see also Supplementary Dataset 6 for the bootstrap agreement). Many genes that we studied belong to two or more classes (Supplementary Dataset 7 ).
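A minimal sketch of the hierarchical classification used here (Figs 8 and 9), with the Kolmogorov-Smirnov statistic between gene-age distributions as the inter-class distance and complete linkage; the age arrays are placeholders standing in for the per-gene ages (in Ma) produced by ProteinHistorian, with sample sizes matching the paper:

```python
# Sketch of the KS-distance / complete-linkage classification of gene classes.
# The distributions below are placeholders, NOT real gene ages.
import numpy as np
from scipy.stats import ks_2samp
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

rng = np.random.default_rng(42)
gene_ages = {                                  # placeholder age samples (Ma)
    "oncogenes": rng.normal(750, 300, 224).clip(0, 4000),
    "tumor_suppressors": rng.normal(750, 300, 984).clip(0, 4000),
    "differentiation": rng.normal(693, 300, 3697).clip(0, 4000),
    "homeobox": rng.normal(450, 200, 231).clip(0, 4000),
    "CT_X": rng.normal(60, 30, 78).clip(0, 4000),
}

names = list(gene_ages)
dist = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            # KS statistic = maximum distance between the two empirical CDFs
            dist[i, j] = dist[j, i] = ks_2samp(gene_ages[a], gene_ages[b]).statistic

tree = linkage(squareform(dist), method="complete")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])   # leaf order
```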
Since we are interested in the co-evolution of differentiation genes, oncogenes and tumor-suppressor genes, we examined the gene age distributions of pairwise intersections of these gene classes (Supplementary Fig. 1 ) and of their pairwise subtractions (Supplementary Fig. 2 ). We found that the curves of overlapping gene subclasses (diff x onco, diff x TSG, and onco x TSG) and subtracted gene subclasses (diff-onco, diff-TSG, onco-diff, onco-TSG, TSG-diff and TSG-onco) have similar shapes, and the ages of the gene subclasses are similar to the ages of the original gene classes (i.e. differentiation genes, oncogenes and tumor suppressor genes) (Supplementary Figs 1–3 ). The curves of pairwise gene subclasses fit in the same cluster, i.e. cluster I (Supplementary Fig. 4 ). Discussion To study different functional classes of genes we used publicly available gene databases describing different gene classes – The Human Protein Atlas (housekeeping genes); Tumor-Associated Gene database (TAG database) (oncogenes); TSGene (tumor suppressor genes); CTDatabase (cancer/testis (CT) antigen genes); HomeoDB (HomeoBox genes); DeathBase (apoptosis genes); GeneOntology (differentiation genes); Biomedical Center Database (BMC GSTSE protein-coding genes and BMC GSTSE non-coding sequences). All annotated human protein coding genes (Genome assembly GRCh38) and housekeeping genes were used as controls. Although we understand the limitations of such an approach, connected with the differing philosophies of the database authors and the continual upgrading of databases, we were able to obtain meaningful results. The results were also reproducible for different versions of databases, with curves corresponding to different versions almost overlapping (see Supplementary Fig. 5 ). We decided to study the ages of different gene classes in order to verify the predictions which stem from the hypothesis of the possible evolutionary role of heritable tumors formulated by one of us 5 . According to this hypothesis, hereditary tumors were the source of extra cell masses, which might be used in the evolution of multicellular organisms for the expression of evolutionarily novel genes and for the origin of new differentiated cell types with novel functions. The evolutionary role of cellular oncogenes might consist in sustaining a certain level of autonomous proliferative processes in the evolving populations of organisms and in promoting the expression of evolutionarily new genes. After the origin of a new cell type, the corresponding oncogene should have turned into a cell type-specific regulator of cell division and gene expression. If true, the number of cellular oncogenes should correspond to the number of cell types in higher animals 2 , 3 , 5 . If tumors and cellular oncogenes played a role in evolution as proposed, then the evolution of oncogenes, tumor suppressor genes, differentiation genes and cell types should proceed concurrently 5 . We found that any functional gene class includes genes with different evolutionary ages. This means that genes with similar functions originated during different periods of evolution. The age of a gene was defined by the most recent common ancestor on the human evolutionary timeline 20 , 21 containing genes with similar sequences, i.e. with a significant BLAST score (or HMMER E-value). The age of a functional gene class (or the age of the cluster) was described by the distribution of ages of genes belonging to this gene class (i.e. a particular gene database).
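The age-assignment rule just described can be summarized in a short sketch; the divergence times used here are the values quoted in this paper (the >4000 Ma bound for cellular organisms is approximate), and the hit sets stand in for real BLAST/HMMER results:

```python
# Sketch of the age-assignment rule: a gene's age is the divergence time of
# the most distant taxon on the human lineage whose genomes contain a
# significant homolog. Times (Ma) are those quoted in this paper; the full
# ProteinHistorian lineage has 16 taxa, only a subset is shown here.
HUMAN_LINEAGE = [                 # oldest first: (taxon, divergence time, Ma)
    ("Cellular organisms", 4000), # approximate lower bound (">4000 Ma")
    ("Opisthokonta", 1368),
    ("Bilateria", 910),
    ("Chordata", 797),
    ("Homininae", 6),
    ("H. sapiens", 0),
]

def gene_age(hit_taxa):
    """Return (taxon, age) of the oldest lineage taxon with a significant hit."""
    for taxon, age in HUMAN_LINEAGE:
        if taxon in hit_taxa:
            return taxon, age
    return "H. sapiens", 0        # no homologs outside humans: a novel gene

def class_age_median(ages):
    """Class age = median of the class's gene-age distribution (Ma)."""
    ages = sorted(ages)
    mid = len(ages) // 2
    return ages[mid] if len(ages) % 2 else 0.5 * (ages[mid - 1] + ages[mid])

print(gene_age({"Chordata", "Bilateria"}))    # -> ('Bilateria', 910)
```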
For convenience, the age of the gene class can be measured numerically in million years at the median of the distribution, i.e. at the time point on the human evolutionary timeline that corresponds to the origin of 50% of genes in this class. We found that different functional classes of human genes have different evolutionary ages, ranging from 894 million years for housekeeping genes to 50 million years for BMC GSTSE-X non-coding sequences. This reflects the different evolutionary history of different functional gene classes. The curves of the older gene classes occupy the higher-left position and those of younger gene classes occupy the lower-right position on the distribution curves (Figs 1 – 7 ). The slope of the curves changes along the evolutionary timeline. This suggests that the rate of origin of novel genes differed during different periods of evolution. Thus, the slope of all curves of clusters I and II, including the housekeeping gene ages distribution curve, increases sharply during the period between the origin of Bilateria and the origin of Chordata , when many new cell types and morphological novelties originated. About 20% of all orthologs emerge during this period. Trends of the curves during the period of the Cambrian explosion (~543–~508 Ma), when most major animal phyla appeared in the fossil record 22 , suggest that this radiation was preceded and followed by the extensive origin of novel genes (Figs 2 – 5 ). We see the last considerable increase in the origin of new genes 6 Ma ago, between Homininae and H. sapiens , when 15% of CT-X antigen genes, 10% of BMC GSTSE protein-coding genes, 17% of BMC GSTSE non-coding sequences and 25% of BMC GSTSE-X non-coding sequences originated (Figs 6 and 7 ). It is known that housekeeping genes represent the oldest gene class in existing cells and evolve more slowly (according to their Ka/Ks rates) than tissue-specific genes 23 , 24 . We found that the class of human housekeeping genes as described previously in 25 also contains evolutionarily younger genes, i.e. housekeeping genes continue to originate in the course of evolution, although at a relatively slower rate than genes in other functional gene classes (see the slope of the corresponding curve). But since the class of housekeeping genes is large (7367 genes according to Uhlen et al . 25 ), even in humans 117 housekeeping genes originated, according to our data. The intensive increase in the number of oncogenes began between Opisthokonta and Bilateria (25% of oncogenes), which coincided with the origin of multicellularity. This suggests a role for oncogenes in the origin of multicellular organisms. The other important jumps in the origin of oncogenes occur between Bilateria and Chordata (26%) and between Chordata and Euteleostomi (30%), which were periods of great morphological changes. Thus 83% of oncogenes originated between Opisthokonta and Mammalia . Our data correspond to the results of phylostratigraphic tracking of cancer genes, which suggest a link to the emergence of multicellularity 26 . But our data also show a considerable increase in the proportion of oncogenes and tumor suppressor genes before and beyond the emergence of vertebrates (Figs 2 and 3 ), while Domazet-Loso and Tautz described significantly lower origination of founder genes related to cancer beyond the emergence of vertebrates.
This difference may be due to a difference in methodology: Domazet-Loso and Tautz studied the emergence of cancer-related domains, while the ProteinHistorian tool, which we used, studies the origin of full-size proteins, in our case oncoproteins and tumor suppressor proteins. While the origin of the oncogene class, according to our data, is related to the origin of multicellularity, many differentiation genes were co-opted from unicellular ancestors (Fig. 3 ). Today, genes that control metazoan development and differentiation are found in Opisthokonta , suggesting that multicellularity evolved from unicellular opisthokont ancestors 27 , 28 , 29 . The slope of the differentiation gene ages distribution curve supports this notion. According to our data, 11% of human differentiation genes are conserved in Opisthokonta (Fig. 3 ). The gene classes studied in this paper form three clusters, both visually and based on hierarchical cluster analysis. Each cluster contains curves with the least difference in gene age distributions. The first cluster includes the gene age distribution curves of housekeeping genes, oncogenes, tumor suppressor genes, and differentiation genes. This cluster is the oldest, with evolutionary ages of gene classes from 894 Ma (housekeeping genes) to 693 Ma (differentiation genes). It is not homogeneous, because the curve of housekeeping gene ages is separate from the other curves of the cluster, and the differentiation gene class is stochastically younger than the housekeeping gene class. On the other hand, the gene age distribution curves of oncogenes, tumor suppressor genes and differentiation genes almost overlap. The removal of the housekeeping gene class from the bootstrap analysis does not destroy cluster I, but even increases its bootstrap reliability (Figs 8 and 9 ). It has long been known that some oncogenes are very ancient 30 , 31 , 32 , 33 , 34 . But to our knowledge this paper is the first indication in the literature that oncogenes represent the most ancient class of genes in the human genome, with the exception of housekeeping genes. The other interesting piece of data is that tumor suppressor genes and differentiation genes coevolve with oncogenes. The fact that orthologs of oncogenes, tumor suppressor genes and differentiation genes belong to the same cluster and their distribution curves almost overlap means that they evolve concurrently, as predicted earlier 2 , 3 , 4 , 5 . Moreover, we found that the differentiation, onco-, and tumor suppressor gene classes partially overlap (Supplementary Dataset 7 ), and the pairwise intersection and subtraction gene subclasses co-evolve with the main gene classes (Supplementary Figs 1 – 3 ). Overlapping of gene classes means that some genes have two (or more) functions, and may belong to two (or more) functional gene classes. It is known that a gene may function in several processes and contain exons that determine diverse molecular functions and biological processes 35 . The existence of the diff x onco and diff x TSG subclasses confirms our prediction of the co-evolution of differentiation, onco-, and tumor suppressor functions even at the single gene level. An example of a gene with dual functions is TGFb. It is known that TGFb may function as a tumor promoter or a tumor suppressor. This phenomenon is known as the “TGFb paradox” 36 . In the gene classes studied in this paper, TGFb is found in the oncogene, differentiation and tumor suppressor gene classes; in fact, it has a triple function. Another example is the Wnt gene, which was discovered as a proto-oncogene 37 .
On the other hand, the Wnt gene family encodes a group of cell-signaling molecules that participate in vertebrate and invertebrate development. Wnt protein sequences have been conserved during a billion years of evolution 38 . The Wnt gene is found in the differentiation gene and oncogene classes that we studied in this paper. The existence of such dual-function genes and other data support our hypothesis that hereditary tumors at early or intermediate stages of progression might participate in the evolutionary origin of new differentiated cell types 4 , 5 . Our prediction that there should be a general correspondence between the number of oncogenes and the number of cell types is also supported by other existing data. Thus, the TAG database, which we used in this study, currently contains 245 human oncogenes, of which 224 are found by ProteinHistorian. Domazet-Loso and Tautz used other data sets (Sanger Cosmic, the NCBI Entrez section in CancerGenes, the CancerGenes and the Network of Cancer Genes (NCG)). They found 380 oncogenes in these databases 26 . On the other hand, current estimates of the number of cell types in humans range from 240 39 , 40 to 411 cell types 41 . Supplementary Dataset 8 contains a table of correspondence between the number of oncogenes and the number of cell types in different multicellular organisms. That is, the general correspondence between the number of cell types and the number of oncogenes does exist, as was predicted in 2 , 3 . It is noteworthy that when such correspondence was first predicted in 1987, only 20 oncogenes had been described 42 , and by 1996 – only 70 oncogenes 43 . We further hypothesized that at least three different classes of genes are necessary for the origin of a new cell type in evolution: oncogenes, tumor suppressor genes, and evolutionarily novel genes which determine a new function 5 . The existence of cluster I supports our hypothesis of the co-evolution of differentiation, onco-, and tumor suppressor genes 5 . The bootstrap values are always the highest for differentiation, onco-, and tumor suppressor genes. This strongly supports the existence of cluster I and the co-evolution of the differentiation, onco-, and tumor suppressor gene classes, although the number of protein-coding tumor suppressor genes (TSGene database, 1018 genes) and differentiation genes (Gene Ontology, 3697 genes) is higher than the number of oncogenes (TAG database, 245 genes). The existence of cluster I, and particularly the clustering of differentiation genes and tumor suppressor genes, also supports the differentiation theory of cancer 44 . According to this theory, cancer is abnormal programming of gene function during cell differentiation. The loss of tissue-specific functions (e.g. due to mutations of the corresponding genes) is connected with tumors. Terminal differentiation is incompatible with tumors, i.e. it has a tumor suppressor function. The second cluster occupies the intermediate position between cluster I and cluster III, with evolutionary ages of gene classes between 450 Ma (homeobox genes) and 220 Ma (BMC GSTSE protein-coding genes). Cluster II is located mainly below the second control curve, i.e. the curve of all protein coding genes. It is extremely interesting that on the evolutionary timeline the distribution curves of gene ages of homeobox and apoptosis genes are separated from those of differentiation genes by a period of several hundred million years, i.e.
the origins of the genes responsible for differentiation and of those responsible for organogenesis are widely separated in evolution. Thus, before Bilateria , almost 30% of differentiation genes originated, and only 10% of homeobox genes. Half of differentiation genes originated by 693 million years ago, and half of homeobox genes – by 450 million years ago. In Mammalia 87% of differentiation genes and 73% of homeobox genes are represented. Indeed, the processes of differentiation and organogenesis are separated in evolution. For example, the thyroid gland was diffuse in the common ancestor of vertebrates, and it still has a diffuse nature and lacks a capsule in cyclostomes and in teleostean fishes 45 , 46 , 47 , 48 . In mammals, including humans, a diffuse endocrine system and diffuse, unencapsulated bundles of lymphatic cells still exist. Nevertheless, during certain periods of the evolutionary timeline the curves of cluster I and cluster II behave in a similar manner. E.g. between Bilateria and Chordata the curves of cluster I and cluster II demonstrate a similar jump of about 20%, although in cluster II this jump starts from a much lower level. Finally, the third cluster is the youngest, with evolutionary ages between 130 Ma and 50 Ma. This cluster includes gene classes expressed predominantly in tumors – CT-X genes, and BMC GSTSE and BMC GSTSE-X non-coding sequences. Genes belonging to this cluster continued to originate during the last 90 Ma, and even during the last 6 Ma, as shown in Figs 6 and 7 . They also evolve more rapidly than other gene classes (reviewed in 5 ). The youngest during the last 6 Ma period are the tumor-specifically expressed non-coding sequences located on the X chromosome, discovered at the Biomedical Center by global subtraction of cDNAs of all known normal libraries from cDNAs of all known tumor libraries 10 , 12 . We already described the evolutionary novelty of the CT-X antigen gene class earlier 19 . Later other authors reproduced our results with appropriate reference to our original paper 49 . Here we confirmed the evolutionary novelty of the CT-X gene class using the current upgraded database of CT genes – CTDatabase, and with another method – ProteinHistorian. In this paper, we also described a new class of TSEEN genes – BMC GSTSE ncRNA genes. In our other work we discovered a new long non-coding RNA (lncRNA) – OTP-AS1 ( OTP antisense RNA 1) 50 , which belongs to cancer/testis sequences. Statistical analysis supported the existence of two classes of TSEEN genes – the CT-X gene class and the BMC GSTSE ncRNA gene class (Supplementary Dataset 1 ), which constitute cluster III. Cluster III was stochastically younger than the combination of clusters I and II (Supplementary Datasets 5 and 6 ). Reduced cluster III, composed of BMC GSTSE-X ncRNA and CT-X genes, demonstrates perfect bootstrap reliability (Fig. 9 ). Thus at least three evolutionary categories of gene classes are expressed in human tumor cells: evolutionarily old (e.g. oncogenes), evolutionarily young or novel (e.g. CT-X genes and BMC GSTSE non-coding sequences) and intermediate-age gene classes (e.g. BMC GSTSE protein-coding genes). But even evolutionarily older gene classes contain evolutionarily novel genes, for example, the oncogenes CT45A1 and TBC1D3 51 , 52 , 53 (see also the discussion of evolutionarily novel housekeeping genes above). On the contrary, even evolutionarily younger gene classes contain evolutionarily older genes (10% of all genes in the CT-X and BMC GSTSE-X ncRNA gene classes).
The data presented in this paper support and extend the concept of tumor-specifically expressed, evolutionarily novel ( TSEEN ) genes, formulated in 3 , 4 , 5 and confirmed in 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 . From the data presented in this paper we can see that whole classes of genes (e.g. CT-X antigen genes and BMC GSTSE non-coding sequences) can be tumor-predominantly expressed and evolutionarily young or novel. Thus the data presented in this paper confirm two predictions of our hypothesis of the possible evolutionary role of tumors, i.e. the concurrent evolution of oncogenes, tumor suppressor genes and differentiation genes, and the existence of tumor-specifically expressed, evolutionarily novel ( TSEEN ) gene classes. This may be important for better understanding of tumor biology, in particular of the possible evolutionary role of tumors as described in 5 . Methods The following public databases were used as sources of human gene classes in this study: housekeeping genes – The Human Protein Atlas; oncogenes – TAG database; tumor suppressor genes – TSGene; differentiation genes – GeneOntology; HomeoBox genes – HomeoDB; apoptosis genes – DeathBase; cancer-testis (CT) antigen genes – CTDatabase; BMC GSTSE protein-coding genes and non-coding sequences – Biomedical Center Database; and all annotated human protein coding genes – Genome assembly GRCh38 (21694 genes). CT antigen genes were divided into two groups: autosomal genes and genes located on the X chromosome. BMC GSTSE non-coding sequences located on the X chromosome were also studied separately. This was done because the X chromosome contains relatively more evolutionarily novel genes than the autosomes 5 . Housekeeping genes are 7367 genes expressed in all analyzed tissues in the Human Protein Atlas 25 . This database contains information for a large majority of all human protein-coding genes regarding the expression and localization of the corresponding proteins, based on both RNA and protein data. The Atlas contains information about 44 different human tissues and organs 25 . The TAG database (Tumor-Associated Gene database) (245 oncogenes) was designed to utilize information from well-characterized oncogenes and tumor suppressor genes to facilitate cancer research. All target genes were identified through a text-mining approach applied to the PubMed database. A semi-automatic information retrieving engine collects specific information on these target genes from various resources and stores it in the TAG database. At the current stage, the TAG database includes 245 oncogenes 54 , which were used in the ProteinHistorian analysis (see below). The database we used was last modified on 2014.10.03. The TSGene 2.0 database contains 1217 human tumor suppressor genes (1018 coding and 199 non-coding genes) curated from a total of over 5700 PubMed abstracts 55 . In the ProteinHistorian analysis we used only the 1018 protein-coding tumor suppressor genes. Differentiation genes (3697 genes) were obtained by a manual search for “differentiation” in the Gene Ontology database 35 . The homeobox gene database (HomeoDB2) (333 genes) is a manually curated database of homeobox genes and their classification. HomeoDB2 includes all homeobox loci from 10 animal genomes (human, mouse, chicken, frog, zebrafish, amphioxus, nematode, fruitfly, beetle and honeybee) plus tools for downloading sequences, comparison between different species and BLAST search 56 , 57 . We used the database version last updated on 2011.08.08.
Deathbase (53 genes) is a database of proteins involved in different cell death processes. It aims to compile relevant data on the function, structure and evolution of this important cellular process in several organisms (human, mouse, zebrafish, fruitfly and worm). Information contained in the database is subject to manual curation 58 . The database was last updated in 2011. CTdatabase (286 genes) provides basic information including gene names and aliases, RefSeq accession numbers, genomic location, known splicing variants, gene duplications and additional family members. Gene expression at the mRNA level in normal and tumor tissues has been collated from publicly available data obtained by several different technologies. Manually curated data related to mRNA and protein expression, and antigen-specific immune responses in cancer patients are also available, together with links to PubMed for relevant CT antigen articles 59 . We used the update of 2017. To construct the BMC database of sequences that are expressed in tumors but not in normal tissues, the normal EST set was subtracted in silico from the tumorous EST set. This approach is known as computer-assisted differential display (CDD). In total, 4564 cDNA libraries categorized as “tumorous” and 2304 “normal” libraries were used in the CDD experiments. 251 EST clusters with tumor-predominant expression were described in 10 , and 196 clusters – in 12 . From these clusters, 60 protein-coding genes and 121 non-coding sequences were selected for analysis. All annotated human protein coding genes (21694 genes) were obtained from Genome assembly GRCh38 60 with the Ensembl tool 61 . The genome assembly was submitted on 2013.12.17. The ProteinHistorian tool was used to perform the homology search in genomes of different taxa. ProteinHistorian is an integrated web server, database and set of command line tools which estimates the phylogenetic age of proteins based on a species tree, several external datasets of protein family predictions from the Princeton Protein Orthology Database (PPOD) 62 and two algorithms for ancestral family reconstruction (Dollo and Wagner parsimony) 63 . The ProteinHistorian tool searches for orthologs in 34 completely sequenced eukaryotic and prokaryotic genomes from 16 taxa in the human lineage (Cellular Organisms, Eukaryota, Opisthokonta, Bilateria, Deuterostomia, Chordata, Euteleostomi, Tetrapoda, Amniota, Mammalia, Theria, Eutheria, Euarchontoglires, Catarrhini, Homininae, and H. sapiens ). The species tree used in the analysis is presented in Supplementary Fig. 6 . Divergence time is estimated in millions of years ago (Ma) for each internal node in the species tree. It is important to note that a protein could have appeared at any time along the branch to which it is assigned, so the divergence time estimate reported is a lower bound. The ages are taken from the TimeTree database 20 . The TimeTree database collects estimates of divergence times among species from publications in molecular evolution and phylogenetics. These include phylogenetic trees scaled to time (timetrees) and, occasionally, tables of time estimates and regular text. The data were collected from more than 2300 studies published since 1987 20 .
The ProteinHistorian tool detected the following gene numbers in the databases mentioned above: The Human Protein Atlas (housekeeping genes) – 6789 genes; the TAG database (oncogenes) – 224 genes; TSGene (tumor suppressor genes) – 984 genes; GeneOntology (differentiation genes) – 3697 genes; HomeoDB (homeobox genes) – 231 genes; DeathBase (apoptosis genes) – 53 genes; CTDatabase (CT-antigen genes) – 187 genes, including 109 autosomal and 78 X-chromosome-located genes; Biomedical Center Database – 60 protein-coding genes; Genome assembly GRCh38 (all protein-coding genes) – 19911 genes. The nucleotide BLAST algorithm, the HMMER tool and an original Python script were used to analyze the ages of non-coding sequences. Orthologs were searched for in 25 completely sequenced eukaryotic and prokaryotic genomes (Supplementary list 1). The processing of datasets obtained with the ProteinHistorian tool was carried out with a Python script and the grep tool. The age of a gene is defined by the most recent common ancestor on the human evolutionary timeline whose genome contains genes with similar sequences, i.e. with a significant BLAST score (or HMMER E-value) 21. The age of a functional gene class (or cluster) is described by the distribution of the ages of the genes belonging to this class. For convenience, the age of a gene class can be measured numerically, in millions of years, as the median of this distribution, i.e. the time point on the human evolutionary timeline that corresponds to the origin of 50% of the orthologs of the functional gene class (Fig. 1). A probability distribution is stochastically smaller than another if its cumulative distribution function is larger than that of the other for each value of the argument. We say that a class of genes is stochastically younger than another if the age of this class is stochastically smaller than the age of the other class. Thus, we associate the stochastically-younger property of a gene class with its relative evolutionary novelty. Before statistically analyzing the relative evolutionary novelty of gene classes, we first evaluated stochastic differences in the ages of gene classes using the Kolmogorov-Smirnov distance to specify clusters based on complete linkage, and performed pairwise comparative statistical analysis using the Kolmogorov-Smirnov and Chi-square tests to discover statistically significant differences between the evolutionary ages of gene classes. We used appropriate contrasts and the Scheffé S-method of multiple comparison to verify the stochastic order in the evolutionary ages of different gene classes observed at all time points (taxa) from cellular organisms to humans. Thus, we applied a covariance-adjusted method to create efficient joint confidence intervals for differences of the empirical distribution functions at all available time points, with the covariance obtained from the weak convergence of the centered difference of the empirical distribution functions to the Brownian bridge process. The distribution of the maximum modulus of correlated normal distributions, required for the covariance-adjusted joint confidence interval, was obtained using the Monte Carlo method with 10⁶ (before clustering) and 10⁷ (after clustering) replications.
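To illustrate the age definitions used above, the sketch below computes the median class age and checks empirical-CDF dominance (stochastic ordering), together with the Kolmogorov-Smirnov statistic from SciPy; the input ages are invented toy values, not the study's data:

```python
import numpy as np
from scipy.stats import ks_2samp

def class_age_median(ages_ma):
    """Age of a gene class: the median of its genes' divergence times (Ma)."""
    return float(np.median(ages_ma))

def is_stochastically_younger(ages_a, ages_b):
    """Class A is stochastically younger than class B if A's empirical CDF
    lies at or above B's everywhere (A's ages concentrated at smaller values)."""
    a, b = np.sort(np.asarray(ages_a)), np.sort(np.asarray(ages_b))
    grid = np.union1d(a, b)
    ecdf = lambda x: np.searchsorted(x, grid, side="right") / x.size
    return bool(np.all(ecdf(a) >= ecdf(b)))

# Hypothetical divergence times in Ma for a 'young' and an 'old' gene class
young = np.array([90, 160, 160, 300, 400])
old = np.array([1500, 2100, 2100, 4290])
stat, p = ks_2samp(young, old)  # the KS distance also serves as the clustering metric
print(class_age_median(young), is_stochastically_younger(young, old), round(stat, 2))
```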
To check the bootstrap reliability of the obtained results, we resampled classes of the same size from the original classes independently 10000 times and performed an exploratory analysis for each of the gene-age curves, including the bootstrapped mean value, mean square error, median and quartiles for all taxon break points (Supplementary Dataset 9). At each replication we obtained the hierarchical classification based on the Kolmogorov-Smirnov distance, and we report the bootstrapped rates for all classes and for all nodes of the initial trees. Moreover, at each replication we checked each pair of bootstrapped gene-age curves for intersections, and we report the bootstrapped rates of stochastically larger and stochastically smaller cases. Some genes are included in several databases; this was taken into account in the statistical analysis. To investigate intersections of gene classes, for each pair of gene classes we report the observed number of genes belonging to both classes and the corresponding expected counts, calculated under the assumption that gene membership in the classes is independent (Supplementary Dataset 7.1). More precisely, with each gene class we associate a binary variable taking the value 1 if the corresponding gene belongs to the class and 0 otherwise; independent membership of genes in a pair of gene classes means that the corresponding variables are independent. Moreover, for each pair of gene classes we create the 2×2 contingency table and report P-values of the Chi-square and Fisher's exact tests, as sketched below (Supplementary Dataset 7.2). Several genes belong to three or even four gene classes. For triple and quadruple intersections of gene classes we report their counts and the share of the intersection in each of the classes (Supplementary Dataset 7.3). To check the reliability of the age-based classification of gene classes with respect to dual functionality, we added six subclasses to the classification: the subclass of genes belonging to both the differentiation and tumor suppressor gene classes (diff x TSG); the subclass of genes belonging to both the differentiation and oncogene classes (diff x onco); the subclass of genes belonging to the differentiation but not the tumor suppressor gene class (diff-TSG); the subclass of genes belonging to the differentiation but not the oncogene class (diff-onco); the subclass of genes belonging to the oncogene but not the differentiation gene class (onco-diff); and the subclass of genes belonging to the tumor suppressor gene but not the differentiation gene class (TSG-diff).
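For a single pair of gene classes, the independence test described above reduces to a 2×2 contingency table. A minimal sketch using SciPy; the counts below are hypothetical, chosen only to mirror the scale of the study's datasets:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def class_overlap_test(n_total, n_a, n_b, n_both):
    """Test independence of membership in two gene classes from the
    2x2 table: [[in both, A only], [B only, in neither]]."""
    table = np.array([[n_both, n_a - n_both],
                      [n_b - n_both, n_total - n_a - n_b + n_both]])
    expected_overlap = n_a * n_b / n_total  # expected count under independence
    chi2, p_chi2, _, _ = chi2_contingency(table)
    _, p_fisher = fisher_exact(table)
    return expected_overlap, p_chi2, p_fisher

# Hypothetical: 19911 genes, 3697 differentiation, 984 TSG, 260 in both classes
exp, p1, p2 = class_overlap_test(19911, 3697, 984, 260)
print(f"expected {exp:.0f}, chi-square P = {p1:.2g}, Fisher P = {p2:.2g}")
```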
A team of scientists from Peter the Great St. Petersburg Polytechnic University (SPbPU) studied the evolutionary ages of human genes and identified a new class of genes expressed in tumors: tumor-specifically expressed, evolutionarily novel (TSEEN) genes. This confirms the team's earlier theory about the evolutionary role of neoplasms. A report about the study was published in Scientific Reports. A tumor is a pathological new growth of tissue. Due to genetic changes, it has impaired cellular regulation and therefore defective functionality. Tumors can be benign or malignant. Unlike the latter, the former grow slowly, don't metastasize, and are easy to remove. Malignant tumors (cancer) are one of the primary causes of mortality in the world. The team from Saint Petersburg discovered a new class of evolutionarily novel genes present in all tumors, the so-called TSEEN (Tumor Specifically Expressed Evolutionarily Novel) genes. "The evolutionary role of these genes is to provide genetic material for the origin of new progressive characteristics. TSEEN genes are expressed in many neoplasms and therefore can be excellent tumor markers," said Prof. Andrei Kozlov, Ph.D. in Biology and head of the Molecular Virology and Oncology Laboratory at Peter the Great St. Petersburg Polytechnic University. The new research confirms a theory proposed by A. Kozlov earlier, according to which the number of oncogenes in the human body should correspond to the number of differentiated cell types. The theory also suggests that the evolution of oncogenes, tumor suppressor genes, and the genes that determine cell differentiation proceeds concurrently. It is based on the hypothesis of evolution through tumor neofunctionalization, according to which hereditary neoplasms might have played an important role during the early stages of metazoan evolution by providing additional cell masses for the origin of new cell types, tissues, and organs. Evolutionarily novel genes that originate in the DNA of germ cells are expressed in these extra cells. Prof. Kozlov also made reference to the article "Evolutionarily Novel Genes Are Expressed in Transgenic Fish Tumors and Their Orthologs Are Involved in Development of Progressive Traits in Humans" (2019), recently published by his laboratory. In that article, the team confirmed their hypothesis using transgenic fish tumors and fish evolutionarily novel genes. The orthologs of such genes are found in the human genome, but in humans they play a role in the development of progressive characteristics not encountered in fish (e.g. lungs, breasts, placenta, the ventricular septum of the heart, etc.). This supports the hypothesis about the evolutionary role of tumors. The studies referred to in the article lasted for several years, and their participants used a wide range of methods from the fields of bioinformatics and molecular biology. "Our work is of great social importance, as the cancer problem hasn't been solved yet. Our theory suggests new prevention and therapy strategies," said Prof. Kozlov. According to him, fighting cancer will require a new paradigm in oncology. TSEEN genes may be used to create new cancer test systems and antitumor vaccines.
10.1038/s41598-019-52835-w
Biology
Structure of amyloid protein offers clues to rare disease cause
Javier Garcia-Pardo et al, Cryo-EM structure of hnRNPDL-2 fibrils, a functional amyloid associated with limb-girdle muscular dystrophy D3, Nature Communications (2023). DOI: 10.1038/s41467-023-35854-0 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-35854-0
https://phys.org/news/2023-02-amyloid-protein-clues-rare-disease.html
Abstract hnRNPDL is a ribonucleoprotein (RNP) involved in transcription and RNA-processing that hosts missense mutations causing limb-girdle muscular dystrophy D3 (LGMD D3). Mammalian-specific alternative splicing (AS) renders three natural isoforms, hnRNPDL-2 being predominant in humans. We present the cryo-electron microscopy structure of full-length hnRNPDL-2 amyloid fibrils, which are stable, non-toxic, and bind nucleic acids. The high-resolution amyloid core consists of a single Gly/Tyr-rich and highly hydrophilic filament containing internal water channels. The RNA binding domains are located as a solenoidal coat around the core. The architecture and activity of hnRNPDL-2 fibrils are reminiscent of functional amyloids, our results suggesting that LGMD D3 might be a loss-of-function disease associated with impaired fibrillation. Strikingly, the fibril core matches exon 6, absent in the soluble hnRNPDL-3 isoform. This provides structural evidence for AS controlling hnRNPDL assembly by precisely including/skipping an amyloid exon, a mechanism that holds the potential to generate functional diversity in RNPs. Introduction Human heterogeneous ribonucleoprotein D-like (hnRNPDL) belongs to a class of conserved nuclear RNA-binding proteins (RBPs) that assemble with RNA to form ribonucleoproteins (RNPs). hnRNPDL acts as a transcriptional regulator and participates in the metabolism and biogenesis of mRNA 1, 2, 3, 4. Three isoforms of hnRNPDL are produced by alternative splicing (AS): hnRNPDL-1, hnRNPDL-2, and hnRNPDL-3 5 (Fig. 1a and Supplementary Fig. 1). hnRNPDL-2 is a 301-residue protein and the predominant isoform in human tissues 6. It consists of two consecutive globular RNA recognition motifs (RRM1 and RRM2), followed by a C-terminal low-complexity domain (LCD) that maps to residues ~201–285 and a nuclear localization sequence (PY-NLS) comprising residues 281–301. hnRNPDL-1 is a longer isoform of 420 amino acids containing an additional Arg-rich N-terminal LCD 5; it is less abundant than hnRNPDL-2 and is present mainly in brain and testis 6. hnRNPDL-3 is a shorter, minor isoform of 244 amino acids that lacks the N- and C-terminal LCDs but conserves the PY-NLS 7. Fig. 1: hnRNPDL-2 aggregates into ordered amyloid fibrils. a Domain organization of human hnRNPDL isoforms. The RNA recognition motifs are labelled as RRM (pink) and the N- and C-terminal low-complexity domains (LCD) are indicated as R-rich LCD (orange) and Y/G-rich LCD (blue), respectively. The region and sequence of exon 6 are also indicated, according to hnRNPDL-2 numbering. The D259 disease-associated amino acid is shown in red. NLS indicates the shared nuclear localization signal. b Representative confocal images of HeLa hnRNPDL KO cells transiently transfected with the different hnRNPDL isoforms (DL-1, DL-2 and DL-3). The empty GFP vector (GFP) was transfected as a control. The images depict the characteristic nuclear distribution of GFP-hnRNPDL fusions (green). Nuclear DNA was stained with Hoechst (blue). White arrows indicate the location of the nucleolus. In all cases, the nucleus contour is indicated with a white dashed line. Scale bar, 10 μm. c Th-T binding to hnRNPDL-1 (orange line), hnRNPDL-2 (blue line) and hnRNPDL-3 (purple line) after incubation at 37 °C and 600 rpm, pH 7.5 and 300 mM NaCl, for 2 days. Representative negative-staining TEM micrographs of the incubated solutions of (d) hnRNPDL-1, (e) hnRNPDL-2 and (f) hnRNPDL-3. Scale bar, 200 nm.
Note that only in (e) are individual amyloid filaments evident. In (b) and (d–f), results are representative of three independent experiments. Full size image A point mutation in hnRNPDL exon 6 causes autosomal dominant limb-girdle muscular dystrophy D3 (LGMD D3) 8, 9, 10, a rare disease characterized by slowly progressive proximal muscle weakness 11, 12. This mutation changes the conserved Asp259 in the C-terminal LCD to either Asn or His. Similarly, mutations of specific Asp residues to Asn or Val in the LCDs of hnRNPA1 and hnRNPA2 are linked to amyotrophic lateral sclerosis (ALS) and multisystem proteinopathy (MSP) 13, 14. However, unlike ALS and MSP patients, in whom hnRNPA1 and hnRNPA2 accumulate in cytoplasmic inclusions in muscular fibers 14, 15, most LGMD D3 patients do not exhibit nuclear or cytoplasmic protein inclusions 8. Structures of the amyloid fibrils formed by the hnRNPA1 and hnRNPA2 LCDs have recently been determined 16, 17. For both fibrils, the PY-NLS was embedded within the ordered core. This led to the suggestion that, under physiological conditions, binding of the import receptor karyopherin-β2 (Kapβ2) to the PY-NLS 18 would impede fibrillation, whereas its exposure under pathological conditions would allow amyloid formation 16. However, in these studies, constructs of the LCD, alone or fused to a fluorescent protein, were used to form the fibrils, and it is unknown whether the observed assemblies match those of the natural full-length proteins' fibrils. Here we present the cryo-electron microscopy (cryo-EM) structure of the fibrils formed by the full-length hnRNPDL-2 isoform at an overall resolution of 2.5 Å. These fibrils are stable and bind oligonucleotides, with the associated RRM domains building an exposed solenoidal coat wrapping the structured fibril. The fibril core is formed by a single, highly hydrophilic filament encompassing LCD residues 226–276, including Asp259. Modeling suggests that disease-associated mutations at this residue may have limited impact on fibril stability. Importantly, the fibril core does not include the PY-NLS. The hnRNPDL-2 fibril core precisely matches exon 6 (residues 224–280), which is alternatively spliced in a mammalian-specific manner 19. These AS events are frequent in RNPs, especially at their Y/G-rich LCDs 19, and they have been proposed to be a way of regulating protein function by controlling the formation of high-order assemblies 20; our results provide structural evidence supporting this hypothesis. Overall, we describe the structure of a full-length RNP in its fibrillar functional conformation, providing insight into the molecular bases of LGMD D3 and illustrating how AS can control RNP assembly by including/excluding amyloidogenic exons at their LCDs. Results hnRNPDL isoforms localize to the nucleus and exon 6 is key for their compartmentalization Human hnRNPDL AS renders three naturally occurring transcripts (Fig. 1a and Supplementary Fig. 1). When transfected into a HeLa hnRNPDL knockout (KO) cell line (Supplementary Fig. 2), the three ectopically expressed isoforms accumulated in the nucleus (Fig. 1b and Supplementary Fig. 3), consistent with their sharing a functional PY-NLS 7. hnRNPDL-1 and hnRNPDL-2 exhibited a granulated nuclear distribution and were excluded from the nucleolar regions. In contrast, hnRNPDL-3 was homogeneously distributed in both the nucleolus and the nucleoplasm.
This indicates that exon 6 (residues 224–280), shared by hnRNPDL-1 and hnRNPDL-2 and absent in hnRNPDL-3, is responsible for the differential intranuclear compartmentalization of the isoforms, delimiting regions with higher and lower protein concentration. hnRNPDL-2 forms stable and non-toxic amyloid fibrils We purified monomeric hnRNPDL-1, hnRNPDL-2, and hnRNPDL-3 by size-exclusion chromatography. Upon incubation, hnRNPDL-2 forms amyloid fibrils (Supplementary Fig. 4), in a reaction that is highly sensitive to the solution ionic strength (Supplementary Fig. 5b). At pH 7.5 and low salt (50 mM NaCl), aggregation was fast, rendering large assemblies that precipitated as the reaction progressed. In contrast, at 300 mM NaCl, the reaction exhibited characteristic sigmoidal kinetics, with the formation of highly ordered individual amyloid fibrils that strongly bind Thioflavin-T (Th-T) (Fig. 1c, e). A further increase in ionic strength (i.e. 600 mM NaCl) drastically delayed and reduced hnRNPDL-2 amyloid formation (Supplementary Fig. 5b). Thus, the fibrils formed at pH 7.5 and 300 mM NaCl were selected for structural studies. Importantly, under the same conditions, monomeric hnRNPDL-1 and hnRNPDL-3 did not aggregate into Th-T-positive assemblies after 24 h (Supplementary Fig. 4b), 48 h (Fig. 1c) or even after incubation for 7 days (Supplementary Fig. 6). Additionally, these two isoforms did not form amyloids upon incubation in the presence of 50 mM or 600 mM NaCl (Supplementary Fig. 5a, c). These results point to exon 6 as the protein region responsible for amyloid formation in hnRNPDL-2. In fact, exon 6 is missing in hnRNPDL-3, and the hnRNPDL-1 N-terminal Arg-rich LCD may effectively counteract the amyloidogenic propensity of exon 6, likely by diverting the assembly towards phase-separated condensates 3. Once formed, hnRNPDL-2 fibrils are stable and insensitive to salt or temperature (Supplementary Fig. 7). Remarkably, these in vitro formed structures are devoid of toxicity for different human cell lines at up to 0.4 mg/ml (Supplementary Fig. 8), which is consistent with the observation that >30% of endogenous hnRNPDL-2 accumulates in the detergent-insoluble fraction of wild-type HeLa cells (Supplementary Fig. 9). Cryo-EM structure determination of hnRNPDL-2 amyloid fibrils We used cryo-EM to investigate the molecular structure of hnRNPDL-2 amyloid fibrils (Fig. 2 and Supplementary Fig. 10). Two-dimensional (2D) classification yielded one single species of twisted fibrils. About 54,500 fibril segments from 1,114 micrographs were selected for helical reconstruction in RELION 3.1 21, 22, allowing us to obtain a three-dimensional (3D) density map of hnRNPDL-2 at an overall resolution of 2.5 Å (Fig. 2b, c and Supplementary Fig. 11). The fibril consists of a single protofilament in which hnRNPDL-2 subunits stack along the fibril (Z) elongation axis. The fibril forms a left-handed helix with a full pitch of ~357 Å, a helical rise of 4.82 Å, and a helical twist of −4.86°, featuring a typical cross-β amyloid structure (Fig. 2c–f). A molecular model could be unambiguously built de novo for the fibril core, spanning hnRNPDL-2 residues 226–276 (Fig. 2d, e) and strikingly matching the region of exon 6 (residues 224–280), which is absent in the non-amyloidogenic hnRNPDL-3 isoform (Fig. 1a). The fibril core hosts six individual β-strands (β1 to β6) connected by β-turns and loops, yielding a sort of M-shaped fold with an approximate overall footprint of 55 Å × 40 Å (Fig. 2c–e).
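As a consistency check, the stated pitch follows directly from the refined rise and twist; the short sketch below (our own helper, not part of the processing pipeline) reproduces the ~357 Å value:

```python
def helix_pitch(rise_A, twist_deg):
    """Full helical pitch (in Å) from the per-subunit rise and twist."""
    subunits_per_turn = 360.0 / abs(twist_deg)  # subunits per full rotation
    return rise_A * subunits_per_turn

# Refined hnRNPDL-2 parameters: rise 4.82 Å, twist -4.86 degrees
print(round(360.0 / 4.86, 1))           # ~74.1 subunits per 360-degree turn
print(round(helix_pitch(4.82, -4.86)))  # ~357 Å, matching the reported pitch
```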
Data collection and refinement statistics are summarized in Supplementary Table 1. Fig. 2: Structure of hnRNPDL-2 amyloid filaments. a Cryo-electron micrograph of hnRNPDL-2 filaments. Representative image from 1114 micrographs. The inset shows a representative reference-free 2D class average image of the hnRNPDL-2 filament. Scale bar, 50 nm. b Side view of the three-dimensional cryo-EM density map showing the structured core of an individual hnRNPDL-2 filament. The fibril pitch is indicated. c Detailed view of the cryo-EM density map showing the layer packing in the hnRNPDL-2 filament (left panel). The filament rise/subunit is indicated in Å. Rendered side view of the secondary structure elements accounting for three stacked rungs comprising residues N226-D276 (right panel). The distance between the stacks is indicated in Å. d Sequence of hnRNPDL exon 6, according to hnRNPDL-2 numbering. The observed β-strands that build the hnRNPDL-2 fibril amyloid core are indicated. e Cryo-EM density map of a layer of the hnRNPDL-2 amyloid core. Fitting of the atomic model for residues N226-D276 is shown on top. The D259 disease-associated amino acid is highlighted in red. Water molecules are shown as red spheres. f Representative 3D class average image of the hnRNPDL-2 amyloid fibril. Full size image Structural features of hnRNPDL-2 amyloid fibrils The core of hnRNPDL-2 amyloid fibrils, like exon 6, maps to the C-terminal Y/G-rich LCD, comprising residues ~201–285. Accordingly, the filament amino acid composition is skewed towards Gly (27%) and Tyr (25%), which together comprise >50% of the residues in the structure, and neutral-polar residues, with Asn, Gln, and Ser summing to 32%. Thus, each layer of the amyloid core is essentially stabilized through intra-subunit hydrogen bonds involving buried polar side chains, main-chain peptide groups, and ordered solvent molecules (Fig. 3a). The individual protein subunits pack on each other through an extensive hydrogen-bonding network holding the in-register β-strands together, in the typical cross-β arrangement (Supplementary Fig. 12a). Additionally, the Tyr aromatic side chains build ladders of π-π stacking interactions along the fibril elongation direction, stabilizing the inter-subunit association interface. Tyr residues are approximately equally distributed between the filament interior and the surface. The N-terminal end of the core region contains two additional aromatic residues, W227 and F231, which also form individual ladders of π-π stacking interactions along the fibrillar axis; surprisingly, F231 is exposed on the fibril surface rather than buried within the core. Other hydrophobic residues such as Val, Leu, and Ile are absent, whereas they are frequently observed in the structures of pathological fibrils 23. Gly residues are dispersed throughout the sequence, with their peptide units also involved in hydrogen bonding. The main chain of each individual subunit stacked in the fibril does not lie in a plane, with a 10 Å distance between the top of the β1 strand and the edge of the turn immediately after β4 (Fig. 2c). As a result, each subunit (i) interacts not only with the layers directly above (i + 1) and below (i − 1), but also with layers (i + 3) and (i − 3). This is best exemplified by the formation of a strip of hydrogen bonds involving the side chains of Q237 in layer i and Y258 in layer i − 3 (Supplementary Fig. 12b, c). Fig. 3: Overall structure of the hnRNPDL-2 fibril core. a Schematic representation of one cross-sectional layer of the hnRNPDL-2 fibril core.
The location of the β-strands is indicated with thicker arrows. Polar and hydrophobic residues are colored in green and white, respectively. Glycine residues are colored in yellow. Aspartic residues are colored in red. b Sequence alignment of hnRNPDL orthologues showing evolutionary conservation of residues within exon 6. The β-strands that build the hnRNPDL-2 fibril amyloid core are indicated. The mutated D259 and surrounding conserved residues are shown in red and blue, respectively. Conserved residues are indicated with an asterisk. c Surface representation showing the electrostatic surface potential of the hnRNPDL-2 fibril at pH 7, with a ribbon representation of one subunit on top. Electrostatic potential maps were calculated using the APBS server and visualized using the APBS plugin in PyMol (Schrödinger, NY, USA) 52, 59. Negative and positive map potential values are colored in red and blue, respectively, according to the kBT·ec⁻¹ unit scale (kB is Boltzmann's constant, ec is the charge of an electron, and T is 298 K). The location of the six β-strands (β1 to β6) is indicated. The side chain of D259 is shown as blue sticks and labeled. d Stick model of one hnRNPDL-2 fibril rung showing the position of evolutionarily conserved residues (colored in pink). The D259 disease-associated residue is highlighted in red. e Upper panel, top view of the Y255 to Y260 segment showing the interlayer interactions between Y260 and G256. Lower panel, close-up view perpendicular to the fibril axis showing the Y260 packing and its interactions with the main chain of G256. Y260 side chains are shown as sticks (colored in cyan) over the cryo-EM map shown as a grey mesh. f Effect of LGMD D3-associated mutations on hnRNPDL-2 fibril formation. The WT hnRNPDL-2 and the D259H and D259N mutant proteins were incubated under the same conditions and subjected to separation on a glycerol cushion and SDS–PAGE analysis (fractions from top to bottom, 1–6). Bottom panels, representative NS-TEM images of amyloid filaments from the bottom fraction. This experiment was performed twice with similar results. Full size image Unlike most amyloid structures, the conformation of hnRNPDL-2 residues N226-D276 outlines pores that define two internal water channels. The larger of these channels spans the fibril's entire length and contains additional densities corresponding to two ordered water molecules per layer, which are H-bonded to nearby polar groups of Y239 and N241 and to peptide N-atoms of G256 (Supplementary Fig. 12d). According to our experimental estimates, the cryo-EM density around the channel shows one of the highest resolutions within the entire fibril structure, indicating high local stability (Supplementary Fig. 11a). On average, each fibril layer encloses ten ordered water molecules. hnRNPDL-2 fibrils are mainly composed of hydrophilic amino acids and display more polar surfaces than the fibrils of typical pathogenic proteins (Supplementary Fig. 13). The solvent-accessible surface area (SASA) of the hnRNPDL-2 fibril upper layer is 3458 Å², 48% of which is covered by polar atoms, with polar residues making up 64% of the buried area; both values are significantly higher than in disease-associated fibrils (Supplementary Table 2). The SASA of internal hnRNPDL-2 layers is, on average, 1294 Å², reflecting a burial of 63% of the surface relative to the end solvent-exposed layers, with 57% of the exposed atoms being polar.
This endows the lateral surface of the fibril with a highly hydrophilic character (Supplementary Fig. 13a). In contrast, in the lateral surfaces of pathogenic fibrils, non-polar atoms are predominant (Supplementary Fig. 13). The percentage of polar residues in the buried area (67%) is exceedingly high when compared with the values observed for disease-associated fibrils (Supplementary Table 3). The proportions of exposed polar atoms and polar buried residues in the inner layers of hnRNPDL-2 are also higher than in the hnRNPA1 and hnRNPA2 LCD fibrils. Overall, hnRNPDL-2 appears to assemble into one of the most hydrophilic fibrils described so far, providing donor and acceptor groups for potential interactions. We calculated the solvation free energy of folding (ΔG) and ΔG/residue for hnRNPDL-2 fibrils (Supplementary Fig. 14 and Supplementary Table 4). These values are significantly lower than those of disease-associated fibrils, independently of whether the latter were obtained in vitro, ex vivo or upon in vitro seeding with ex vivo fibrils, indicating that, despite their irreversibility, hydrophilic hnRNPDL-2 fibrils are less stable. Importantly, in contrast to the hnRNPA1 and hnRNPA2 LCD fibrils 16, 17, the PY-NLS in hnRNPDL-2 fibrils is adjacent to, but not part of, the structural core (Supplementary Fig. 15), suggesting that binding of Kapβ2 would not necessarily hamper fibrillation. Disease-causative hereditary mutations in the hnRNPDL-2 fibril structure Two missense mutations in hnRNPDL exon 6, D259N and D259H, are linked to LGMD D3 8, 9, 10. D259 is strictly conserved in the hnRNPDL C-terminal LCD of vertebrates (Fig. 3b) and maps to the end of the loop connecting β4 to β5 in the hnRNPDL-2 fibril (Fig. 3c). This residue is solvent-exposed, with 54% of its surface accessible in the inner fibril layers, yielding a negatively charged ladder along the fibril surface (Fig. 3c, d). Apart from D259, the fibril core contains three additional exposed Asp residues, D236, D249, and D276, and no positively charged amino acid, which results in a calculated single-layer pI of 3.3 and a highly anionic patch extended along the fibrillar axis. This distinguishes hnRNPDL-2 from the hnRNPA1 and hnRNPA2 LCD fibril cores, with calculated pIs of 6.0 and 8.4, respectively. As in hnRNPDL-2, mutations of a conserved Asp at the fibrillar core of the hnRNPA2 LCD to Val (D290V), or to Asn or Val in the hnRNPA1 LCD (D262N/V), are disease-associated. Virtual mutations of the respective amyloid cores 16, 17 indicated that they would render more stable fibrils by removing charge repulsions along the structure, explaining the presence of inclusions of these RNP variants in the tissues of ALS or MSP patients 14. However, this mechanism would not necessarily apply to hnRNPDL-2: firstly, because the mutation causing the most severe and earliest LGMD D3 onset, D259H, does not neutralize the negative charge but reverses it; and secondly, because the D259N mutation still leaves three other Asp residues exposed to solvent, the core surface remaining highly acidic, with a calculated pI of 3.3. Indeed, modeling the impact of the D259H mutation in the hnRNPDL-2 fibril predicts that it would be destabilizing, whereas the stabilization predicted for D259N is lower than for the D262N/V or D290V mutations in hnRNPA1 and hnRNPA2 fibrils, respectively (Supplementary Table 5).
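Single-layer pI values of the kind quoted above can be approximated from sequence alone. A rough sketch with Biopython's ProtParam; note that CORE_SEQ is a hypothetical Gly/Tyr-rich placeholder with four Asp and no basic residues, standing in for the actual 226-276 core sequence, and that this simple calculation ignores the fibrillar context:

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder 51-residue sequence (hypothetical, NOT hnRNPDL-2 residues 226-276):
# Gly/Tyr-rich, four Asp, no Lys/Arg/His, mimicking the reported composition
CORE_SEQ = "NWGGYFGNYGGYDGYGSYNGGYGGYGDYGSNYGYGGQYNGDYGGYNSYGYD"

def layer_pI(seq):
    """Estimate the isoelectric point of one fibril rung from its sequence."""
    return ProteinAnalysis(seq).isoelectric_point()

print(round(layer_pI(CORE_SEQ), 1))  # strongly acidic, as expected for 4 Asp and no basic residues
```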
It has recently been proposed that mutations promoting the conversion of low-complexity aromatic-rich kinked segments (LARKS) into steric zippers underlie irreversible aggregation and disease 24. The impact of the D290V mutation in the hnRNPA2 LCD seems to follow this mechanism 24. However, in hnRNPDL, the D259H/N mutations do not increase the propensity to form steric zippers above the detection threshold (Supplementary Fig. 16). These analyses suggested that in LGMD D3, disease-associated variants may not necessarily act by increasing fibril stability or amyloid propensity. We produced and purified the monomeric hnRNPDL-2 D259H and D259N variants and incubated them under the same conditions used to produce the wild-type (WT) fibrils described in the previous section. When the species in the incubated samples were separated on a glycerol cushion, the formation of high-molecular-weight species was dramatically reduced in the disease-associated mutants compared with the WT (Fig. 3f). Moreover, TEM imaging of the high-molecular-weight fraction showed that, in contrast to the copious, long fibrils present in the WT sample, the mutants exhibited scarce and significantly shorter fibrils (Fig. 3f) that bound Th-T poorly (Supplementary Fig. 17). WT hnRNPDL-2 exhibited a strictly bimodal distribution, being present only in the low- and high-molecular-weight fractions, indicating that these constitute the predominant metastable states of hnRNPDL-2. These results question an amyloid origin of LGMD D3 and are consistent with the absence of mutated protein inclusions in the atrophied muscle of patients 8. hnRNPDL-2 amyloid fibrils bind nucleic acids hnRNPDL-2 contains two N-terminal tandem RRM domains (RRM1 and RRM2) that are thought to be functional RNA/DNA-binding motifs 25 (Fig. 1a and Supplementary Fig. 18). As shown in Fig. 4a, b, the RRMs were visible in the 2D classes of hnRNPDL-2 fibrils, when a small number of segments were averaged, as additional fuzzy globular densities around the filament core; such densities are averaged out in the 3D reconstruction owing to the irregular locations of the RRMs along the fibril. Importantly, the RRMs surrounding the fibrillar core could be evidenced by immunogold labeling in TEM images (Fig. 4a). We therefore propose a model whereby the structured fibril core, built by exon 6 residues, is decorated by a fuzzy coat of flexible RRMs (Fig. 4c). Fig. 4: Structure of the RNA/DNA-binding domains from hnRNPDL-2 and their location in the amyloid filaments. a Top panel, representative 2D class average image showing hnRNPDL-2 fibrils with an approximate solenoid coat of globular domains. The arrows indicate the location of the density assigned to the RRMs. Bottom panel, negative-stain electron microscopy (EM) micrographs of hnRNPDL-2 filaments bound to nanogold antibodies (~10 nm, white arrow) targeting the location of the hnRNPDL-2 globular domains. The N-terminal 6xHis tag was labelled using nanogold-conjugated secondary antibodies against anti-6xHis antibodies produced in mouse. Representative image from two independent experiments. Scale bar, 50 nm. b 3D unsharpened density map reconstruction of the hnRNPDL-2 fibril. Densities for the amyloid core and putative RRMs are colored in blue and pink, respectively. The 10x filament rise/subunit is indicated in Å. c Schematic diagram showing the proposed organization of the RRM domains (pink) around the fibril core (cyan).
d Cross-sectional view of the unsharpened cryo-EM map of the hnRNPDL-2 fibril, with the superimposed hnRNPDL-2 amyloid core in a ribbon representation. The side chain of F231 is represented as sticks. e Model structure of the N-terminal RNA-binding domains RRM1 and RRM2 from hnRNPDL-2, generated with AlphaFold2 60. f Electrophoretic mobility shift assay (EMSA) of soluble hnRNPDL-2 with a Fluorescein-labelled oligonucleotide (F-ssDNA). The 7-mer ssDNA was incubated with the soluble form of hnRNPDL-2 at the indicated protein concentrations. The EMSA was performed three times with similar results. g Binding affinity of soluble hnRNPDL-2 for the 7-mer fluorescent ssDNA (F-ssDNA) determined by the EMSA assay. Data are shown as mean ± SEM (n = 3 independent experiments). h Binding of the 7-mer fluorescent ssDNA (F-ssDNA) to preformed hnRNPDL-2 amyloid filaments. Data are shown as mean ± SEM (n = 4 independent experiments for all protein concentrations, except for 10 and 25 µM with n = 3 independent experiments). i Representative confocal microscopy image of hnRNPDL-2 amyloid fibrils bound to fluorescent ssDNA (+F-ssDNA). Fibrils without F-ssDNA are shown as the control condition. Representative image from three independent experiments. In (g) and (h), data were fitted to a one-site specific binding mechanism with Hill slope using GraphPad Prism 57. Full size image In this respect, it should be noted that residue F231 appears exposed to solvent, giving rise to a hydrophobic ladder along the fibril's hydrophilic surface. Closer inspection of the density map, however, suggests that the N-terminal arm of the fibril core, including F231, interacts with other regions of the hnRNPDL-2 assembly, likely the fuzzy RRMs (Fig. 4d), which would preclude the entropically disfavored interaction of F231 with the solvent. Previous studies have shown that hnRNPDL actively participates in transcription and AS regulation 1, 2, 3, 4. A prior investigation indicated that this protein binds to oligonucleotides with the consensus sequence ACUAGC, as deduced from the screening of a set of in vivo identified RNA ligands 26. To confirm that soluble hnRNPDL-2 can bind nucleic acids, we performed electrophoretic mobility shift assays (EMSA). We found that the soluble protein binds to a 7-mer fluorescently labeled ssDNA oligonucleotide displaying the ACUAGC motif with an apparent dissociation constant (Kd) of 5.9 ± 0.78 µM (Fig. 4f, g), whereas the affinity of the protein for an equivalent RNA sequence was lower (Supplementary Fig. 19). We incubated the same ssDNA oligonucleotide with preformed hnRNPDL-2 amyloid fibrils and quantified the amount of oligonucleotide bound to the fibrils after centrifugation. As shown in Fig. 4h, the fibrils bind ssDNA significantly and in a concentration-dependent manner, with a Kd of 2.1 ± 0.36 µM. This interaction was confirmed by confocal microscopy, as incubated hnRNPDL-2 amyloids appeared highly fluorescent due to the incorporation of the fluorescein-labeled ssDNA (Fig. 4i). It could be that longer nucleic acid sequences, such as those found in nature, would result in tighter binding, because multivalency allows weak interactions to combine into much stronger ones, a phenomenon known as avidity. Unfortunately, specific long RNA/DNA hnRNPDL targets remain to be identified.
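The one-site specific binding model with Hill slope used for these Kd estimates is straightforward to reproduce outside GraphPad. A sketch with SciPy, using invented titration data rather than the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_hill(x, bmax, kd, h):
    """One-site specific binding with Hill slope: Y = Bmax * X^h / (Kd^h + X^h)."""
    return bmax * x**h / (kd**h + x**h)

# Hypothetical fraction-bound data for an ssDNA titration (not the paper's data)
conc = np.array([0.05, 0.1, 0.5, 1, 2.5, 5, 10, 25, 50, 100])  # ligand, in µM
bound = np.array([0.02, 0.04, 0.15, 0.30, 0.52, 0.68, 0.82, 0.93, 0.97, 0.99])

params, cov = curve_fit(one_site_hill, conc, bound, p0=[1.0, 2.0, 1.0])
bmax, kd, h = params
kd_err = np.sqrt(np.diag(cov))[1]  # standard error of the Kd estimate
print(f"Kd = {kd:.1f} ± {kd_err:.2f} µM, Hill slope = {h:.2f}")
```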
In any case, our data demonstrate that hnRNPDL-2 amyloid fibrils retain the ability to bind nucleic acids, in keeping with a potential functional role for these self-assembled structures. We have shown that the amyloid fibril surface is strongly acidic; we therefore expect the globular RRM domains that decorate the fibril to be responsible for this activity. Discussion hnRNPDL-2 is the major hnRNPDL isoform in human tissues 6 and, as we show here, the only one forming ordered amyloid fibrils under physiologic-like conditions. This property can unequivocally be attributed to exon 6, which also accounts for the granular and heterogeneous protein distribution in the cell nucleus. Importantly, hnRNPDL-1 does not form fibrils, although it phase-separates under the same conditions 3. Thus, hnRNPDL constitutes a notable exception to the general rule considering liquid-liquid phase separation (LLPS) and amyloid formation as two interconnected phenomena 27, 28. This might be a way to evolve two structurally different assemblies, condensates and fibrils, each associated with an hnRNPDL isoform, avoiding potentially pathogenic transitions between these conformational states. The in vitro cryo-EM hnRNPDL-2 fibrillar structure we report here exemplifies the amyloid assembly of a full-length RNP, whereas previous fibrils were solved starting from the LCD alone 16, 29, 30 or the LCD fused to a fluorescent protein 17, 31. Thus, we think it should better reflect the in vivo fibrillar packing of this protein family. Indeed, the hnRNPDL-2 fibril structure exhibits significant differences relative to those of the hnRNPA1 and hnRNPA2 LCDs, even though the respective full-length soluble forms share the same overall molecular architecture 32, 33. First, the hnRNPDL-2 amyloid core matches a vertebrate-conserved exon, whereas this is not the case for hnRNPA1 and hnRNPA2 (Supplementary Fig. 20). Second, the hnRNPDL-2 fibril is significantly more hydrophilic and acidic on its surface, a property directly impacting its interaction with other (macro)molecules. Third, hnRNPDL-2 fibrils are irreversible and cannot be disassembled at 90 °C, whereas those of the hnRNPA1 and hnRNPA2 LCDs are reversible 16, 17. Indeed, ΔG and ΔG/residue values for hnRNPDL-2 fibrils are more negative than for the hnRNPA1 and hnRNPA2 structures (Supplementary Table 4). In addition, our amyloid includes the complete protein, and contacts between the RRMs and the core may also contribute to stability. Indeed, reversibility of hnRNPA1 and hnRNPA2 fibrils is expected since, for them, LLPS and fibril formation are interrelated, and they can potentially transition between the two states. Such a connection does not apply in the case of the hnRNPDL isoforms and, because hnRNPDL-2 does not phase-separate, no back-transition to liquid droplets is possible. The equilibrium should be established between the amyloid and monomeric states, which fits well with our sedimentation assays. A final and important difference between the hnRNPA1, hnRNPA2, and hnRNPDL-2 fibril structures is that, in the first two proteins, the PY-NLS residues that bind to importin Kapβ2 establish stabilizing interactions and are buried inside the fibril core 16, 17; this does not occur in hnRNPDL-2, where the PY-NLS is adjacent but external to the amyloid core (not mapped by density).
The accessibility of the PY-NLS in both the monomeric and fibrillar states of hnRNPDL-2 is consistent with the observation that, in humans, Kapβ2 co-localizes with both condensed and diffuse hnRNPDL nuclear regions 8. This does not exclude that Kapβ2 might still modulate hnRNPDL-2 fibril formation by sterically interfering with the building of the amyloid core; however, this action would be mechanistically different from those exerted on hnRNPA1 and hnRNPA2, where Kapβ2 also acts as a disaggregase 34. hnRNPDL-2 fibrils share similarities with the 3D structures of functional amyloids. Like most of them 35, 36, 37, hnRNPDL-2 fibrils do not exhibit polymorphism. This suggests that they may represent a global free-energy minimum, allowing us to speculate that the same structure would be adopted in the cell and that it represents the functional conformation of the assembled state. In contrast, pathogenic fibrils are mostly polymorphic 23. The hnRNPDL-2 fibril core is hydrophilic, and the structure is stabilized by hydrogen-bonding networks, between residues and with water molecules, together with Tyr side-chain π-π stacking and dipole-dipole interactions. The eminently polar nature of the interactions that hold the hnRNPDL-2 fibril together, and the abundance of flexible Gly residues, would allow the protein chains to explore the conformational space efficiently towards the final structure, without being trapped in polymorphic local minima by stable hydrophobic interactions, as often occurs for disease-associated fibrils 23. In addition, the hydrophilic nature of the hnRNPDL-2 fibril surfaces may lie behind their observed lack of toxicity, precluding interactions with hydrophobic cellular membranes and their subsequent disruption. hnRNPDL-2 fibrils consist of a single filament, whereas 74% of the available amyloid structures have two or more protofilaments 38. This is not surprising, since most fibril interfaces involve hydrophobic interactions between individual protofilaments 38 and, as noted, non-polar amino acids are virtually absent from the hnRNPDL-2 fibril core. In addition, the surrounding RRM coat would impair any filament-to-filament lateral association. The structure of hnRNPDL-2 fibrils is reminiscent of that of the HET-s prion, where globular domains also hang from a single-filament fibril in a solenoidal fashion 35. A single filament allows for the decoration of amyloid cores with regularly spaced globular domains following the helical twist of the fibril, something hardly compatible with a multi-protofilament assembly. Indeed, the fibrils of hnRNPA2 17 and FUS 31 LCDs fused to fluorescent proteins also exhibit a single protofilament, consistent with this being the preferred disposition when globular domains are adjacent to LCDs in the sequence. In contrast, the fibrils of the hnRNPA1 LCD alone involve two filaments 16. The most substantial evidence of the functionality of hnRNPDL-2 fibrils is their ability to bind small oligonucleotides, especially ssDNA, with an affinity equivalent to that of the soluble counterpart. This indicates that the RRM domains wrapping around the structured fibrils are folded and functional.
The functionality of hnRNPDL-2 fibrils provides the basis for understanding the connection between D259 mutations and LGMD D3, and why myopathologic studies in LGMD D3 consistently report the absence of sarcoplasmic protein aggregates 8 (although congophilic deposits were detected in some instances 39), in contrast to MSP patients bearing similar mutations in the hnRNPA1 and hnRNPA2 LCDs, whose muscle biopsies show cytoplasmic mislocalization and protein aggregation 15. For hnRNPA1 and hnRNPA2, theoretical calculations indicated that the Asp substitutions would stabilize the fibrils 16, 17 and facilitate LARKS-to-steric-zipper transitions 24. This would thermodynamically shift any potential droplet/fibril equilibrium towards the fibrillar state, reducing reversibility. For hnRNPDL-2, this equilibrium does not apply, and the WT fibrils are already irreversible. The same calculations indicate that the D259H/N mutations do not significantly stabilize the fibril core and that no LARKS-to-steric-zipper transition occurs. Indeed, the mutant proteins exhibit a low propensity to fibrillate, and their fibrils are shorter and less organized. This is consistent with the lack of aggregates in most patients' muscular tissues 8, suggesting that we might be facing a loss-of-function disease. It could be that hnRNPDL-2 fibrils cannot be efficiently formed in the affected muscle and the soluble protein cannot compensate for their activity, or that Kapβ2 cannot properly transport the mutant proteins 7. Alternatively, inefficient fibrillation pathways of the mutants might involve intermediates that are either degraded by the protein quality control machinery, decreasing the pool of active protein, or instead accumulate in myocytes, exerting toxicity. Indeed, knockdown of zebrafish hnRNPDL (85% identity with human hnRNPDL-2) using antisense oligonucleotides resulted in dose-dependent disorganization of myofibers, causing body shape defects and restricted and uncoordinated movements, consistent with a loss-of-function myopathy 8. Our results are in keeping with the recent evidence that, even for hnRNPA1, disease manifestation is not always associated with increased fibrillation, and variants with LCD mutations displaying a low ability to form fibrils are also pathogenic 40. Interestingly, these hnRNPA1 variants cause vacuolar rimmed myopathy, histologically similar to LGMD D3. This suggests that RNP-associated diseases might not respond to a unique molecular mechanism; rather, different sequence/structural perturbations might elicit cellular dysfunction and degeneration, potentially by affecting the function of LCD-containing RNPs in muscle regeneration, where they execute pre-mRNA splicing, stabilize large muscle-specific transcripts and aid in their transport 41. The most intriguing and unique feature of the hnRNPDL-2 fibril is the perfect overlap between the amyloid core and exon 6. AS patterns have diverged rapidly during evolution, with exons that were ancestrally constitutive in vertebrates evolving to become alternatively spliced in mammals, expanding the regulatory complexity of this lineage 42. These evolutionary changes impact all members of the hnRNPD family, to which hnRNPDL belongs. Accordingly, the exon 6 sequence is conserved among vertebrates (Fig. 3b), additional evidence of its functionality, and alternatively spliced in mammals 7, 19.
Mammalian-specific AS is especially frequent at the Y/G-rich LCDs of RNPs 19, suggesting that regulation of the number of GY motifs in these regions confers a fitness benefit 20. Furthermore, elimination of these repeats through exon skipping results in dominant-negative RNPs that bind nucleic acids but cannot form multimeric complexes through Tyr-dependent interactions, which significantly modifies their gene regulation activity and nuclear patterning 20. This differential behavior is often attributed to the longer isoforms' ability to undergo LLPS 20. However, in proteins of the Rbfox family, splicing activity is contingent on the formation of Th-T-positive fibrous structures, mediated by the LCD in a Tyr-dependent manner 43, although it is unknown whether Rbfox fibers correspond to cross-β amyloids. Here we provide high-resolution structural evidence supporting the role of mammalian-specific AS in controlling the assembly and nuclear distribution of RNPs, which in the particular case of hnRNPDL occurs by precisely including/skipping a conserved amyloid-prone exon. Overall, this work presents the detailed cryo-EM structure of a full-length RNP in its fibrillar functional conformation. The structure of hnRNPDL-2 exon 6 in its amyloid form provides critical insights into the molecular bases of LGMD D3 and the mechanism of AS-controlled RNP assembly in mammals. Methods Cell culture, plasmids and cell lines The human HeLa (ATCC CCL-2) and SH-SY5Y (ATCC CRL-2266) cell lines were maintained in Dulbecco's Modified Eagle Medium (DMEM) or minimum essential medium α (MEM-α), respectively. Media were supplemented with 10% (v/v) Fetal Bovine Serum (FBS). Both cell lines were grown under a highly humidified atmosphere of 95% air with 5% CO₂ at 37 °C. The HeLa hnRNPDL KO cell line was generated as described elsewhere 3. HeLa cells have been authenticated within the last 3 years by STR analysis. The HeLa and SH-SY5Y cell lines were regularly tested for the presence of mycoplasma using a qPCR Detection Kit (SIGMA), and both cell lines are mycoplasma-negative. For recombinant protein expression, the genes encoding the three isoforms of hnRNPDL (hnRNPDL-1, hnRNPDL-2 and hnRNPDL-3) were inserted into the pETite vector (Lucigen Corporation) with an N-terminal His-SUMO tag. For subcellular localization experiments, the same genes were cloned into pEGFP-C3 (Clontech). In all cases, the DNA sequences were verified by sequencing. Cell transfection and immunoblotting Adherent cells were transfected with linear polyethylenimine (PEI; Polysciences) at a 1:3 DNA:PEI ratio. Cells were collected after 48 h, lysed in M-PER mammalian protein extraction reagent (Thermo Fisher Scientific) with 1/1000 of the EDTA-free protease inhibitor cocktail Set III (Calbiochem), and centrifuged for 30 min at 15,000 × g at 4 °C. The soluble and insoluble fractions were analyzed by SDS–PAGE and immunoblotting onto PVDF membranes (EMD Millipore) using standard protocols 44. hnRNPDL proteins were detected using different primary antibodies (anti-hnRNPDL antibody, HPA056820 from Sigma-Aldrich, dilution 1:500; anti-vinculin monoclonal antibody VLN01, MA5-11690 from Invitrogen, dilution 1:5000).
The primary antibodies were detected with the appropriate HRP-labelled secondary antibody (goat anti-Rabbit IgG (H + L), 31460 from Invitrogen, dilution 1:2000, or goat anti-Mouse IgG (H + L), 31430 from Invitrogen, dilution 1:2000), followed by chemiluminescence detection using Immobilon Forte Western HRP substrate (Sigma-Aldrich). Protein expression and purification Protein expression was performed in E. coli BL21(DE3) cells, induced with 0.5 mM IPTG at an OD600 of 0.5. After incubation for 3 h at 37 °C and 250 rpm, the cells were harvested by centrifugation for 15 min at 4000 × g. Cell pellets were resuspended in Binding Buffer (50 mM HEPES, 1 M NaCl, 5% glycerol and 20 mM imidazole, pH 7.5), lysed by sonication, and centrifuged at 30,000 × g for 30 min at 4 °C. The supernatant was filtered through a 0.45 µm filter and loaded onto a HisTrap FF Ni column equilibrated with Binding Buffer. The bound protein was eluted with an imidazole gradient from 0 to 100% Elution Buffer (50 mM HEPES, 1 M NaCl, 5% glycerol, 500 mM imidazole, pH 7.5). Afterwards, fractions containing purified proteins were pooled and loaded onto a HiLoad 26/600 Superdex 75 pg column equilibrated with 50 mM HEPES pH 7.5 buffer containing 1 M NaCl and 5% glycerol. Finally, the proteins were concentrated using a 10 K Amicon (Merck-Millipore), flash-frozen in liquid nitrogen and stored at −80 °C until use. Fibril formation and in vitro aggregation kinetics For aggregation experiments, the purified proteins corresponding to the different hnRNPDL isoforms and mutants were loaded onto a PD-10 Sephadex G-25 M desalting column for buffer exchange. Samples were diluted to a final protein concentration of 50 µM in 50 mM HEPES, 300 mM NaCl, pH 7.5. The aggregation reactions were then incubated at 37 °C with 600 rpm agitation in sealed Eppendorf tubes. Th-T binding to hnRNPDL aggregates was measured by recording Th-T fluorescence between 460 and 600 nm after excitation at 445 nm using a Jasco FP-8200 spectrofluorimeter. The final Th-T and protein concentrations were 25 μM and 10 μM, respectively. All samples were diluted in 50 mM HEPES buffer at pH 7.5 with 300 mM NaCl, and this same buffer alone was used as a control. The light scattering of the reactions was measured between 325 and 340 nm after excitation at 340 nm using a Jasco FP-8200 spectrofluorimeter. The aggregation kinetics of the hnRNPDL isoforms were monitored in 96-well plates by following the increase in Th-T signal of 25 μM protein samples. Plates were incubated at 37 °C under constant shaking (100 rpm) using a Spark fluorescence microplate reader (TECAN). The Th-T fluorescence of each well was measured every 30 min by exciting through a 445 nm filter and collecting the emission with a 480–510 nm filter. The fluorescence emission of all proteins in the absence of Th-T and the signal of the buffers alone with Th-T were determined as control conditions. To test the effect of NaCl on the aggregation kinetics of hnRNPDL, aggregation reactions with different NaCl concentrations (50, 300 and 600 mM) were also prepared. Ultracentrifugation For ultracentrifugation experiments, the samples were centrifuged at 120,000 × g for 2 h at 4 °C on a 20% (v/v) glycerol cushion. After centrifugation, six fractions (70 μl each) were collected from top to bottom, without disturbing the layers, and analyzed by SDS–PAGE.
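Sigmoidal Th-T traces like those described above are commonly summarized by a half-time, an apparent growth rate and a lag time. A minimal fitting sketch with SciPy on simulated readings; the logistic form below is a generic choice, as the paper does not specify its fitting equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_kinetics(t, f0, fmax, k, t_half):
    """Generic sigmoidal growth of Th-T fluorescence:
    F(t) = F0 + (Fmax - F0) / (1 + exp(-k * (t - t_half)))."""
    return f0 + (fmax - f0) / (1.0 + np.exp(-k * (t - t_half)))

# Simulated plate-reader trace, one reading every 30 min (not measured data)
t = np.arange(0, 24.5, 0.5)  # hours
rng = np.random.default_rng(0)
f = sigmoid_kinetics(t, 50, 800, 0.8, 10) + rng.normal(0, 15, t.size)

(f0, fmax, k, t_half), _ = curve_fit(sigmoid_kinetics, t, f,
                                     p0=[f.min(), f.max(), 0.5, 8.0])
lag = t_half - 2.0 / k  # common lag-time estimate from the tangent at t_half
print(f"t1/2 = {t_half:.1f} h, k = {k:.2f} 1/h, lag ≈ {lag:.1f} h")
```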
The total protein concentration of each fraction was determined by the Bradford method using a commercial Coomassie protein assay reagent (Thermo Fisher Scientific). Immunolabelling and negative staining electron microscopy Aliquots of hnRNPDL-2 amyloid fibrils were mixed with an anti-6xHis primary antibody (MA1-21315 from Thermo Fisher Scientific; 1:10 dilution) in 50 mM HEPES, 150 mM NaCl, pH 7.5 buffer, and the sample was then incubated for 18 h at 4 °C. After incubation, an anti-mouse 10 nm colloidal-gold-linked secondary antibody (A-31561 from Thermo Fisher Scientific) was added to the samples (dilution 1:100) and incubated for 1 h at room temperature. For negative-staining transmission electron microscopy (NS-TEM), 5 µl of each sample was incubated on EMR 400-mesh carbon-coated copper grids (Micro to Nano Innovative Microscopy Supplies) for 5 min. After incubation, the grids were washed with 10 µl MQ water and stained for 1 min with 5 µl of 2% (w/v) uranyl acetate. The excess solution from each step was removed with filter paper. Each grid was allowed to dry before inspection using a JEOL JEM-1400 electron microscope operating at 120 kV with a CCD GATAN 794 MSC 600HP camera and Digital Micrograph 1.8 (GATAN) software. 3–5 micrographs were recorded for each sample at two different nominal magnifications (i.e. ×6000 and ×10000) and an estimated defocus of about ±1–4 μm. Cryo-EM sample preparation and data collection For cryo-EM, sample vitrification was carried out using a Mark IV Vitrobot (Thermo Fisher Scientific). 3 μl of hnRNPDL-2 amyloid fibrils diluted in MQ water to a final concentration of 0.25 mg/mL was applied to a C-Flat 1.2/1.3-3Cu-T50 grid (Protochips) previously glow-discharged at 30 mA for 30 s in a GloQube (Quorum Technologies). The sample was incubated on the grid for 60 s at 4 °C and 100% humidity, blotted and plunge-frozen into liquid ethane. Vitrified samples were transferred to a Talos Arctica transmission electron microscope (Thermo Fisher Scientific) operated at 200 kV and equipped with a Falcon 3 direct electron detector (Thermo Fisher Scientific) and EPU 2.8 (Thermo Fisher Scientific) software. A total of 1114 movies were collected using EPU 2.8 in electron counting mode with an applied dose of 40 e⁻/Å² divided into 40 frames at a magnification of 120 kx. All micrographs were acquired with a pixel size of 0.889 Å/pixel and a defocus range of −1.0 to −2.2 μm. Helical reconstruction The best movies (without artifacts, crystalline ice, severe astigmatism or obvious drift) were imported into RELION 3.1 for further processing following a helical reconstruction pipeline 22. All movies were motion-corrected and dose-weighted using MOTIONCOR2 45. Contrast transfer function (CTF) estimation was performed on aligned, unweighted sum power spectra every 4 e⁻/Å² using CTFFIND4 46. Micrographs with a resolution estimate of 5 Å or better were selected for further analysis. Fibrils were manually picked, and segments were extracted using a box size of 380 pixels and an inter-box distance of 33.2 Å, yielding a total of 158,493 segments. Reference-free 2D classification was performed to identify homogeneous segments for further processing. An initial model was generated using a single large class average, as described elsewhere 47. In brief, the cross-over distance was measured from the class average image to be ~179 Å.
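For an amyloid fibril, one cross-over corresponds to a 180° rotation, so the measured ~179 Å cross-over and an assumed ~4.8 Å β-strand rise fix the initial twist for helical reconstruction. A small sketch (our own helper, not a RELION command):

```python
def initial_twist_from_crossover(crossover_A, rise_A, left_handed=True):
    """Per-subunit twist (degrees) from the cross-over distance: the subunits
    within one cross-over rotate the filament by 180 degrees in total."""
    subunits_per_crossover = crossover_A / rise_A
    twist = 180.0 / subunits_per_crossover
    return -twist if left_handed else twist

# Measured cross-over ~179 Å; rise assumed ~4.82 Å (cross-beta strand spacing)
print(round(initial_twist_from_crossover(179.0, 4.82), 2))  # ~ -4.85 degrees
```

The refined value of −4.86° confirms that this starting estimate was close.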
Model building and refinement
The hnRNPDL-2 amyloid fibril model was built manually into the 2.5 Å sharpened map of the hnRNPDL-2 fibril using COOT 50. In brief, an initial polyalanine model was placed into the map, and Ala residues were mutated to Tyr where the electron density was sufficiently clear. Based on the locations of the Tyr residues and their spacing pattern, residues 226 to 276 of hnRNPDL-2 were assigned and modeled. Real-space refinement was accomplished with the phenix.real_space_refine module from Phenix 51. Molecular graphics and structural analyses were performed with Pymol 52 and ChimeraX 53. Statistics for the final model are provided in Supplementary Table 1, and the corresponding atomic coordinates have been deposited in the Protein Data Bank (PDB accession number: 7ZIR).

Fibril stability and solvation free energy calculations
Free energies of mutation were calculated using the ddg_monomer module from Rosetta 54. Delta delta G (ΔΔG) values for the WT hnRNPDL-2 and mutant (D259H, D259N and D259V) fibril structures were determined following a standard high-resolution protocol, as described elsewhere 16. In brief, we first performed energy minimization on the hnRNPDL-2 fibril structure containing five successive rungs; the resulting restraint file was used in the subsequent steps. Standard Van der Waals and solvation energy parameters (i.e., a cut-off of 9 Å) were applied. For comparative purposes, the ΔΔG values for WT and relevant hnRNPA1 or hnRNPA2 mutant fibrils were also determined using the same Rosetta protocol. The percentage of buried polar residues and the exposed surfaces of the hnRNPDL-2, hnRNPA2, hnRNPA1, hSAA1, Aβ-42 and α-synuclein fibrillar structures were calculated using PDBePISA 55. Exposed surfaces were calculated as the difference between the total and the buried surface. The percentage of exposed polar surface was calculated using the total and apolar surface values predicted by GetArea 56. The solvation free energy of folding (ΔG, in kcal/mol) for all structures was calculated using PDBePISA 55.
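The surface-area bookkeeping just described amounts to two simple differences. A minimal sketch, with hypothetical placeholder areas (not values from the paper):

```python
def exposed_surface(total_area: float, buried_area: float) -> float:
    """Exposed surface as the difference between total and buried surface
    (the PDBePISA convention described above); areas in A^2."""
    return total_area - buried_area

def percent_exposed_polar(exposed_total: float, exposed_apolar: float) -> float:
    """Percentage of the exposed surface that is polar, from total and apolar
    exposed areas (GetArea-style output); polar = total - apolar."""
    return 100.0 * (exposed_total - exposed_apolar) / exposed_total

# Hypothetical placeholder areas in A^2, for illustration only:
exposed = exposed_surface(total_area=12000.0, buried_area=4500.0)
print(exposed)                                                            # 7500.0
print(percent_exposed_polar(exposed_total=7500.0, exposed_apolar=4800.0))  # 36.0
```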
In vivo cell imaging
For in vivo cell imaging experiments, adherent HeLa cells were grown on 35 mm glass-bottom culture dishes at a density of 1 × 10⁵ cells per dish. After 24 h of incubation, the cells were transiently transfected with GFP-tagged versions of the hnRNPDL-1/-2/-3 isoforms using linear polyethylenimine (PEI; Polysciences, Eppelheim, Germany) at a 1:3 DNA:PEI ratio. After a further 24 h of incubation, the cells were washed with fresh medium (2 × 1 mL), and cell nuclei were stained with Hoechst (Invitrogen) for 10 min at 37 °C and 5% CO₂. In vivo cell imaging was performed at 37 °C using a Leica TCS SP5 confocal microscope and a 63×/1.4 numerical aperture Plan Apochromat oil-immersion objective. All confocal images were processed using Bitplane Imaris 7.2.1 software.

Gel electrophoretic mobility shift assays (EMSA)
The ability of soluble hnRNPDL-2 to bind RNA/ssDNA was studied by gel electrophoretic mobility shift assays (EMSA). We prepared 10 µl reactions containing 50 nM fluorescein-labeled RNA/ssDNA (F-GACUAGC) and increasing amounts of soluble hnRNPDL-2. The samples were incubated for 24 h at room temperature before the complexes were resolved on an 8% polyacrylamide gel at constant voltage. After electrophoresis, the gel was imaged using a ChemiDoc™ MP Imaging System (Bio-Rad) and Image Lab Touch software (Bio-Rad Laboratories). Each experiment was performed in triplicate. The apparent dissociation constants (Kd) for specific binding of RNA or ssDNA to soluble hnRNPDL-2 were determined by fitting the data to a one-site specific binding model with Hill slope using GraphPad Prism 57.

Fluorescent RNA/DNA-binding assays
The ability of hnRNPDL-2 amyloid fibrils to bind ssDNA was studied by measuring the fluorescence of fibrils bound to fluorescein-labelled ssDNA using a Jasco FP-8200 spectrofluorometer (Jasco Corporation) and a fluorescence microscope (Leica Microsystems). For the fluorometric assays, 20 µl reactions containing 10 µM hnRNPDL-2 amyloid fibrils and increasing amounts of fluorescein-labeled ssDNA (F-GACUAGC; 0.05, 0.1, 0.5, 1, 2.5, 5, 10, 25, 50 and 100 µM) were prepared. After incubation for 24 h at room temperature, the samples were centrifuged at 16,000 × g for 40 min at 4 °C. The pellet containing the hnRNPDL-2 amyloid fibrils bound to the ssDNA was washed with 50 µl of 50 mM HEPES buffer (pH 7.5) with 150 mM NaCl, centrifuged for 30 min and sonicated. Fluorescence emission of the ssDNA bound to fibrils was measured at excitation and emission wavelengths of 480 and 520 nm, respectively. Samples containing only fluorescent ssDNA served as controls. Each experiment was performed at least in triplicate. The apparent dissociation constant (Kd) for specific binding of ssDNA to hnRNPDL-2 amyloid fibrils was determined by fitting the data to a one-site specific binding model with Hill slope using GraphPad Prism 57. For confocal microscopy analysis, 20 µl reactions with 20 µM hnRNPDL-2 amyloid fibrils were incubated for 24 h at room temperature in the presence of 30 µM fluorescein-labelled ssDNA (F-GACUAGC). After centrifugation at 16,000 × g for 40 min, the precipitated fraction was placed on a microscope slide and sealed. Confocal fluorescence images were obtained with a Leica SP5 confocal microscope (Leica Microsystems). hnRNPDL-2 amyloid fibrils and labeled ssDNA alone were imaged as control samples.
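The "one-site specific binding with Hill slope" model used above corresponds to B = Bmax · x^h / (Kd^h + x^h). For readers without GraphPad Prism, an equivalent fit can be sketched in Python as follows; the titration data here are hypothetical placeholders, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def specific_binding_hill(x, bmax, kd, h):
    """One-site specific binding with Hill slope: B = Bmax * x**h / (Kd**h + x**h)."""
    return bmax * x**h / (kd**h + x**h)

# Hypothetical titration: ligand concentration (uM) vs. background-corrected signal (a.u.).
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0, 100.0])
signal = np.array([0.8, 1.5, 6.2, 10.9, 19.8, 27.5, 34.0, 38.9, 40.2, 41.0])

(bmax, kd, h), pcov = curve_fit(
    specific_binding_hill, conc, signal, p0=[signal.max(), 5.0, 1.0], maxfev=10000
)
kd_err = np.sqrt(np.diag(pcov))[1]  # 1-sigma uncertainty on the fitted Kd
print(f"Bmax = {bmax:.1f} a.u., apparent Kd = {kd:.2f} +/- {kd_err:.2f} uM, Hill slope = {h:.2f}")
```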
Cytotoxicity assays
The cytotoxicity of the hnRNPDL-2 amyloid fibers toward SH-SY5Y and HeLa cells was evaluated using a resazurin-based assay 58. Briefly, the cells were seeded in 96-well plates at 3 × 10³ cells per well and incubated for 24 h. The cells were then treated with hnRNPDL-2 fiber concentrations ranging from 0.004 to 0.4 mg/mL (w/v). After 48 h of incubation, 10 μL aliquots of PrestoBlue™ (Thermo Fisher Scientific) cell viability reagent were added to each well. After 1 h of incubation at 37 °C, protected from light and in a highly humidified atmosphere of 95% air with 5% CO₂, the fluorescence emission (ex/em = 531/572 nm) of each well was measured using a fluorescence microplate reader (PerkinElmer Victor 3 V) and PerkinElmer 2030 software. Cytotoxicity was determined in terms of cell growth inhibition in treated samples and expressed as a percentage of the control condition.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability
The hnRNPDL-2 amyloid fibril structure and cryo-EM map have been deposited in the Protein Data Bank and Electron Microscopy Data Bank under the accession codes 7ZIR and EMDB-14738, respectively. Raw images and raw processing data have been deposited in the Electron Microscopy Public Image Archive with accession code EMPIAR-11064. Other structures referenced in this work are available under the PDB accession codes 6MST, 5KK3, 6OSJ, 7Q4M and 7NCK. Other data supporting the findings of this study are available within the article and its associated supplementary information files. Source data are provided with this paper.
Researchers at the UAB have determined the structure of amyloid fibers formed by the protein hnRNPDL-2, implicated in limb-girdle muscular dystrophy type 3, using high-resolution cryo-electron microscopy (cryo-EM). They conclude that the inability of the protein to form amyloid fibers, and not its aggregation, would be the cause of the disease. This is the first amyloid structure determined at high resolution by a Spanish research team. The study, published in Nature Communications, informs the search for molecules that stabilize or facilitate amyloid formation, and opens the door to the study of other functional amyloids and their mutations using the same technique, in order to better understand their implications in health and disease.

Limb-girdle muscular dystrophy type 3 (LGMD D3) is a rare disease characterized by progressive muscle weakness and caused by point mutations in the hnRNPDL-2 protein. A member of the RNA-associated ribonucleoprotein (RNP) family, hnRNPDL-2 is a little-known protein with the ability to assemble into functional amyloid structures. Amyloids are formed by the assembly of thousands of copies of the same protein into very stable, ordered fibers (protein aggregates). Their formation is often associated with diseases such as Parkinson's and Alzheimer's, but amyloids are also used by different organisms for functional purposes, although the number of functional amyloids described in humans is still small.

Researchers at the Universitat Autònoma de Barcelona (UAB) have determined that the architecture and activity of the amyloid fibers indicate that they are stable, non-toxic fibers that bind nucleic acids in their aggregated state. The results suggest that LGMD D3 could be a protein loss-of-function disease: the inability to form the amyloid structures described in the study would cause the pathology. "Our study challenges the hypothesis that the aggregation of this protein is the cause of the disease and proposes that it is the inability to form a fibrillar structure that has been selected by evolution to bind nucleic acids that causes the pathology," says Salvador Ventura, professor of Biochemistry and Molecular Biology and researcher at the Institute of Biotechnology and Biomedicine (IBB-UAB), who led the research together with the first author of the paper, Javier Garcia-Pardo, a Juan de la Cierva-Incorporación researcher at IBB-UAB.

The researchers determined the structure of the amyloid fibers of the hnRNPDL-2 protein using high-resolution cryo-electron microscopy (cryo-EM). This is the first structure of a human functional amyloid formed by the complete protein to be solved using this technique; previously, only structures formed by fragments of these proteins had been solved. The structure differs from that of other, pathological amyloid proteins in that it has a highly hydrophilic core, which includes the amino acid associated with LGMD D3. In this case, unlike in other diseases, amyloid formation is not toxic, but necessary for the function of the protein. The results change the view of the origin of the disease and of how it should be treated, say the researchers. "Previously, we thought that, as in many neurodegenerative diseases, LGMD D3 originated because mutations in patients caused the initially soluble protein to form aggregates and, therefore, the search for anti-aggregation molecules could be a potential therapy.
Now we know that this would be a mistake, since it is the incorrect formation of the fiber that seems to trigger the disease; therefore, molecules that stabilize this structure or facilitate its formation would be the most appropriate," says Salvador Ventura.

Understanding the molecular structures of amyloids
Certain human amyloids can undergo both functional and pathological aggregation, and it is therefore necessary to understand their molecular structures in order to determine their distinctive qualities and functions. For example, RNPs similar to the one studied in this research, such as hnRNPA1 or FUS, are able to form functional amyloid fibers in response to cellular stress, but they can also harbor mutations responsible for disease. These proteins are characterized by a modular architecture, including one or more nucleic acid binding domains together with disordered regions that are responsible for their assembly into functional or pathological amyloid structures. "In recent years, the structures of various amyloid fibers formed by fragments of RNPs have been solved. However, these assemblies may not necessarily coincide with those adopted in the context of the complete proteins, as is the case for the hnRNPDL-2 structure solved in our group," explains Salvador Ventura. "In fact, our structure differs significantly from previous ones and questions some of the assumptions that were considered valid regarding the regulation of these proteins in cells," he points out.

Special techniques for resolving functional amyloids
To resolve the structure of hnRNPDL-2 in its assembled state, the research team used cryo-EM, applying techniques specialized for amyloid structures. In the past two years, a significant number of amyloid fiber structures have been solved with this technique, but these mainly correspond to pathological amyloids involved in systemic and neurodegenerative diseases. "Our discovery highlights the power of cryo-EM to study the function of RNPs and the reasons for their link to disease. These proteins have been little studied until now, but they are associated with diseases such as Alzheimer's, muscular dystrophies, cancer, and neurodevelopmental and neuropsychiatric disorders. Thus, our objective now is to take advantage of the experience acquired with this technique to determine the fibrillar states of other functional amyloids and study the effect of mutations, in order to better understand their implications in health and disease," says Salvador Ventura. The development of this new technology at the UAB will allow researchers to exploit the recently installed cryo-EM platform at the ALBA synchrotron, of which the university is a partner. Solving this type of structure requires a great deal of computational power; the IBB's Protein Folding and Conformational Diseases research group, led by Salvador Ventura, has just acquired a high-powered computer to carry out these calculations.
10.1038/s41467-023-35854-0
Biology
A novel gene in mammals that controls a new structure found in nerve cells
Tamas Rasko et al, A novel gene controls a new structure: PiggyBac Transposable Element-derived 1, unique to mammals, controls mammal-specific neuronal paraspeckles, Molecular Biology and Evolution (2022). DOI: 10.1093/molbev/msac175 Journal information: Molecular Biology and Evolution
https://dx.doi.org/10.1093/molbev/msac175
https://phys.org/news/2022-09-gene-mammals-nerve-cells.html
Abstract. Although new genes can arrive from modes other than duplication, few examples are well characterized. Given high expression in some human brain subreg…

A Novel Gene Controls a New Structure: PiggyBac Transposable Element-Derived 1, Unique to Mammals, Controls Mammal-Specific Neuronal Paraspeckles

Figure legend: The domesticated PGBD1 possesses SCAN-, KRAB- and transposase-derived domains, but has no catalytic activity as a transposase. (A) Phylogenetic tree of PGBD1 and PGBD2. The presence of the transposase-derived, SCAN and KRAB domains is shown. The human PGBD1 and PGBD2, together with the most closely related sequences (containing transposase IS4), were aligned with MUSCLE and a tree was built using MrBayes. Protein domains were annotated with hmmerscan and CDD (NCBI); the KRAB domain was annotated with Phyre2. (B) PGBD1 domain structure in comparison to PiggyBac of the cabbage looper moth, human PGBD2, rat PGBD1 and mouse PGBD1. The transposase-derived domain (IS4) includes dimerization and DNA binding domains (DDBD) as well as the catalytic domains of PiggyBac (Chen et al. 2020). NTD, N-terminal domain; CRD, C-terminal cysteine-rich domain; E1–7 are exons 1–7. The "D"s in the transposase-derived domains represent the catalytic triad DDD (D268, D346, D447); D447 is replaced by alanine (A) in PGBD1. PGBD2 and PGBD1 are highly similar (average pairwise similarity score of ~63%; the aligned region, which spans 1,324 bp, exceeds the borders of the annotated transposase IS4 domain; calculated from a distance matrix in Ugene). Note that the Zn-finger-containing CRD domain, required for ITR binding in the piggyBac transposase, is missing in PGBD1 (Morellet et al. 2018). The PGBD1 sequences in rodent animal models are truncated, resulting in degenerated copies. The Ka/Ks values for the entire PGBD1 as well as for various subdomains are shown. Note the ~1 value for the KRAB domain [overall = 0.35, N-terminal (aa 1–290) = 0.56, C-terminal (aa 291–809) = 0.21, SCAN (aa 40–142) = 0.32, KRAB (aa 211–267) = 1.02, DDBD1 (aa 405–541) = 0.19, DDBD2 (aa 750–804) = 0.26, catalytic domain 1 (aa 541–651) = 0.14, catalytic domain 2 (aa 726–750) = 0.07; the reference is the human amino acid sequence of PGBD1]. (C) Protein sequence alignment of the transposase-derived DDD catalytic domain of PGBD1. The first row of the alignment shows the corresponding sequence of the piggyBac transposase identified in Trichoplusia ni (cabbage looper moth). The alignment includes koala and gray seal, from which the KRAB domain was reported (supplementary fig. S4F, Supplementary Material online), and various mammalian species. The conserved amino acids D268/D346/D447 of the DDD catalytic domain and D450 of the piggyBac transposase are arrowed (Sarkar et al. 2003); the numbers refer to their positions using the piggyBac amino acid sequence as reference. (D) Transposon excision repair assay detects no activity of PGBD1. Schematic representation of the reporter assay of PiggyBac excision: the PiggyBac transposon (flanked by inverted terminal repeats, ITRs) splits the coding sequence of the GFP reporter; in the presence of an active transposase, transposon excision occurs, and the readout is the restored GFP reporter signal. (E) Quantitative FACS (fluorescence-activated cell sorting) analysis of GFP-positive cells generated in the transposon excision repair assay.
(Left panel) Western blot analysis of the HA-tagged PGBD1 (HA-PGBD1) protein tested in the excision repair assay. HeLa cells were cotransfected with plasmids harboring HA-tagged PGBD1 along with the reporter construct; nontransfected HeLa cells and cells transfected with mPB (mammalian codon-optimized piggyBac transposase) along with the reporter served as controls. (Right panel) The transposition assay detects no activity of the PGBD1 protein. (F) Schematic representation of the colony-forming transposition assay to detect stable integration of the puromycin resistance gene-marked reporter in HEK293 cells. In the case of active transposition, the transposase cuts at the terminal inverted repeats (IRs) and inserts the reporter-marked transposon into the genome, providing antibiotic resistance to the transfected cells. In addition to the piggyBac ITRs, reporters were also built with the ITRs of the piggyBac-derived miniature inverted-repeat transposable elements (MITEs) MER75B and MER85 (see also fig. 2B). (G) Puromycin-resistant HEK293 colonies are shown as the readout of the assay. The constructs were transfected in various combinations; HEK293 cells transfected with the mPB transposase and a nonrelevant luciferase expression construct (Luc) along with the reporter served as controls. (H) Quantification of the transposition assay. Colonies were quantified in a 75S model gel imager using the Quantity One 4.4.0 software (Bio-Rad). Error bars indicate s.d. Note that the mPB transposase (positive control) was able to mobilize the PB transposon, but not the PB-IR-related reporters.
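The Ka/Ks values listed in the legend can be read with the standard rule of thumb that ratios well below 1 indicate purifying selection, ratios near 1 indicate neutral evolution, and ratios above 1 indicate positive selection. A small illustrative Python sketch (the tolerance band around 1 is an arbitrary display choice, not from the paper):

```python
# Ka/Ks values transcribed from the figure legend above (human PGBD1 as reference).
ka_ks = {
    "overall": 0.35, "N-terminal (aa 1-290)": 0.56, "C-terminal (aa 291-809)": 0.21,
    "SCAN (aa 40-142)": 0.32, "KRAB (aa 211-267)": 1.02,
    "DDBD1 (aa 405-541)": 0.19, "DDBD2 (aa 750-804)": 0.26,
    "catalytic domain 1 (aa 541-651)": 0.14, "catalytic domain 2 (aa 726-750)": 0.07,
}

def selection_regime(ratio: float, tolerance: float = 0.1) -> str:
    """Rule-of-thumb reading of Ka/Ks: <1 purifying, ~1 neutral, >1 positive selection."""
    if abs(ratio - 1.0) <= tolerance:
        return "approximately neutral"
    return "purifying selection" if ratio < 1.0 else "positive selection"

for domain, ratio in ka_ks.items():
    print(f"{domain}: Ka/Ks = {ratio:.2f} -> {selection_regime(ratio)}")
# Only the KRAB domain (~1.02) comes out approximately neutral, matching the legend's note.
```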
Evolution is often portrayed as a "tinkering" process, one that makes use of slight modifications to pre-existing capabilities. So how do organisms evolve brand-new structures? A new study by Dr. Zsuzsanna Izsvák from the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (Max Delbrück Center) and Professor Laurence Hurst from the Milner Centre for Evolution at the University of Bath (UK) found evidence that the evolution of a new gene underpins the evolution of a new structure found in nerve cells. They describe this unusual gene, called piggyBac Transposable Element-derived 1, or PGBD1, in the journal Molecular Biology and Evolution.

'Jumping genes' cause mutations
PGBD1 is one of five related PGBD genes that show a distinct resemblance to the piggyBac element first identified in insects—hence the name piggyBac Transposable Element-derived. The piggyBac elements are "jumping genes", also called transposons: they are able to copy themselves and to move from one location in the genome to another, sometimes introducing mutations or changing functions. piggyBac transposons arrived in our species by horizontal transfer—similar to how some viruses can integrate their genome into our DNA. However, while the piggyBac transposons have lost their ability to jump around in our DNA over time, five piggyBac Transposable Element-derived genes (PGBD1-5) have become fixed in humans. "We aimed at finding out what potentially useful function the PGBD genes might have," says Zsuzsanna Izsvák. "For this study, we focused on PGBD1." Among the five PGBD genes, PGBD1 is unique in that it has also incorporated parts of other genes, resulting in a protein with extra parts that are able to bind other proteins and to bind DNA. PGBD1 is thus a novel gene that is part human gene fragment, part inactive jumping gene.

PGBD1 regulates nerve cells and their 'protein traps'
PGBD1 is found only in mammals. It is particularly active in cells that become neurons. The researchers first investigated where the PGBD1 protein binds to DNA, observing that it glues itself in and around genes associated with nerve development. They found that PGBD1 controls nerve cell development by blocking genes expressed in mature nerve cells while keeping genes associated with the pre-nerve-cell state activated. Reducing the level of PGBD1 in pre-nerve cells caused them to start developing as nerve cells. One of the genes bound by the PGBD1 protein especially attracted their interest. NEAT1 is a strange gene that codes for an RNA which, unusually, does not then go on to make a protein. Instead, this product, a non-coding RNA, forms the backbone of physical structures called paraspeckles. These are tiny structures in the nuclei of some of our cells that act like traps for certain RNAs and proteins. The researchers found that in pre-nerve cells the PGBD1 protein binds to the NEAT1 gene and stops it from working. However, when PGBD1 levels go down, NEAT1 RNA levels go up, paraspeckles form and cells become mature nerve cells. PGBD1 has thus evolved to be a key regulator of the presence or absence of paraspeckles, and thereby a regulator of nerve cell development.

Evolution is not random tinkering
What is most intriguing, however, is that paraspeckles are, like PGBD1, also mammal-specific. PGBD1 is thus a rare example of a new gene that has evolved to regulate a new structure, albeit a rather small one. Zsuzsanna Izsvák, co-senior author from the Max Delbrück Center, says, "This is a really unusual and serendipitous discovery.
We have known that duplication of pre-existing genes can underpin the evolution of novelty, but this is a rare example of evolution doing more than just tinkering. This is a novel gene to control a novel structure." The exciting question now is whether it also plays a role in adult neurons. Co-senior author Professor Laurence Hurst of the Milner Centre for Evolution at the University of Bath adds that they "have worked out how paraspeckles are controlled; now we just need to work out how the paraspeckle itself evolved. This might be a much harder task, as non-coding RNAs like NEAT1 tend to be fast evolving and therefore hard to trace over evolutionary time." This coupling between NEAT1 and PGBD1 may also be involved in schizophrenia. While NEAT1 has previously been associated with this neurological disease, the team identified mutations in PGBD1 that they could show were also common in patients with schizophrenia—one of these mutations changes the PGBD1 protein, while others may control its level. First author Dr. Tamas Raskó, at the time of the study a postdoctoral researcher in the group of Zsuzsanna Izsvák, says that "it is surely more than coincidence that both genes are involved in schizophrenia. It is very unusual to find a mutation that changes a protein that is coupled to this disease. The effects of this mutation must be a priority for further studies."
10.1093/molbev/msac175
Earth
Waiting for the complete rupture
Luca Dal Zilio et al. Bimodal seismicity in the Himalaya controlled by fault friction and geometry, Nature Communications (2018). DOI: 10.1038/s41467-018-07874-8 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-07874-8
https://phys.org/news/2019-01-rupture.html
Abstract
There is increasing evidence that Himalayan seismicity can be bimodal: blind earthquakes (up to Mw ~ 7.8) tend to cluster in the downdip part of the seismogenic zone, whereas infrequent great earthquakes (Mw 8+) propagate up to the Himalayan frontal thrust. To explore the causes of this bimodal seismicity, we developed a two-dimensional seismic cycle model of the Nepal Himalaya. Our visco-elasto-plastic simulations reproduce important features of the earthquake cycle, including interseismic strain and a bimodal seismicity pattern. Bimodal seismicity emerges as a result of relatively higher friction and a non-planar geometry of the Main Himalayan Thrust fault. This introduces a region of large strength excess that can only be activated once enough stress is transferred upwards by blind earthquakes. This supports the view that most segments of the Himalaya might produce complete ruptures significantly larger than the 2015 Mw 7.8 Gorkha earthquake, which should be accounted for in future seismic hazard assessments.

Introduction
On 25 April 2015, an earthquake with moment magnitude Mw 7.8 struck the Nepal Himalaya 1,2,3, rupturing a 50-km-wide segment of the Main Himalayan Thrust (MHT) fault (Fig. 1a). The 2015 Gorkha earthquake has a location similar to that of the 1833 earthquake, of estimated magnitude Mw 7.6–7.7, which also caused significant damage in Kathmandu 4,5. The geometry of the MHT is relatively well known in the hypocentral region of these events from various geological and geophysical campaigns 6,7,8. In particular, geodetic data (SAR, InSAR and GPS) and the detailed locations of the Gorkha seismic sequence have provided new constraints on the geometry of the MHT 9,10. This information allows us to investigate the relation between interseismic strain and seismicity—given the MHT geometry—and to contribute to an ongoing debate on how the Himalayan wedge is deforming. Some authors claim that the location of the front of the high topography can be explained by a mid-crustal ramp along the MHT 11,12,13. Conversely, others have argued for active out-of-sequence thrusting at the front of the high Himalaya 14,15. Understanding how and where stresses build up in the Himalaya is important, because evaluating the balance between interseismic strain accumulation and the elastic strain released during seismic events could improve the seismic hazard assessment in central Nepal following the 2015 earthquake 16.

Fig. 1 Seismotectonic context, model setup and fault geometries. a, Topographic relief, coupling mode and historical seismicity. The white arrows show the long-term shortening across the arc. The interseismic coupling is shown in shades of red (ref. 48); a coupling value of 1 means the area is fully locked, while a value of 0 means fully creeping. Coloured patches indicate the inferred rupture zones since 1505 (refs. 4, 21, 22): blue patches display blind ruptures of large (Mw ≤ 7.8) earthquakes, whereas yellow patches indicate surface ruptures of great (Mw > 8) events. The black line indicates the cross-section used for the numerical model setup. b, Zoom of the initial reference setup (model EF) and temperature. The numerical setup represents the geological cross-section of the Nepal Himalaya constrained from the main shock and aftershocks of the Gorkha sequence (ref. 9). c, Additional fault geometries employed in the numerical experiments: model DF, from Duputel et al. 10, and a planar fault geometry (model PF).
It has long been noticed that seismicity in the Himalaya is bimodal 11,17,18. Partial (blind) earthquakes (up to Mw ~ 7.8) tend to cluster and repeatedly rupture the deeper portion of the MHT, whereas sporadic great earthquakes (Mw > 8) completely unzip the entire width of the seismogenic zone (Fig. 1a). The partial ruptures are generally characterised by 10–15 km focal depths, are clustered along the front of the Himalaya, and seem to occur in the vicinity of the mid-crustal ramp 11. The Mw 7.8 Gorkha earthquake is the largest known event in that category. On the other hand, paleoseismological field studies have found evidence for surface ruptures at the Himalayan frontal fault (Main Frontal Thrust, MFT), probably associated with great (Mw > 8) events 16,19,20,21,22. The 1934 Mw 8.4 Bihar–Nepal 16,22 and the 1950 Mw 8.7 Assam 23 earthquakes—the latter the largest intracontinental earthquake ever recorded—probably fall into that category. Despite these observations, the mechanism driving the bimodal behaviour remains poorly understood. One potential explanation is that the MHT consists of along-dip subsegments that rupture—either independently or jointly with neighbouring segments during larger earthquakes—with a non-periodic or even chaotic behaviour arising from stress transfers. This segmentation may be partly controlled by rheological 24 and geometrical complexities such as local non-planarity 5,25,26. There is also growing evidence that fault frictional properties are an influential and perhaps determining factor affecting the spatial extent, size and timing of megathrust ruptures 27. Dynamic simulations over multiple earthquake cycles with a linear slip-weakening friction law show that a large event that ruptures the entire fault is preceded by a number of small events with various rupture lengths 28. These results are in keeping with dynamic modelling of the seismic cycle based on rate-and-state friction, which produces partial ruptures even in the case of a planar fault with uniform frictional properties 29. However, how complete ruptures relate to partial ruptures and to the geometry and mechanical properties of the MHT has not yet been investigated quantitatively.

Here we report the use of a novel two-dimensional (2D) numerical approach (Methods section) to explore the seismic rupture pattern on the MHT over many earthquake cycles (Fig. 1b). The geometry and mechanical properties of our model are defined based on constraints from structural geology and geophysical campaigns 7 and on new insights gained from studies of the Gorkha sequence 1,5,9,10. The temperature distribution is based on a thermokinematic model derived from thermochronological and thermobarometric data 13 (Supplementary Fig. 1). The model is kinematically driven using a boundary condition that translates into a convergence rate of 38 mm year−1 across the collisional system. The reference geometry of the MHT (Fig. 1b) is inferred from Elliott et al. 9 and denoted model EF. It comprises three segments reflecting the ramp-flat-ramp geometry: a shallow, ~30°-dipping ramp between the surface and 5 km depth constrained by structural sections; a flat portion with a shallow dip; and, finally, a steeper mid-crustal ramp 30. Uncertainties regarding the geometry of the MHT still exist, and relatively gentle variations in geometry have also been proposed 10.
We therefore also perform numerical experiments considering this alternative, smoother fault model (model DF; Fig. 1c and Supplementary Fig. 2). To test the sensitivity of the model to the fault geometry, we consider a simple planar fault as well (model PF; Fig. 1c and Supplementary Fig. 2). For each of the three fault geometries adopted, we execute a parameter study of the fault frictional properties by testing values of the effective static fault friction (that is, static friction (μs) including the effect of pore-fluid pressure: μeff = μs(1 − λ)) between 0.06 and 0.2 (Supplementary Table 1). This range is consistent with a compilation of previously published data 31. A detailed description of the numerical technique, model setup, modelling procedure and limitations is given in the Methods section.

Results
Consistency with interseismic deformation
An important goal in Himalayan studies over the past decades has been to refine the Himalayan convergence rate 32,33, because this controls the productivity of Himalayan earthquakes 31,34. We emulate the observed velocity field by imposing a convergence rate of 38 mm year−1. The model produces about 19–20 mm year−1 of convergence across the Himalaya, a value consistent with the long-term geological rate, while the residual convergence is dissipated by deformation distributed outside the domain shown in Fig. 1b. The model fits the geodetic measurements of interseismic strain remarkably well (Fig. 2a). All three fault geometries yield predictions in good agreement with uplift rates measured by spirit-levelling 35 and InSAR 36, and with horizontal velocities measured by GPS 8 (Supplementary Fig. 3). However, we note that model EF agrees particularly well with the data, in terms of both horizontal and vertical velocities.

Fig. 2 Interseismic behaviour computed in the 2D model. a, Observed vs. synthetic present-day velocity fields. The observed field (error bars) is shown as blue (horizontal GPS 8) and violet (spirit-levelling 35) bars, respectively. Solid lines show the corresponding horizontal and vertical model predictions. b, Elastic strain regime across the Himalaya inferred over an interseismic period of 350 years, and orientation of the principal compressional axes (blue bars). Histograms in c and d show the vertical and horizontal off-megathrust faulting distributions, respectively.

The mid-crustal ramp operates as a geometric asperity during interseismic periods, where elastic strain builds up and accounts for as much as two-thirds of the convergence rate (Fig. 2b). At greater depth, the higher temperature favours the transition from frictionally unstable velocity-weakening behaviour to stable (velocity-strengthening) visco-plastic creep (Supplementary Fig. 4). Visco-plastic strain rates delineate a sub-horizontal shear zone in the middle-lower crust, which corresponds to aseismic creep along the MHT. Distributed viscous deformation also occurs in the vicinity of the kink along the MHT ramp-flat geometry (Fig. 2b). Another constraint on the simulated tectonic deformation comes from the off-megathrust events. The model shows that anelastic strain off the MHT tends to cluster beneath the topographic front of the Higher Himalaya (Fig. 2c, d). In fact, most of these events concentrate in a narrow zone near the edge of the mid-crustal ramp, which correlates well with the microseismicity observed over the past decades 11.
This off-megathrust earthquake activity also shows a cutoff beneath the Higher Himalaya, which corresponds to the region where viscous deformation is dominant and the axes of the principal compressional stresses (σ1) become (sub-)vertical.

Bimodal earthquake behaviour of the reference model
Despite the 2D limitations, the reference model produces a rich earthquake behaviour, similar to that of natural faults. The spatiotemporal evolution of slip velocity in the reference model shows how coseismic slip events are released on the MHT fault (Fig. 3a). Although the whole seismogenic zone is nearly fully locked interseismically, most of the simulated earthquakes nucleate and propagate only along the lower edge of the locked Main Himalayan Thrust, whereas only a few events unzip the whole flat-and-ramp system. The largest events tend to have similar sizes and recur quasi-periodically every ~1,250 years. Between them, a range of smaller events occurs, which release only a small fraction of the accumulated strain energy. Using a rupture width–moment magnitude scaling law 37, the moment magnitude of partial ruptures is estimated at Mw ~ 7.4–7.8 (Fig. 3b). Such a cluster of differently sized partial ruptures leads up to a final complete failure of the MHT. These complete ruptures are the largest events, with an estimated moment magnitude on the order of Mw ~ 8.3–8.4 (Fig. 3b).

Fig. 3 Megathrust behaviour computed in the 2D model (EF) over 10,000 years. a, Spatiotemporal evolution of slip on the MHT for the reference model. Red lines show slip during the simulated earthquakes. Note that hypocentres (black circles) are typically located at the lower edge of the flat segment, just before the mid-crustal ramp. b, Time evolution of downdip rupture width; the colorbar indicates the corresponding moment magnitude. c, d, Along-megathrust profiles of initial (c) and final (d) stress vs. strength for the partial rupture event E9. e, Contours of accumulated coseismic slip throughout event E9. f, g, Along-megathrust profiles of initial (f) and final (g) stress vs. strength for the complete rupture event E18. h, Contours of accumulated coseismic slip throughout event E18.

To investigate the physical mechanism behind this behaviour, we analyse the spatiotemporal evolution of the stress and yield strength on the MHT. For example, event E9 (Fig. 3a) ruptures only the lower edge of the seismogenic zone, while the later event E18 is capable of propagating up to the surface. Our analysis indicates that the partial rupture event E9 nucleates close to the downdip limit of the seismogenic zone, before the mid-crustal ramp, where the stress build-up due to tectonic loading is fastest (Fig. 3c). The rupture propagation causes a local stress drop (Fig. 3d), unzipping only part of the seismogenic zone, as the rupture is stopped by a large initial strength excess—that is, the difference between stress and yield strength. For this event, we further estimate the slip resulting from the rupture. Our results indicate that event E9 produces ~5–6 m of coseismic slip (Fig. 3e), mainly on the deeper flat portion of the MHT, between 10 and 15 km depth. When only the downdip edge of the locked zone is unzipped, stress is transferred to the neighbouring updip region by static stress transfer. The next downdip event then nucleates sooner than expected from the average recurrence periods, with this new rupture being generally larger than the previous one.
This occurs because the strength excess decreases in the frontal part of the MHT as a result of the stress transfer and the ongoing tectonic loading. Consequently, partial ruptures contribute significantly to building up stresses to a critical level at the updip limit of the MHT, as for example before event E18 (Fig. 3f). Once the strength excess is low throughout the MHT, a complete event eventually propagates through the whole ramp-flat-ramp fault system and leads to a large stress drop (Fig. 3g). These complete ruptures result in slip larger than 8 m (Fig. 3h), which is consistent with estimates from paleoseismic investigations 19,20,38. A new cycle of partial ruptures then begins, with an initial period of quiescence or small-event activity (Fig. 3). This is exactly what our model shows in Fig. 3: the temporal evolution of the MHT displays a regime dominated by bimodal seismicity. Notably, rupture events are triggered by stress build-up near the downdip end of the locked fault zone, as observed in nature 39. The model also reproduces a realistic earthquake sequence of main shocks with irregular moment magnitudes, including events similar to the 2015 Gorkha earthquake. A simulation example is shown in Supplementary Movie 1. This bimodal pattern, in which partial ruptures with large strength excess and low stress drop lead to infrequent complete ruptures with low strength excess and high stress drop, is also observed when analysing the non-dimensional stress (S) parameter, that is, the ratio between the average strength excess before an event and the average coseismic static stress drop 40,41 (Fig. 4a–c). In terms of this S parameter, complete ruptures have relatively low values, while partial ruptures have relatively high values. When studying the kinematic slip evolution of each event, we observe both pulse- and crack-like ruptures (Supplementary Fig. 5). In a pulse-like mode, the local slip duration—also known as rise time—is much shorter than the total duration of the event 42. By contrast, shear cracks have an extended slip duration, even after the rupture has reached the surface, in which the rise time scales with the final rupture width. Our models thus show that slip pulses and shear cracks coexist along the same interface. They occur at distinct points in time in relation to the stress state of the interface as defined by the S parameter. A large strength excess prior to the event leads to partial ruptures with relatively high S parameter that are pulse-like. These self-healing pulses could be the result of a strongly slip-rate-dependent friction formulation that rapidly heals the fault at low characteristic slip velocities 43. Conversely, relatively low S parameters are observed for complete ruptures, which are crack-like. The hypothesis of a different rupture style for each event type is also supported by the identified mechanism of recurrent updip stress transfer toward a critical level 41,44, combined with results from dynamic rupture simulations that relate the fault stress state (S parameter) to rupture styles 45. However, owing to the low temporal resolution (Δt = 1 year) and the missing wave-mediated stress transfer, we cannot be sure that the actual stress states on the MHT are such that both crack- and pulse-like ruptures are feasible. In spite of these potential pitfalls, our models show features similar to those observed during the Gorkha earthquake.
In particular, the 2015 main shock was a pulse-like rupture, with slip on any given portion of the fault occurring over a short fraction (~10 s) of the total ~70 s duration of the earthquake source 2 . Fig. 4 Impact of the three fault geometries on the rupture patterns. Relationship between the S parameter and rupture width for models adopting a realistic ramp-flat-ramp fault geometry inferred from Elliott et al. 9 ( a ) and Duputel et al. 10 ( b ), which also indicates the dominance of different rupture styles (pulse- vs. crack-like ruptures), and a planar fault geometry ( c ). d – f , Along-megathrust profiles of the average stress vs. strength for the three fault geometries adopted: models EF ( d ), DF ( e ) and PF ( f ). Our simulations also indicate that, for each partial rupture, the S parameter is generally higher than in dynamic rupture simulations 45 . This results from the large amount of slip velocity-induced weakening ( γ ; see Methods section), which is motivated by laboratory experiments at coseismic slip rates 71 . Full size image A particular feature of the Himalayan wedge is the seismic–aseismic transition zone, which seems to coincide with the mid-crustal ramp beneath the front of the high Himalaya 8 , 11 . However, the feedback between the geometry and the rheological behaviour of the mid-crustal ramp is difficult to ascertain on the basis of natural observations alone. When a rupture occurs in our simulations, it generally expands upwards from the locked edge, but not much downwards. This occurs because the zone of aseismic slip acts as an efficient barrier to downdip propagation of ruptures. This feature emerges self-consistently in our models as an effect of the temperature increase with depth, which in turn decreases the viscosity of rocks. Also, our models show that all hypocentre locations fall in a narrow zone near the edge of the mid-crustal ramp (Fig. 3a ), indicating a pivotal role of this crustal asperity in localising the strain both on and off the megathrust (Fig. 2b ). Thus, our results suggest that both the geometric-structural and the thermal-rheological strength of the mid-crustal ramp control the downdip rupture width on the MHT. Effect of fault friction and geometry on seismic ruptures In our simulations we identify the frictional properties and geometry of the MHT as key parameters that influence the emergence of the observed bimodal earthquake pattern. To examine the role of the three MHT fault geometries considered in this study, we first analyse the relation between the S parameter and the rupture width of all events when a bimodal seismicity pattern is observed ( μ s = 0.16 and γ = 0.7; Fig. 4a–c ). Results from the reference model EF (Fig. 4a ) indicate that the S parameter decreases with increasing rupture width. Most importantly, we find that this ramp-flat-ramp geometry results in a rupture-width gap between 60–65 km and 90–95 km. A very similar trend is also observed in model DF (Fig. 4b ). Pulse-like partial ruptures are confined to a critical width of 60–65 km, whereas large crack-like events propagate through the whole seismogenic zone. Consequently, models EF and DF result in a bimodal distribution of rupture widths. On the other hand, results from the simple planar fault (Fig. 4c ) indicate that the S parameter decreases linearly with increasing rupture width. This means that the larger the event, the larger the stress release and the lower the resulting S parameter.
Although this model displays a wide spectrum of rupture widths, the general pattern does not indicate any bimodal distribution. We then analyse the average downdip stress vs. strength distribution for the three fault geometries adopted (Fig. 4d–f ). In general, these profiles suggest that the steeper the fault dip in the updip region of the MHT, the higher the pressure-dependent fault strength. This, together with a relatively higher fault friction, increases the fault strength even further. Consequently, the strength excess also increases, and a higher pre-stress is thereby necessary to reach a critical level at which a crack-like event eventually ruptures the entire megathrust. As in the case of model EF (Fig. 4d ), and even more clearly in model DF (Fig. 4e ), the strength excess in the shallower region of the MHT is notably high. This behaviour arises because, when the model accounts for a ramp-and-flat fault geometry, the far-field tectonic loading is not fast enough to bring the pre-stress up to a critical state at the upper edge of the MHT. Most of the simulated earthquakes are thus capable of rupturing only a fraction of the seismogenic zone. The static stress distribution left over from these previous partial ruptures then contributes significantly to increasing the stress state at the updip limit of the MHT. On the other hand, the planar fault geometry (model PF) maintains a relatively low strength excess throughout the seismogenic zone (Fig. 4f ), thereby allowing the propagation of frequent complete ruptures. We then explore the effect of the static fault friction ( μ s ), and of the maximum friction drop from the static to the dynamic friction coefficient ( γ = 1 − μ d / μ s ), on the resulting bimodal pattern of large earthquakes. Our model produces distinctly different rupture patterns within a narrow range of frictional parameters (Fig. 5 ). In fact, an increase of both the static fault friction and the friction drop leads to an increase of the number of events per cycle (Fig. 5a ), the average recurrence interval between the largest events (Fig. 5b ), and the median S parameter values (Fig. 5c ). As illustrated in Fig. 5 , this corresponds to a transition from ordinary (unimodal) cycles to irregular cycles, which display a bimodal seismicity (see also Supplementary Fig. 6 ). In contrast, the spatiotemporal evolution of the model with a lower static fault friction ( μ s = 0.1) shows a more ordinary recurrence pattern of quasi-periodic large events (Supplementary Fig. 7 ). These events mostly nucleate near the edge of the mid-crustal ramp, propagate both up- and downwards, and typically activate the whole flat-and-ramp system. These ruptures break the entire locked zone of the MHT in a crack-like style and lead to significant stress drops. Consequently, this model is associated with a low median S value (Fig. 5c ). Thus, these results indicate that the maximum friction drop ( γ ) can significantly affect the recurrence interval of complete ruptures (Fig. 5b ) and the average S parameter (Fig. 5c ), but it cannot prevent the genesis of bimodal seismicity for relatively higher fault friction ( μ s ⩾ 0.16). Fig. 5 Effect of frictional properties on the seismic behaviour of model EF. a , Average number of events per cycle. b , Recurrence time of complete ruptures (Mw > 8 events). c , Median of the S parameter.
Dashed black lines indicate the transition from ordinary cycles to irregular cycles (bimodal seismicity). Full size image Discussion Our simulations show that it is probably incorrect to assume that earthquakes known to have occurred along the Himalayan front over history 46 are representative of the greatest possible earthquakes. Along many segments, the large historical events probably represent partial ruptures with magnitudes significantly lower than those that complete ruptures would produce. In our model, the same segment of the MHT can in principle produce a sequence of partial ruptures similar to the Gorkha earthquake and occasionally much larger events, similar to the 1934 Mw 8.4 event or even larger. This is confirmed by moment conservation calculations at the scale of the Himalayan arc, which require Mw ~ 9 earthquakes with a return period of 800–1000 years 47 . Our models indicate that a great earthquake (Mw > 8) can occur at the same location as a Mw ≤ 7.8 earthquake, and that it may strike sooner than would be anticipated from considerations of renewal time based on plate convergence rates. While we cannot rule out the plausible presence of along-strike heterogeneities given the lack of the third dimension, our models show that the combined effects of fault geometry and frictional properties in controlling the along-dip bimodal behaviour of the MHT could potentially hold for the entire Himalayan arc. In support of this claim, recent patterns of interseismic coupling on the MHT along the entire Himalayan arc do not indicate any aseismic barrier that could affect the seismic segmentation of the arc and limit the along-strike propagation of seismic ruptures 48 (Fig. 1a ). For a finite range of static fault friction ( μ s = 0.06–0.2), our model exhibits a large spectrum (250–1500 years) of recurrence times of great earthquakes. It also shows that an indication of the temporal proximity of such a Mw > 8 earthquake can come from the maximum updip limit of the prior partial earthquake, which points to a likely critically stressed MHT (Fig. 3a ). Our results indicate that an average recurrence time of ~600 years requires coseismic slip of 8–10 m to release the elastic strain accumulated during such interseismic periods. However, partial ruptures account only for an average slip of 4–6 m, in agreement with the average slip of moderate (Mw ≤ 7.8) Himalayan earthquakes such as the Gorkha earthquake 9 . Finally, it appears that the static stress change due to partial ruptures is the major factor introducing irregularity into the seismic cycle. This is the main reason why the model obeys neither slip- nor time-predictable behaviour at any given point on the fault (Supplementary Fig. 8 ): it does not incorporate a fixed threshold shear stress for slip to occur, because after each earthquake the stress on the ruptured area drops to a low level, approximately determined by the rate-dependent friction formulation evaluated at the coseismic slip rate. To conclude, this seismo-thermo-mechanical model constrained by observations provides physical explanations for the behaviour of the seismic cycle in the Himalaya. It shows that the frictional properties and non-planar geometry of the MHT control a variety of phenomena, such as the bimodal seismicity, the relative persistence of along-dip variations of seismic ruptures, and the variable recurrence time of large (Mw ≤ 7.8) and great (Mw > 8) earthquakes.
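To make the event classification described above concrete, the following minimal Python sketch computes the S parameter from fault-averaged pre-event strength, pre-event stress and post-event stress, and separates partial from complete ruptures using the rupture-width gap reported for models EF and DF. The input values and the 65 km cutoff are illustrative assumptions, not output from the actual simulations.

```python
import numpy as np

def s_parameter(strength_pre, stress_pre, stress_post):
    """S = (average pre-event strength excess) / (average coseismic static stress drop)."""
    strength_excess = strength_pre - stress_pre   # how far below yield the fault sits
    stress_drop = stress_pre - stress_post        # static stress drop of the event
    return strength_excess / stress_drop

# Two illustrative events (values loosely echo the trends in Figs 3 and 4, in Pa):
# a partial rupture with a large strength excess, and a complete rupture
# that starts from a near-critical pre-stress.
events = {
    "partial (W = 55 km)":   (40e6, 25e6, 20e6),
    "complete (W = 110 km)": (40e6, 37e6, 22e6),
}
for name, (strength, pre, post) in events.items():
    print(f"{name}: S = {s_parameter(strength, pre, post):.2f}")
```

With these toy numbers the partial rupture yields S = 3.00 and the complete rupture S = 0.20, reproducing the high-S (pulse-like) versus low-S (crack-like) dichotomy discussed in the text.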
Based on our numerical experiments, we postulate that large crack-like earthquakes on the MHT may release stress inherited from former pulse-like partial ruptures. These very large events account for the bulk of the deformation that is transferred to the most frontal structures in the Sub-Himalaya. If this scenario is in fact correct, it has consequences for the assessment of seismic moment where only rupture length and surface slip are known, as is the case for all paleoseismic ruptures inferred from slip on the MFT 16 , 19 , 20 , 22 . Because a heterogeneous strain condition is likely to prevail throughout the Himalaya, our results may provide an answer to the long-standing difficulties in explaining the source of the stored stresses needed to drive large (>8–10 m) paleoseismic surface ruptures recorded on the MFT 20 , 49 . The risk of such extreme earthquakes may have been underestimated because, given their millennial return periods 47 , evidence for these events might exist only in geological records. Seismicity catalogues might also give the false impression that they include the largest possible earthquakes. This might be related to the magnitude gap separating Mw 8+ events from the dominant mode formed by the smaller partial ruptures, which make up the bulk of the seismicity. In light of our modelling results, the updip arrest of the 2015 Gorkha earthquake calls for special attention. That fault patch, updip of the Gorkha rupture, stayed locked in the post-seismic period 50 . The stress level was increased by the Gorkha earthquake 2 , making that patch more likely to fail in the next large rupture of the MHT in that area. The nearly 800-km-long stretch between the 1833/2015 ruptures and the 1905 Mw 7.8 Kangra earthquake is also a well-identified seismic gap with no large earthquake for over 500 years 1 . The MHT is clearly locked there 8 , 48 and its deficit of slip may exceed ~10 m. The last large earthquake in that area occurred in 1505, and could have exceeded Mw 8.5 51 . These factors make this area a prime location for a future complete rupture of the MHT. Continued geodetic monitoring of the Himalayan arc in the coming years will help to provide new constraints and to test these speculations. Methods Seismo-thermo-mechanical methodology The 2D seismo-thermo-mechanical (STM) code uses an implicit, conservative finite difference scheme on a fully staggered Eulerian grid in combination with a Lagrangian marker-in-cell technique 52 . The code solves for the conservation of mass (for an incompressible material), momentum and energy. The advection of transport properties, including viscosity, plastic strain and temperature, is performed through the displacement of Lagrangian markers.
The following three mechanical equations are solved to obtain the horizontal and vertical velocities, v x and v z , and the pressure P (defined as the mean stress): $$\frac{\partial v_x}{\partial x} + \frac{\partial v_z}{\partial z} = 0$$ (1) $$\frac{\partial \sigma'_{xx}}{\partial x} + \frac{\partial \sigma'_{xz}}{\partial z} - \frac{\partial P}{\partial x} = \rho \frac{\mathrm{D}v_x}{\mathrm{D}t}$$ (2) $$\frac{\partial \sigma'_{zz}}{\partial z} + \frac{\partial \sigma'_{zx}}{\partial x} - \frac{\partial P}{\partial z} = \rho \frac{\mathrm{D}v_z}{\mathrm{D}t} - \rho g$$ (3) where ρ is density, \( \sigma'_{ij} \) are the deviatoric stress tensor components, and g = 9.81 m s −2 is the vertical component of the gravitational acceleration. The momentum equations include the inertial term to stabilise high coseismic slip rates at low time steps. A time step of 1 year, however, reduces our formulation to a virtually quasi-static one. Ruptures during the resulting events hence represent the occurrence of rapid threshold-exceeding slip during which permanent displacement and stress drop occur along a localised interface. The energy equation describes the balance of heat in a continuous medium and relates temperature changes to internal heat generation as well as to advective and conductive heat transport 53 . The Lagrangian form of the energy equation solves for the temperature T : $$\rho C_{\mathrm{p}}\frac{\mathrm{D}T}{\mathrm{D}t} = \frac{\partial}{\partial x_i}\left(k\frac{\partial T}{\partial x_i}\right) + H_{\mathrm{r}} + H_{\mathrm{s}}$$ (4) where C p is the isobaric heat capacity, k is the thermal conductivity, H r is the radioactive heat production and H s is the shear heat production during non-elastic deformation, as follows: $$H_{\mathrm{r}} = \mathrm{const.}$$ (5) $$H_{\mathrm{s}} = \sigma'_{ij}\,\dot{\varepsilon}_{ij,\mathrm{vp}}$$ (6) where H r takes a constant value for each rock type and \( \dot{\varepsilon}_{ij,\mathrm{vp}} \) is the visco-plastic component of the deviatoric strain rate tensor. Rheological model The fundamental Eqs. ( 1 )–( 4 ) are solved using constitutive relations that relate deviatoric stresses and strain rates in a nonlinear visco-elasto-plastic manner: $$\dot{\varepsilon}_{ij} = \frac{1}{2G}\frac{\mathrm{D}\sigma'_{ij}}{\mathrm{D}t} + \frac{1}{2\eta}\sigma'_{ij} + \begin{cases} 0 & \mathrm{for}\ \sigma'_{\mathrm{II}} < \sigma_{\mathrm{yield}} \\ \chi \dfrac{\partial \sigma'_{\mathrm{II}}}{\partial \sigma'_{ij}} = \chi \dfrac{\sigma'_{ij}}{2\sigma'_{\mathrm{II}}} & \mathrm{for}\ \sigma'_{\mathrm{II}} = \sigma_{\mathrm{yield}} \end{cases}$$ (7) where G is the shear modulus and η is the effective viscosity. \( \mathrm{D}\sigma'_{ij}/\mathrm{D}t \) is the objective co-rotational time derivative solved using a time-explicit scheme 53 , \( \sigma'_{\mathrm{II}} = \sqrt{\sigma'^{2}_{xx} + \sigma'^{2}_{xz}} \) is the second invariant of the deviatoric stress tensor, and χ is a plastic multiplier connecting plastic strain rates and stresses.
Introducing a visco-plastic viscosity ( η vp ), we can rewrite Eq. ( 7 ) as: $$\sigma'_{ij} = 2\eta_{\mathrm{vp}} Z \dot{\varepsilon}_{ij} + \sigma_{ij}(1 - Z)$$ (8) where Z is the visco-elasticity factor: $$Z = \frac{G\,\Delta t_{\mathrm{comp}}}{G\,\Delta t_{\mathrm{comp}} + \eta_{\mathrm{vp}}}$$ (9) where Δ t comp is the computational time step. The plastic behaviour is taken into account by assuming a non-associative Drucker–Prager yield criterion 54 . Plastic flow is evaluated at each Lagrangian marker if \( \sigma'_{\mathrm{II}} \) reaches the local pressure-dependent yield strength σ yield : $$\sigma_{\mathrm{yield}} = C + \mu_{\mathrm{eff}}\,P$$ (10) where C is the cohesion. An important component in the yield criterion is the friction coefficient. Following the approach in van Dinther et al. 55 , we apply a strongly rate-dependent friction formulation 56 , in which the effective friction coefficient μ eff depends on the visco-plastic slip velocity V = ( σ yield / η m )Δ x , in which η m is the local viscosity from the previous time step and Δ x is the Eulerian grid size: $$\mu_{\mathrm{eff}} = \mu_{\mathrm{s}}(1 - \gamma) + \mu_{\mathrm{s}}\frac{\gamma}{1 + \frac{V}{V_{\mathrm{c}}}}$$ (11) $$\gamma = 1 - \left(\mu_{\mathrm{d}}/\mu_{\mathrm{s}}\right)$$ (12) where μ s and μ d are the static and dynamic friction coefficients, respectively, V c is the characteristic velocity, namely the velocity at which half of the friction change has occurred, and γ represents the amount of slip velocity-induced weakening if γ = 1 − ( μ d / μ s ) is positive, or strengthening if γ is negative. When the plastic yielding condition is locally reached, we require a constant second invariant of deviatoric stresses (assuming the absence of elastic deformation): $$\mathrm{If}\ \sigma'_{\mathrm{II}} = \sigma_{\mathrm{yield}}: \left\{ \frac{\mathrm{D}\sigma'_{\mathrm{II}}}{\mathrm{D}t} = 0, \quad \dot{\varepsilon}^{\,\mathrm{elastic}}_{ij} = 0 \right\}$$ (13) and the stress components are then similarly (that is, isotropically) corrected so that $$\sigma'_{ij} = \sigma'_{ij} \cdot \frac{\sigma_{\mathrm{yield}}}{\sigma'_{\mathrm{II}}}.$$ (14) Accordingly, the local viscosity-like parameter η vp decreases to weaken the material and to localise deformation: $$\eta_{\mathrm{vp}} = \eta \frac{\sigma'_{\mathrm{II}}}{\eta \chi + \sigma'_{\mathrm{II}}}$$ (15) where $$\chi = 2\left(\dot{\varepsilon}_{\mathrm{II}} - \dot{\varepsilon}^{\,\mathrm{viscous}}_{\mathrm{II}}\right) = 2\left(\dot{\varepsilon}_{\mathrm{II}} - \frac{1}{2\eta}\sigma'_{\mathrm{II}}\right)$$ (16) and $$\dot{\varepsilon}_{\mathrm{II}} = \sqrt{\dot{\varepsilon}^{2}_{xx} + \dot{\varepsilon}^{2}_{xz}}.$$ (17) Finally, the visco-plastic viscosity η vp is corrected during plastic deformation: $$\eta_{\mathrm{vp}} = \frac{\sigma_{\mathrm{yield}}}{2\dot{\varepsilon}_{\mathrm{II}}}.$$ (18) On the other hand, if the plastic yielding condition is not satisfied, the material is under elastic and/or viscous deformation (that is, diffusion and/or dislocation creep), and therefore η vp = η . Model setup and boundary conditions The initial 2D model setup consists of a 1000 × 250 km computational domain (Supplementary Fig. 1 ). The visco-elasto-plastic thermomechanical parameters of the model lithologies are based on a range of laboratory experiments and are listed in Supplementary Table 1 . The models use a grid resolution of 1491 × 404 nodes with variable grid spacing. This allows a high resolution of 200 m in the area subjected to the largest deformation. More than 35 million Lagrangian markers carrying material properties were used in each experiment. Velocity boundary conditions are free slip, with the exception of the permeable lower boundary, along which infinity-like external free-slip and external constant-temperature conditions are imposed, implying that free slip and a constant temperature are satisfied 1000 km below the bottom of the model 57 . The free-surface boundary condition atop the crust is implemented by using a ‘sticky-air’ layer 58 with low density (1 kg m −3 ) and viscosity (10 17 Pa s). A prescribed convergence rate of 38 mm year −1 is imposed on the left boundary, as inferred from several GPS campaigns 59 . This allows the subducting Indian plate to converge underneath the Asian upper plate. The model accounts for shear heating (see Methods, Eq. ( 6 )) and for solid–solid phase transitions, such as the process of eclogitization, which has been shown to be an important component of the overall buoyancy budget of the underthrusting Indian lower crust 60 . These phase transitions are parameterised as a function of thermodynamic state variables ( P , T , V ) and composition by using polynomials to interpolate the reaction boundary 61 . Initial geometry Our proposed initial geometry of the India–Asia collision is based on crustal data 62 and geophysical constraints 7 , 63 , 64 , and is also consistent with geomorphic and geologic structural constraints 6 , 8 , 9 , 31 , 60 , 65 . The initial setup consists of a ~600-km-long Indian plate (on the left, Supplementary Fig. 1 ) underthrusting the fixed overriding Asian plate (on the right) and the Himalayan wedge. The Indian upper crust is made of a thick layer of sediments (4-km thick) and crystalline rocks (16-km thick), overlying a 16-km-thick lower crust and a 65-km-thick lithospheric mantle. On the other side, the Eurasian plate is made of a very thick upper crust (36-km thick) and lower crust (30-km thick) owing to crustal shortening and thickening. The tectonic architecture of the Himalayan wedge is instead more complicated, since it represents the impingement of the Indian continental margin on the Eurasian plate. The Himalayan fold-and-thrust belt is here divided into four lithotectonic units: the Sub-Himalaya, the Lesser Himalaya, the Higher Himalaya and the Tibetan Plateau. The geometry of the MHT across the Himalaya has been derived from geophysical observations and surface geology. The frontal part of the fault system is well constrained from balanced cross-sections across the Siwalik fold-and-thrust belt and some gravity and magnetotelluric data 6 , 66 . These data show that the MFT, the MBT and the intervening thrust faults all root into a 5–7-km-deep décollement at the top of the underthrusted Indian basement. The décollement probably extends beneath the front of the Higher Himalaya, as indicated by a zone of high conductivity that has been interpreted as sediments dragged along the décollement 65 . This décollement constitutes the sole of the Himalayan wedge and is usually called the MHT fault.
The décollement extends beneath the Lesser Himalaya at depths of 7–12 km, and connects with a mid-crustal ramp (12–22 km depth) beneath southern Tibet. This mid-crustal ramp has been proposed in a number of previous studies 30 , 39 , 67 and is in agreement with recent slip inversions after the Gorkha event 9 . This deeper ramp then roots into a shallow north-dipping shear zone of aseismic deformation, coinciding with the deeper portion of the MHT imaged by various seismological experiments 7 , 63 . This complicated 2D geometry is then transferred into the computational domain by using GeoMAC (Geo-Mapping And Converting tool), a C-based tool that allows for resampling and converting drawings into material properties. Hence, GeoMAC converts 2D vector drawings based on geological profiles, seismic data, tomography models and other geophysical constraints, by assigning material phases to Lagrangian markers inside the large-scale tectonic shapes. Friction properties on the Main Himalayan Thrust Several constraints indicate that the friction on the MHT is low 31 . For instance, recent thermometric and thermochronological data from central Nepal suggest that the shortening across the range has been taken up primarily by thrusting along the Main Himalayan Thrust fault, with negligible internal shortening of the Himalayan wedge 13 . These data thus suggest that the friction along the MHT is ~0.07, which is in agreement with the observed pattern of erosion and the present morphology of the Himalayan range 12 , 31 . Also, given the slope of the Himalayan wedge and the dip angle of the MHT beneath the Lesser Himalaya, the effective basal friction on the flat portion of the MHT is inferred to be smaller than 0.12 30 . This is in keeping with previous numerical models, which require a basal effective friction <0.13. This value is consistent with the previous analysis of Davis et al. 68 , which, considering the whole Himalayan wedge (which has a steeper slope), results in a larger effective basal friction of 0.25. Another constraint comes from the relationship between the topographic elevation and the cutoff in the microseismic activity, which is used to infer the shear stress on the fault. Since the décollement lies at a depth of ~10 km and the inferred shear stresses cannot exceed 35 MPa, this corresponds to a friction of ~0.1 39 . Temperature distribution The thermal structure of both the Indian shield and Asia (Supplementary Fig. 1c ) is calculated from the steady-state continental geotherm 69 . The initial thermal gradient is set using values of 273 K at the top and 1617 K at the bottom of the lithosphere, whereas the gradient within the mantle is quasi-adiabatic (0.5 K km −1 ). The thermal structure within central Nepal is computed using the thermokinematic model proposed in Bollinger et al. 38 and updated in Herman et al. 13 . The formal inversion suggests a radiogenic heat production of 2.2 μW m −3 for the Higher and Lesser Himalayan units and a value of 0.8 μW m −3 for the middle and lower crust of India 13 , which are used for the corresponding lithologies of our numerical model. This thermokinematic model was obtained from a formal inversion of the large data set of thermochronological and thermobarometric data available from central Nepal. The topography is assumed to be steady state. The model fits the data and shows a transition from brittle to ductile rheologies at a temperature of about 400–450 °C, corresponding to about 18-km depth on the Main Himalayan Thrust.
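As a worked illustration of the strongly rate-dependent friction formulation of Eqs. (11) and (12), the short Python sketch below evaluates μ eff over a range of slip velocities using the reference-model values μ s = 0.16 and γ = 0.7 quoted in the main text. The characteristic velocity V c is an assumed placeholder, since its calibrated value is not given here.

```python
# Minimal sketch of Eqs. (11)-(12): mu_eff drops from the static towards the
# dynamic friction coefficient as the visco-plastic slip velocity V increases.

def effective_friction(V, mu_s=0.16, gamma=0.7, V_c=2.0e-9):
    """mu_eff = mu_s*(1 - gamma) + mu_s*gamma / (1 + V/V_c); V_c is assumed."""
    return mu_s * (1.0 - gamma) + mu_s * gamma / (1.0 + V / V_c)

# At interseismic creep rates mu_eff stays near the static value mu_s;
# at coseismic slip rates it approaches mu_d = mu_s*(1 - gamma) ~ 0.048.
for V in [1e-12, 1e-9, 1e-3, 1.0]:     # slip velocities in m/s
    print(f"V = {V:8.0e} m/s  ->  mu_eff = {effective_friction(V):.3f}")
```

The half-drop in friction occurs at V = V c by construction, which is why V c is described in the text as the velocity at which half of the friction change has occurred.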
Modelling procedure The STM modelling approach adopted for this study comprises two stages. Prior to the first modelling stage, we define the initial geometry, rock properties, temperature distribution and boundary conditions. During the first stage, which uses a time step of 100 years, the stress builds up and all the physical properties iterate towards an isostatic equilibrium. Stress build-up occurs as strain accumulates. Differential loading due to rheological discontinuities, tectonic asperities, and the temperature and viscosity distribution causes heterogeneous stress localisation in the Himalayan wedge (Fig. 2 ). In the second modelling stage, the time step progressively decreases to approach a final value of 1 year, while the inertial term and the rate-dependent friction are activated. During the seismic cycle, when the maximum strength is locally reached, the instability is fed through the feedback of decreasing viscosities. This increases slip velocities, which decreases the slip rate-dependent friction and strength, and in turn decreases viscosities even further. Spontaneous rupture propagation occurs because stresses increase ahead of the rupture front to balance the dropping stresses within the rupture and thereby maintain a static equilibrium. Finally, healing of strength occurs as slip velocities decrease. Modelling limitations General limitations of our modelling approach are discussed in previous studies, in which STM models have been applied to subduction 55 and collision zones 70 . In nature, earthquake ruptures occur within a three-dimensional, geometrically complex fault system with various scales of downdip and along-strike variations in its seismogenic behaviour. The lateral, third dimension is absent in our numerical model. Our two-dimensional plane-strain model therefore ignores lateral variations in interseismic stress build-up and rupture propagation. Compared to nature, our model produces unrealistically long seismic events because of the large time step (i.e., 1 year). This means that a simulated event or earthquake refers to the occurrence of rapid threshold-exceeding slip during which permanent displacement and stress drop occur along a localised interface. On the other hand, the presented results generally demonstrate a satisfactory agreement with a wide range of long- and short-term natural observations in the Himalaya. Events in our numerical model have reasonable downdip extents, similar behaviour and comparable surface displacement. However, it is important to stress that our incompressible inertial formulation does not account for the full inertial dynamics. Also, we acknowledge that modelling pressure and shear waves might impact part of our results. In spite of these limitations, our simulations mostly cover new ground, as yet unexplored, not only as far as the bimodal seismicity is concerned, but also in relating short-term seismic processes to long-term interseismic deformation. Code availability Computer code used within the manuscript and its Supplementary Information are available from the corresponding author upon reasonable request. Data availability Data within the manuscript and its Supplementary Information are available from the corresponding author upon reasonable request.
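As a footnote to the modelling procedure described above, the feedback of loading, threshold-exceeding slip and stress drop can be caricatured with a zero-dimensional spring-slider analogue in Python. This is only a hedged sketch: a single slider necessarily produces periodic, unimodal cycles, whereas the bimodal behaviour of the full model requires the spatially extended, non-planar fault. All parameter values are illustrative, not the calibrated model values.

```python
# Spring-slider caricature of the second modelling stage: slow tectonic
# loading at a 1-year time step, a yield check against the static friction,
# and a stress drop to the dynamic level once slip is triggered.

k = 1.0e4          # loading rate in Pa per year (illustrative)
sigma_n = 100e6    # normal stress in Pa (illustrative)
mu_s, gamma = 0.16, 0.7
mu_d = mu_s * (1.0 - gamma)            # dynamic friction after full weakening

tau = 0.0
events = []
for year in range(20000):
    tau += k                           # interseismic stress build-up, dt = 1 yr
    if tau >= mu_s * sigma_n:          # local yield strength reached
        events.append((year, tau - mu_d * sigma_n))   # (time, stress drop)
        tau = mu_d * sigma_n           # stress drops to the dynamic level

print(f"{len(events)} events, recurrence ~ {events[1][0] - events[0][0]} years")
```

With these toy numbers the slider produces quasi-periodic events roughly every ~1100 years, on the same order as the recurrence of the largest simulated ruptures, but without the partial/complete alternation that the 2D fault geometry generates.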
Nepal was struck by an earthquake with a magnitude of 7.8 in 2015, but the country may still face the threat of a much stronger temblor. This is the conclusion reached by ETH researchers based on a new model that simulates the physical processes of earthquake rupture between the Eurasian and Indian Plates. In April 2015, Nepal – and especially the region around the capital city, Kathmandu – was struck by a powerful tremor. An earthquake with a magnitude of 7.8 destroyed entire villages, traffic routes and cultural monuments, with a death toll of some 9,000. However, the country may still face the threat of much stronger earthquakes with a magnitude of 8 or more. This is the conclusion reached by a group of earth scientists from ETH Zurich based on a new model of the collision zone between the Indian and Eurasian Plates in the vicinity of the Himalayas. Using this model, the team of ETH researchers working with doctoral student Luca Dal Zilio, from the group led by Professor Taras Gerya at the Institute of Geophysics, has now performed the first high-resolution simulations of earthquake cycles in a cross-section of the rupture zone. "In the 2015 quake, there was only a partial rupture of the major Himalayan fault separating the two continental plates. The frontal, near-surface section of the rupture zone, where the Indian Plate subducts beneath the Eurasian Plate, did not slip and remains under stress," explains Dal Zilio, lead author of the study, which was recently published in the journal Nature Communications. Cross-section through the fracture zone (thick black line) between the Indian (grey areas) and the Eurasian Plate (green areas). Credit: Dal Zilio et al., Nat. Comm. 2019 Normally, a major earthquake releases almost all the stress that has built up in the vicinity of the focus as a result of displacement of the plates. "Our model shows that, although the Gorkha earthquake reduced the stress level in part of the rupture zone, tension actually increased in the frontal section close to the foot of the Himalayas. The apparent paradox is that 'medium-sized' earthquakes such as Gorkha can create the conditions for an even larger earthquake," says Dal Zilio. Tremors of the magnitude of the Gorkha earthquake release stress only in the deeper subsections of the fault system, over lengths of 100 kilometres. In turn, new and even greater stress builds up in the near-surface sections of the rupture zone. According to the simulations performed by Dal Zilio and his colleagues, two or three further Gorkha quakes would be needed to build up sufficient stress for an earthquake with a magnitude of 8.1 or more. In a quake of this kind, the rupture zone breaks over the entire depth range, extending up to the Earth's surface and laterally, along the Himalayan arc, for hundreds of kilometres. This ultimately leads to a complete stress release in this segment of the fault system, which extends to some 2,000 kilometres in total. Historical data shows that mega events of this kind have also occurred in the past. For example, the Assam earthquake in 1950 had a magnitude of 8.6, with the rupture zone breaking over a length of several hundred kilometres and across the entire depth range. In 1505, a giant earthquake struck with sufficient power to produce an approximately 800-kilometre rupture on the major Himalayan fault. Where plates collide: The main frontal thrust (red line) extends over the entire length of the Himalayas.
Credit: NASA Earth Observatory "The new model reveals that powerful earthquakes in the Himalayas have not just one form but at least two, and that their cycles partially overlap," says Edi Kissling, Professor of Seismology and Geodynamics. Super earthquakes might occur with a periodicity of 400 to 600 years, whereas "medium-sized" quakes such as Gorkha have a recurrence time of up to a few hundred years. As the cycles overlap, the researchers expect powerful and dangerous earthquakes to occur at irregular intervals. However, they cannot predict when another extremely large quake will next take place. "No one can predict earthquakes, not even with the new model. However, we can improve our understanding of the seismic hazard in a specific area and take appropriate precautions," says Kissling. The two-dimensional, high-resolution model also includes some research findings that were published after the Gorkha earthquake. To generate the simulations, the researchers used the Euler mainframe computer at ETH Zurich. "A three-dimensional model would be more accurate and would also allow us to make statements about the western and eastern fringes of the Himalayas. However, modelling the entire 2,000 kilometres of the rupture zone would require enormous computational power that not even the supercomputers at the CSCS can provide," says Dal Zilio. Nepal lies at the point where two continents meet: India and Eurasia. It is here that the Indian Plate subducts into the mantle beneath the Eurasian Plate. Due to the suction effect exerted by the Indian Plate as it sinks into the mantle, the Indian subcontinent moves north by up to 4 centimetres a year. As a result, the plates rub against each other along the length of this 2,000-kilometre fault system, allowing considerable amounts of stress to build up. During an earthquake, the sudden release of this stress causes an abrupt displacement of the plates relative to each other. This is why Nepal and the southern foothills of the Himalayas repeatedly experience very powerful earthquakes.
10.1038/s41467-018-07874-8
Biology
Scientists discover way to improve effectiveness of antibiotics
Song Lin Chua et al. Selective labelling and eradication of antibiotic-tolerant bacterial populations in Pseudomonas aeruginosa biofilms, Nature Communications (2016). DOI: 10.1038/ncomms10750 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms10750
https://phys.org/news/2016-03-scientists-effectiveness-antibiotics.html
Abstract Drug resistance and tolerance greatly diminish the therapeutic potential of antibiotics against pathogens. Antibiotic tolerance by bacterial biofilms often leads to persistent infections, but its mechanisms are unclear. Here we use a proteomics approach, pulsed stable isotope labelling with amino acids (pulsed-SILAC), to quantify newly expressed proteins in colistin-tolerant subpopulations of Pseudomonas aeruginosa biofilms (colistin is a ‘last-resort’ antibiotic against multidrug-resistant Gram-negative pathogens). Migration is essential for the formation of colistin-tolerant biofilm subpopulations, with colistin-tolerant cells using type IV pili to migrate onto the top of the colistin-killed biofilm. The colistin-tolerant cells employ quorum sensing (QS) to initiate the formation of new colistin-tolerant subpopulations, highlighting multicellular behaviour in antibiotic tolerance development. The macrolide erythromycin, which has been previously shown to inhibit the motility and QS of P. aeruginosa , boosts biofilm eradication by colistin. Our work provides insights into the mechanisms underlying the formation of antibiotic-tolerant populations in bacterial biofilms and indicates research avenues for designing more efficient treatments against biofilm-associated infections. Introduction The therapeutic efficacy of antibiotics can be severely crippled by two major bacterial defence strategies. ‘Resistance’ occurs when bacteria acquire genetic mutations or mobile genetic elements that enable growth under conditions of constant antibiotic exposure 1 , while ‘tolerance’ is mediated by transient phenotypic variations (based on protein expression) that enable bacteria to survive exposures to the highest deliverable doses of antibiotics 2 , 3 . While the current view is that the development of tolerance is intrinsically linked to the biofilm lifestyle, early experiments conducted on bacteria in the planktonic lifestyle revealed that tolerance can develop to a number of stresses, including antibiotics 4 , 5 . However, in contrast to the planktonic mode of life, biofilms are sessile, densely populated multicellular communities embedded in biopolymeric matrix components 6 . A major advantage for bacteria that aggregate into biofilms is that the multitude of matrix components and physiological states offers additional levels of protection against immune defences and antibiotics, which cannot be obtained in the planktonic growth mode 7 , 8 , 9 . It is widely accepted that pathogenic bacteria in the biofilm mode contribute significantly to the development of nosocomial infections, as they colonize hospital settings and chronic infection sites, where they represent a serious threat to public health 6 . Bacterial cells differentiate into several physiologically distinct subpopulations within a biofilm 10 . Antibiotic treatment often eradicates sensitive subpopulations, but leaves small antibiotic-tolerant subpopulations behind, resulting in recurring infections after antibiotic treatment has been stopped 10 . Antibiotic-tolerant bacterial cells are variants of wild-type cells that can revert to the wild type when antimicrobial treatment has ceased 11 . Their transient nature and low abundance make it difficult to investigate their tolerance-related physiology.
With only a few studies having investigated planktonic antibiotic-tolerant cells by pre-isolating them using flow cytometry 12 , 13 , 14 , 15 , the mechanisms underlying the formation of antibiotic-tolerant subpopulations in biofilms have remained elusive. Moreover, these pre-isolation-based strategies are destructive and not feasible for studying the antibiotic-tolerant subpopulations in biofilms. This prompted us to develop an alternative, high-resolution ‘-omics’ approach to simultaneously characterize the physiologies of sensitive and antibiotic-tolerant subpopulations in biofilms. Pseudomonas aeruginosa forms biofilms with extreme tolerance to antibiotics in nosocomial infections, such as pneumonia and surgical site infections, prompting the Centers for Disease Control and Prevention to classify P. aeruginosa under the ‘serious’ threat level and the Infectious Diseases Society of America to declare it one of the six ‘top-priority dangerous, drug-resistant microbes’ 16 . Colistin is a last-resort polymyxin antibiotic available for treatment of infections caused by drug-resistant Gram-negative bacteria 17 . P. aeruginosa biofilms in flow chambers develop colistin-tolerant cells rapidly, and these cells feature high expression levels of the pmr operon 18 . Here we observe the dynamic development of drug-tolerant subpopulations in P. aeruginosa biofilms after treatment with colistin. To obtain knowledge about the physiology of the colistin-tolerant biofilm cells, we apply the pulsed stable isotope labelling with amino acids (pulsed-SILAC) proteomics method to selectively quantify the newly expressed proteins, which incorporate and are thus labelled with heavy C13 lysine, to promote understanding of the colistin-tolerant subpopulation physiology. The colistin-tolerant populations are able to migrate to the top of the dead biofilm by employing type IV pili-dependent motility and to initiate formation of a new biofilm via quorum sensing (QS)-regulated group activity. Synergistic treatment with erythromycin, which can inhibit QS and motility, together with colistin is able to boost the elimination of P. aeruginosa biofilms. Hence, our study provides key insights into developing novel treatments against biofilm-associated infections, which otherwise prove hard to eradicate. Results and Discussion Development of colistin-tolerant subpopulations in biofilms We monitored the development of live and dead subpopulations of P. aeruginosa biofilms after exposure to colistin in real time by using time-series confocal image acquisition as well as enumeration of viable cells with colony-forming units (c.f.u. per ml). Colistin exposure of P. aeruginosa wild-type biofilms tagged with green fluorescent protein (GFP) led to a sudden reduction in cell viability according to propidium iodide (PI) dead staining ( Fig. 1a and Supplementary Movies 1 and 2 ) and c.f.u. per ml counting ( Fig. 1b ). However, we noticed that a few P. aeruginosa cells localized at the substratum of the biofilms remained alive even after 24 h of colistin treatment ( Fig. 1a ), which might result from the heterogeneous compositions/structures of the exopolymeric matrix materials 19 , 20 . Certain exopolymeric matrix components might act as physical barriers and reduce the penetration of colistin into the deeper parts of biofilms 21 , allowing a few biofilm cells at the substratum to survive and acquire colistin tolerance. Figure 1: The migration and formation of colistin-tolerant subpopulations in biofilm. ( a ) P.
aeruginosa PAO1 wild-type biofilms were treated with minimal medium containing 10 μg ml −1 colistin, followed by real-time CLSM observation from 2 to 32 h of treatment. Scale bars, 50 μm. Colistin-tolerant cells in P. aeruginosa PAO1 biofilms migrated onto the dead biofilm and formed a live colistin-tolerant biofilm. Culture medium flows over the top of the biofilms from the top of the image. Experiments were performed in triplicate, and a representative image for each condition is shown. Live cells appear green, whereas dead cells appear red or yellow. Videos of the migration and formation of the colistin-tolerant biofilm are available online as Supplementary Videos 1 and 2. ( b ) C.f.u. per ml of the PAO1 biofilms after 0, 6, 24 and 48 h of colistin treatment. The means and s.d. from three experiments are shown. ( c ) Movement trajectories and track displacement of live colistin-tolerant cell aggregates moving onto the dead biofilm. Culture medium flows over the top of the biofilms from the top of the image. Scale bars, 10 μm. Full size image Interestingly, we observed that colistin-tolerant subpopulations were able to migrate towards the top of the large microcolonies of dead P. aeruginosa biofilm cells ( Fig. 1a,b and Supplementary Movies 3 and 4 ). Tracking of colistin-tolerant cell aggregates close to the large microcolonies revealed that those aggregates moved towards the dead microcolonies ( Fig. 1c and Supplementary Movies 3 and 4 ), a migration that was unrelated to the flow direction of the culture medium (from the top to the bottom of the image). In contrast, single colistin-tolerant cells far away from the large microcolonies appeared randomly distributed ( Fig. 1c and Supplementary Movies 3 and 4 ). These results suggested that migration of colistin-tolerant cell aggregates towards large microcolonies of dead biofilm cells was a coordinated process that might involve specific signals. Colistin-tolerant cells obtained from biofilms showed only transient expression of the pmr operon ( Supplementary Fig. 1a ) and lost their colistin tolerance after overnight culturing in ABTGC medium containing no colistin ( Supplementary Fig. 1b,c ), supporting our view that the colistin-tolerant cells were the result of phenotypic variation and were not per se resistant cells. DNA sequencing of the biofilms with and without colistin treatment showed that there was no convergent non-synonymous mutation gained by the three colistin-treated biofilm populations compared with the control biofilm populations ( Supplementary Data 1 ). The development of motile colistin-tolerant subpopulations in biofilms has major clinical implications, as they can result in persistent infections. Using pulsed-SILAC to study colistin-tolerant subpopulations To further understand the process that caused the directed motility and formation of colistin-tolerant biofilm subpopulations, we used the pulsed-SILAC quantitative proteomic approach, with the aim of studying the abundance of newly synthesized proteins in the colistin-tolerant subpopulations formed by P. aeruginosa biofilms after a high-dose colistin treatment ( Fig. 2a ). We hypothesized that the actively expressed proteins in the colistin-tolerant subpopulations provided survival advantages to the bacteria. SILAC is a simple and fast but powerful in vivo method, commonly used for eukaryotes, to label proteins for mass spectrometry (MS)-based quantitative proteomics 22 .
Although this method had been used to label prokaryotes for comparisons of biofilm and planktonic cells 23 , 24 , pulsed-SILAC had never been employed to determine the abundance of newly synthesized proteins in antibiotic-sensitive and -tolerant subpopulations coexisting in the same bacterial biofilm. Figure 2: Workflow of pulsed-SILAC proteome analysis of antibiotic-tolerant and -sensitive subpopulations. ( a ) Biofilms were grown in medium containing C12 L -lysine for 72 h and were then treated with medium containing 10 μg ml −1 colistin for 6 h to allow the development of colistin-tolerant cells. Colistin-tolerant subpopulations were then treated with medium containing C13 L -lysine and 10 μg ml −1 colistin to label the tolerant cells with C13. Cells were collected, and pulsed-SILAC proteome analysis was conducted to determine the protein abundance in the colistin-tolerant cells, in which the newly synthesized proteins are tagged with C13 L -lysine, while the proteome of colistin-susceptible biofilm cells contains the normal C12 L -lysine. Downregulated: lowly expressed proteins in the colistin-tolerant cells ( b ); and upregulated: highly expressed proteins in the colistin-tolerant cells ( c ); proteins in antibiotic-tolerant cells were classified into functional groups. Full size image A P . aeruginosa mutant that cannot synthesize lysine, mPAO1Δ lysA , was employed to form biofilms in medium supplemented with C12 L -lysine. Although this mPAO1Δ lysA mutant could only grow in the presence of L -lysine ( Supplementary Fig. 2 ), it formed biofilms similar to those of wild-type mPAO1 when lysine was added to the culture medium ( Supplementary Fig. 3 ). Biofilms were treated with a high dose of colistin at 10 μg ml −1 (10-fold higher than the minimum bactericidal concentration) in minimal medium supplemented with C12 L -lysine for 8 h. After killing of the antibiotic-sensitive population, the remaining surviving antibiotic-tolerant cells were treated with fresh medium containing colistin and C13 L -lysine for 48 h to incorporate C13 L -lysine into the newly expressed proteins of only the live cells ( Fig. 2a ). This pulsed-labelling approach allowed for the selective tagging of newly expressed proteins in the antibiotic-tolerant subpopulations with C13 L -lysine, without prior physical isolation. The level of new protein abundance could be normalized by dividing the relative abundances of C13 L -lysine-tagged proteins (proteins expressed after colistin treatment) by the relative abundances of C12 L -lysine-tagged proteins (proteins expressed before colistin treatment). We used a Q Exactive MS to analyse the pulsed-SILAC-labelled samples, followed by the MaxQuant and Andromeda software 25 , 26 , to identify 4,250 P. aeruginosa proteins with a <1% false discovery rate (FDR). Only proteins that were significantly and consistently changed/regulated in three biological replicates (variability <30%) were shortlisted for further analysis ( Supplementary Data 2 ). Lowly and highly expressed proteins in the colistin-tolerant subpopulation as compared with the colistin-sensitive population are functionally grouped in Fig. 2b,c , respectively, while the complete lists of lowly and highly expressed proteins are provided in Supplementary Data 3 and 4 , respectively. Our initial analysis of the data showed that some of the actively expressed proteins were subunits of ribosomes ( Supplementary Data 4 ), supporting the notion that colistin-tolerant cells are metabolically active 27 .
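The normalization step described here amounts to a per-protein heavy-to-light (C13/C12) intensity ratio followed by a consistency filter across the three biological replicates. A minimal Python sketch is given below, assuming intensities have already been exported from the MaxQuant output; only the ArnA-like L and H intensities quoted later in the text are taken from the study, and the other replicate values are illustrative.

```python
import numpy as np

def new_protein_abundance(heavy, light):
    """Per-replicate H/L ratio: proteins synthesized after vs. before colistin."""
    return np.asarray(heavy, float) / np.asarray(light, float)

# H (C13) and L (C12) intensities of one protein in three biological replicates;
# the first pair echoes the ArnA intensities quoted in the text (L:H = 1.51E+10:4.58E+10),
# the others are illustrative.
ratios = new_protein_abundance(heavy=[4.58e10, 3.9e10, 5.1e10],
                               light=[1.51e10, 1.4e10, 1.6e10])

# Shortlisting criterion: consistent regulation across replicates (variability < 30%)
variability = ratios.std(ddof=1) / ratios.mean()   # coefficient of variation
if variability < 0.30:
    print(f"shortlisted: mean H/L = {ratios.mean():.2f} (CV = {variability:.1%})")
```

An H/L ratio well above 1, reproduced across replicates, is what flags a protein as highly expressed in the colistin-tolerant subpopulation.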
We also found that the bifunctional polymyxin-resistance proteins ArnA and PmrA were expressed at high levels in antibiotic-tolerant cells ( Supplementary Data 2 , markers intensity sheet: ArnA (intensity L:H=1.51E+10:4.58E+10); PmrA (intensity L:H=5.26E+09:1.12E+10)). The arnBCADTEF lipopolysaccharide-modification operon had been shown to be controlled by the upstream pmr operon and to contribute to tolerance to polymyxin B and colistin in P. aeruginosa 28 . Hence, both proteins served as positive controls and showed that pulsed-SILAC was highly reliable in detecting proteins important for colistin tolerance. These two markers were consistently detected at high abundance among the heavy C13 lysine-labelled proteins in seven out of eight technical replicates of the three biological replicates, except in BR1:R2, where ArnA was not detected in the light C12 condition and PmrA was not detected in either the H or the L condition owing to technical variation ( Supplementary Data 2 , markers intensity sheet). The missed detection of marker proteins in BR1:R2 might be attributed to growth variations during biofilm cultivation, as biofilm experiments normally require a long cultivation period (5 days). Pili and QS led to colistin tolerance Interestingly, we found that antibiotic-tolerant subpopulations also expressed proteins required for type IV pili assembly, such as PilF (ref. 29 ), at high levels ( Supplementary Data 4 ). Hence, we hypothesized that type IV pili are required for the migration of colistin-tolerant cell aggregates onto the dead microcolonies. Migration of colistin-tolerant subpopulations was decreased in biofilms formed by the type IV pili mutant, Δ pilA ( Fig. 3a,b ). Complementation of Δ pilA with pDA2, a plasmid containing the pilA gene in a pUCP18 vector, restored the antibiotic-tolerant phenotype ( Fig. 3a,b ). Figure 3: The role of pili and QS in the development of antibiotic-tolerant subpopulations in biofilms. ( a ) Biofilms were cultivated for 72 h using P. aeruginosa PAO1, Δ pilA , Δ pilA /pDA2, Δ lasI Δ rhlI , Δ lasI Δ rhlI +OdDHL and Δ pilA Δ lasR Δ rhlR strains, followed by treatment with medium containing 10 μg ml −1 colistin. No migration of the tolerant subpopulation was observed for Δ pilA mutant biofilms, and the majority of the cells in Δ pilA biofilms were killed by colistin. Complementation of Δ pilA with pDA2 restored antibiotic-tolerant subpopulation development after colistin treatment. The QS-defective Δ lasI Δ rhlI mutant biofilms could only develop a small antibiotic-tolerant subpopulation after colistin treatment. Addition of OdDHL partially restored the development of the antibiotic-tolerant subpopulation after colistin treatment. The pili- and QS-defective Δ pilA Δ lasR Δ rhlR mutant biofilms were unable to develop an antibiotic-tolerant subpopulation after colistin treatment. The central images show horizontal optical sections, whereas the flanking images show vertical optical sections. Live cells appear green and dead cells appear red. Scale bars, 50 μm. ( b ) Live/dead ratios of 72-h biofilms formed by P. aeruginosa PAO1, Δ pilA , Δ pilA /pDA2, Δ lasI Δ rhlI , Δ lasI Δ rhlI +OdDHL and Δ pilA Δ lasR Δ rhlR strains after colistin treatment. The mean and s.d. from three experiments are shown. * P <0.01, Student’s t -test.
Full size image Furthermore, we found that QS-regulated proteins, including LasB, chitinase and phenazine/pyocyanin-synthesis (Phz) proteins, were highly expressed in colistin-tolerant subpopulations ( Supplementary Data 4 ). Because QS coordinates group behaviour in biofilms 30 , we hypothesized that QS coordinates the development of antibiotic-tolerant microcolonies within biofilms. The QS-null mutant, Δ lasI Δ rhlI , developed fewer antibiotic-tolerant subpopulations in response to colistin treatment than the wild type ( Fig. 3a,b ). Chemical complementation by the addition of 1 μM N-3-(oxododecanoyl)-L-homoserine lactone (OdDHL) (a QS signalling molecule) to Δ lasI Δ rhlI biofilms allowed Δ lasI Δ rhlI to regain the development of colistin-tolerant subpopulations ( Fig. 3a,b ). We further showed that biofilms formed by a QS- and type IV pili-defective triple mutant, Δ pilA Δ lasR Δ rhlR , could only develop a negligible amount of antibiotic-tolerant subpopulations in response to colistin treatment as compared with the wild type ( Fig. 3a,b ). We found that type IV pili-mediated migration preceded the induction of QS activity in the colistin-tolerant subpopulations. The QS reporter lasB-gfp translational fusion 31 was highly expressed in antibiotic-tolerant subpopulations of wild-type biofilms, but not in the antibiotic-tolerant subpopulations of Δ pilA biofilms, in response to colistin exposure ( Fig. 4a ). Colistin-tolerant biofilms produced more OdDHL, the QS autoinducer, than untreated wild-type biofilms ( Fig. 4b ). In response to colistin treatment, colistin-tolerant wild-type biofilms also secreted more elastase and pyocyanin than untreated biofilms, which correlates with our proteomic data showing that LasB and pyocyanin-synthesis proteins were highly expressed in colistin-tolerant biofilm cells ( Fig. 4c,d ). Figure 4: QS-related products are upregulated in antibiotic-tolerant subpopulations in P. aeruginosa biofilms. ( a ) The 72-h biofilms formed by P. aeruginosa PAO1 and Δ pilA containing the lasB-gfp translational fusion were treated with medium containing 10 μg ml −1 colistin for 24 h, followed by CLSM observation. The lasB-gfp translational fusion was induced to high levels in PAO1 colistin-tolerant cells but was not observed in Δ pilA biofilms. Experiments were performed in triplicate, and a representative image for each condition is shown. Live cells appear green, whereas dead cells appear yellow or red. The central images show horizontal optical sections, whereas the flanking images show vertical optical sections. Scale bars, 50 μm. ( b ) Antibiotic-tolerant cells from PAO1 biofilms secreted more OdDHL than untreated biofilm cells. ( c ) Antibiotic-tolerant cells from PAO1 biofilms produced more elastase than untreated biofilm cells. ( d ) Antibiotic-tolerant cells from PAO1 biofilms produced more pyocyanin than untreated biofilm cells. The mean and s.d. from three experiments are shown. * P <0.01, Student’s t -test. Full size image Given the active expression of virulence factors in the colistin-tolerant subpopulations, we evaluated the macrophage-killing capacity of supernatants from wild-type biofilms with and without colistin treatment by using the mouse RAW264.7 cell line. After staining the dead macrophages with 20 μM PI, we observed that the supernatant from colistin-treated biofilms was more cytotoxic to macrophages than that of control biofilms ( Supplementary Fig. 4 ).
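The live/dead ratios reported in Figs 3 and 4 are derived from the green (GFP, live) and red (PI, dead) CLSM channels and compared with a Student's t-test. The sketch below assumes per-replicate channel totals (biovolumes or summed pixel intensities) have already been extracted from the images; the numbers are placeholders, not measured values.

```python
import numpy as np
from scipy import stats

def live_dead_ratio(green, red):
    """Ratio of live (GFP, green) to dead (PI, red) signal per replicate."""
    return np.asarray(green, float) / np.asarray(red, float)

# Illustrative channel totals for three replicates per strain
wt   = live_dead_ratio(green=[8.1, 7.5, 8.8], red=[1.0, 1.2, 0.9])   # wild type
pilA = live_dead_ratio(green=[0.9, 1.1, 0.8], red=[4.0, 3.6, 4.4])   # pili mutant

# Two-sample Student's t-test, as used for the * P < 0.01 comparisons
t_stat, p_value = stats.ttest_ind(wt, pilA)
print(f"wild type {wt.mean():.2f} vs mutant {pilA.mean():.2f}, P = {p_value:.1e}")
```

A high ratio indicates a biofilm dominated by live (tolerant) cells, which is why the combination treatment in Fig. 5 drives this metric down.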
Induction of QS-regulated virulence factors in colistin-tolerant microcolonies is clinically relevant because these factors are detrimental to the host immune system 32 , 33 , 34 , 35 , 36 . It is unclear why the colistin-tolerant cell aggregates migrate to the top of dead biofilms. Our previous work demonstrated that the motile subpopulation of wild-type P. aeruginosa biofilms migrates towards the non-motile subpopulation to seek an iron source 37 . We hypothesized that the tolerant cells migrate to acquire iron. To test this hypothesis, we added the iron chelator 2,2-dipyridyl (DIPY) 38 together with colistin to treat P. aeruginosa wild-type biofilms. We found that DIPY was able to interfere with the development of colistin-tolerant subpopulations associated with the dead microcolonies ( Supplementary Fig. 5 ). Hence, in addition to mechanisms directly involved in antibiotic tolerance, we showed that multicellular behaviours such as migration and cell–cell signalling (QS) are important in the recovery of pathogenic biofilms that can survive otherwise lethal antibiotic treatments. Since clinical departments only test for resistance (growth versus no growth on medium supplemented with antibiotics) before the commencement of antibiotic treatment of the patient, the development of antibiotic-tolerant subpopulations in response to exposure can be considered a concealed mechanism of antibiotic recalcitrance. Chemical approach to combat colistin-tolerant subpopulations Given that both type IV pili-mediated migration and QS are required for establishing antibiotic-tolerant subpopulations in developing P. aeruginosa biofilms, we employed a chemical biology approach to disable these activities. Macrolides, which can inhibit type IV pili assembly and QS in P. aeruginosa 39 , were used in combination with colistin to treat P. aeruginosa biofilms. Treatment with erythromycin alone had only a mild killing effect on biofilm cells and planktonic cells ( Fig. 5a–c , respectively), implying that erythromycin by itself was unable to eradicate the biofilms. However, combination treatment with erythromycin and colistin led to complete eradication and repression of the development of antibiotic-tolerant subpopulations ( Fig. 5a,b ). Supporting the idea that our combination treatment was specific to the colistin-tolerant cells present in the biofilm mode of life, we did not observe a synergistic effect of colistin and erythromycin against planktonic P. aeruginosa at concentrations equivalent to those of the biofilm treatment ( Fig. 5c ). Figure 5: Targeting type IV pili and QS simultaneously leads to eradication of antibiotic-tolerant cells. ( a ) Colistin-tolerant cells formed in PAO1 flow-cell biofilms after colistin and erythromycin single treatments and combined treatment. Colistin-tolerant cells were unable to form in PAO1 biofilms after the combined erythromycin+colistin treatment. Experiments were performed in triplicate. Live cells appear green and dead cells appear red. Scale bars, 50 μm. ( b ) Live/dead ratios were calculated based on CLSM images. The mean and s.d. from five experiments is shown for in vivo biofilms. * P <0.01, one-way ANOVA. ( c ) C.f.u. per ml of PAO1 planktonic cultures treated with colistin, erythromycin and colistin+erythromycin. The mean and s.d. from three experiments are shown. C.f.u. per ml of in vivo PAO1 biofilms obtained from the implant ( d ) and cells from the spleen ( e ), with and without antibiotic treatment. Dotted horizontal lines represent the limit of detection. The mean and s.d.
The mean and s.d. from five experiments are shown for the in vivo biofilms. * P <0.01, Student’s t -test. To verify the functional eradication of antibiotic-tolerant cells by both antibiotics in vivo , we used a murine model for implant-associated infection 40 . As opportunistic infections could occur in the host when biofilms form on implants such as catheters and heart valves, treatments that prevent the rise of antimicrobial-tolerant or -resistant populations are essential 41 , 42 . Implants coated with P. aeruginosa biofilms were surgically inserted into the peritoneum of mice. The mice were treated locally with antibiotics to emulate the treatment given to patients with infections from implants. The concentration of antibiotics (1 mg kg −1 colistin and 10 mg kg −1 erythromycin) used for each mouse was well below the median lethal dose of each antibiotic (86 mg kg −1 colistin and 280 mg kg −1 erythromycin), ensuring that the treatment itself was not lethal to the mice. The c.f.u. per ml count of P. aeruginosa from the implant and spleen after 24 h incubation in the mouse revealed that administration of colistin or erythromycin alone did not eradicate the infection. However, synergistic treatment with colistin and erythromycin eradicated the biofilm to the limit of detection by c.f.u. per ml ( Fig. 5d ). Since we were unable to detect live bacterial cells in the spleen samples, we concluded that the spread of infection was also halted ( Fig. 5e ). Hence, supplementing conventional antibiotic treatment with a tolerance-interfering compound appears to be a promising therapy for eradicating biofilm-associated infections. The association between type IV pili and the colistin-tolerant subpopulations in developing P. aeruginosa biofilms has been reported previously 18 , whereas the position in a nutrient-rich microenvironment seems to be important for the development of colistin-tolerant subpopulations in more mature P. aeruginosa biofilms 27 . We here tracked the formation of antibiotic-tolerant subpopulations in developing P. aeruginosa biofilms and used pulsed-SILAC to selectively label their proteome and determine newly synthesized protein abundance in the antibiotic-tolerant subpopulations, which led to the identification of multiple genes/proteins essential for the development of antibiotic-tolerant biofilms. We propose the following model for the development of colistin-tolerant subpopulations in developing P. aeruginosa biofilms, which links it to the ‘phoenix effect’ ( Fig. 6 ): (a) the majority of the biofilm is killed by colistin treatment; (b) surviving antibiotic-tolerant cell aggregates overexpress type IV pili for targeted migration to the top of the dead biofilm; and (c) the aggregates produce QS-regulated factors that promote the formation of new antibiotic-tolerant microcolonies. The microcolony size in our in vitro P. aeruginosa biofilms is around 50 μm, which is within the range of microcolony sizes identified from in vivo 43 , 44 and ex vivo 45 , 46 P. aeruginosa biofilms. This suggests that colistin might have similar effects on in vivo P. aeruginosa biofilms as those observed on our in vitro P. aeruginosa biofilms. Figure 6: Model of antibiotic-tolerant cell formation in P. aeruginosa biofilms. ( a ) Antibiotics such as colistin kill most of the cells in the biofilms, but leave a few antibiotic-tolerant cells at the bottom of the biofilm. ( b ) Antibiotic-tolerant cells expand in numbers and migrate to the top of the biofilm using pilus-mediated motility.
( c ) Assemblies of antibiotic-tolerant cells induce QS, which leads to the production of QS-related virulence factors such as elastase and pyocyanin. A new antibiotic-tolerant biofilm is formed. Furthermore, our study showed that targeting mechanisms important to antibiotic-tolerant cells greatly improved conventional biofilm treatment strategies. It highlights the importance of developing QS and motility inhibitors that can be given to chronically infected patients, with the aim of establishing functional anti-biofilm chemotherapies. This strategy might also be applied to other complex differentiated communities, such as cancer, as a recent study revealed the presence of motile drug-tolerant cells in chemotherapy-treated cancer cell populations 47 . Methods Bacterial strains and growth conditions The bacterial strains and plasmids used in this study are listed in Supplementary Table 1 . P. aeruginosa strains were grown at 37 °C in ABT minimal medium 48 supplemented with 5 g l −1 glucose (ABTG) or 2 g l −1 glucose and 2 g l −1 casamino acids (ABTGC). For marker selection in P. aeruginosa , 30 μg ml −1 gentamicin and 200 μg ml −1 carbenicillin were used as appropriate. Cultivation of biofilms in flow chambers MiniTn7- gfp -tagged P. aeruginosa biofilms were cultivated in ABTG medium at 37 °C using 40 mm × 4 mm × 1 mm three-channel flow chambers. Flow chambers were assembled as described previously, consisting of a flow cell that acted as a chamber for the growth of biofilms 49 . The flow cell was supplied with a flow of medium and oxygen, while waste medium was removed into a waste flask using a peristaltic pump. Each flow channel was inoculated with 300 μl of a 1:1,000 dilution of an overnight culture using a syringe and was incubated without flow for 1 h. Medium flow was then started and maintained at a velocity of 0.2 mm s −1 by a Cole-Parmer peristaltic pump. After 72 h of growth, PAO1 biofilms were treated with 10 μg ml −1 colistin. After 48 h of treatment, 300 μl of 20 μM PI was injected into each flow channel to stain dead cells in the biofilm. Experiments were performed in triplicate, and results are shown as the mean±s.d. Reversion of antibiotic-tolerant phenotype in biofilms MiniTn7- gfp -tagged and pmr - gfp -tagged P. aeruginosa biofilms were cultivated and treated with colistin as described above. Colistin-containing ABTG medium was switched to antibiotic-free ABTG for 48 h to allow reversion of the antibiotic-tolerant phenotype back to the normal biofilm phenotype. To examine whether the antibiotic-tolerant phenotype could be re-induced, the biofilms were challenged with 10 μg ml −1 colistin for a second cycle. A volume of 300 μl of 20 μM PI was injected into each flow channel to stain dead cells in the biofilm. Biofilms were observed at 0 and 4 h after the second challenge with colistin. Reversion of antibiotic-tolerant cells in planktonic cultures PAO1 and PAO1/ pmr-gfp biofilms containing the live colistin-tolerant subpopulations and dead colistin-sensitive subpopulations were obtained from the flow chamber by flushing out the entire biofilm with 5 ml of 0.9% NaCl. The biofilm was homogenized by vortexing for 30 s and the cells were washed twice with 1 ml 0.9% NaCl. The antibiotic-tolerant cells were grown in 2 ml ABTGC to revert to the normal phenotype, or in 2 ml ABTGC with 10 μg ml −1 colistin to maintain the colistin-tolerant phenotype, at 37 °C and 200 r.p.m. for 16 h. The cells were then washed twice in 2 ml 0.9% NaCl.
PAO1 cells were serially diluted, plated on Luria–Bertani (LB) agar and incubated at 37 °C overnight. C.f.u. per ml was calculated by multiplying the average number of colonies by the dilution factor and dividing by the plated volume. For PAO1/ pmr-gfp cells, fluorescence from pmr-gfp expression (expressed in relative fluorescence units, r.f.u.) was measured for each well using a microplate reader (Tecan Infinite 2000) and was normalized to the OD 600 of each well. Experiments were performed with three replicates, and the results are shown as the mean±s.d. Viable colony counts of biofilms with colistin treatment PAO1 biofilms were cultivated as described above. After 72 h of growth, PAO1 biofilms were treated with 10 μg ml −1 colistin. Biofilms were collected from the flow chamber by mechanical disruption with syringes at 0, 6, 24 and 48 h of colistin treatment. The biofilms were resuspended in 0.9% NaCl and further homogenized by vortexing. PAO1 cells were serially diluted, plated on LB agar and incubated at 37 °C overnight. C.f.u. per ml was calculated as above. Experiments were performed with three replicates, and the results are shown as the mean±s.d. Microscopy and image acquisition of biofilms All microscopy images were captured and acquired using an LSM confocal laser scanning microscope (CLSM; Carl Zeiss, Germany). The × 20 objective was used to monitor GFP and PI fluorescence. IMARIS software (Bitplane AG, Zurich, Switzerland) was used to process the images. Experiments were performed in triplicate, and representative images are shown. Video of antibiotic-tolerant cell migration in biofilms MiniTn7- gfp -tagged PAO1 biofilms were treated with 10 μg ml −1 colistin and 4 μg ml −1 PI. From 24 to 32 h of colistin treatment, videos were captured and acquired using a CLSM with a × 40 objective lens. GFP and PI fluorescence were observed. IMARIS software (Bitplane AG, Zurich, Switzerland) was used to process the videos. Experiments were performed in triplicate, and representative videos are shown. Tracking antibiotic-tolerant cell migration in biofilms After acquiring videos as described in the previous section, IMARIS software (Bitplane AG) was used for particle tracking of the migrating antibiotic-tolerant cells, according to the manufacturer’s instructions. DNA sequencing analysis Genomic DNA of the P. aeruginosa biofilms with and without 48 h colistin treatment was purified using the QIAamp DNA Mini Kit (Qiagen, Germany) and sequenced on an Illumina MiSeq V3 platform generating 300-bp paired-end reads. The experiment was performed in three biological replicates: three colistin-treated biofilms and three control biofilms. The average insert sizes are 490–544 nucleotides and the average genomic coverage depths are 63–167-fold. Nucleotide differences were generated with the CLC Genomics Workbench 8.0 (CLC Bio, Aarhus, Denmark), and all parameters used are listed in Supplementary Methods . Briefly, adapters and low-quality reads were trimmed off. Paired-end reads in FASTQ format of colistin-treated and control biofilm genomes were first mapped against the P. aeruginosa PAO1 genome (NC_002516). Variants were detected using the low-frequency variant detection method with a required significance of 1%. Convergent non-synonymous variants of colistin-treated biofilm genomes relative to the P. aeruginosa PAO1 reference genome were obtained with the following criteria: (i) parallel variants were found in the same gene in all three of the colistin-treated biofilm samples; and (ii) no variant was found in the same gene in any of the control biofilm samples.
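The convergence filter above reduces to a simple set operation; the sketch below applies it to hypothetical per-sample sets of genes carrying non-synonymous variants (the gene names are placeholders, not variant calls from this study).

```python
# Minimal sketch of the convergent-variant filter: keep genes with a
# non-synonymous variant in all three colistin-treated biofilms (criterion i)
# and in none of the control biofilms (criterion ii). Gene names are
# hypothetical placeholders.
treated = [
    {"geneA", "geneB", "geneC"},   # treated replicate 1
    {"geneA", "geneB"},            # treated replicate 2
    {"geneA", "geneB", "geneD"},   # treated replicate 3
]
control = [
    {"geneC"},                     # control replicate 1
    set(),                         # control replicate 2
    {"geneD"},                     # control replicate 3
]

convergent = set.intersection(*treated) - set.union(*control)
print(convergent)  # -> {'geneA', 'geneB'}
```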
Pulsed-SILAC Three independent biological replicates (BR1, BR2 and BR3) were performed. Biofilms treated with colistin were grown and prepared for pulsed-SILAC analysis 22 . PAO1 Δ lysA biofilms were cultivated in 25 cm × 5 mm Ø flow tubes as described previously 49 , using ABTG medium, 500 μM C12 L -lysine and 192 mg l −1 Amino acid Drop-out Mix Minus Lysine without Yeast Nitrogen Base (United States Biological, MA) at 37 °C. Flow channels were inoculated with 1 ml of a 1:1,000 dilution of an overnight culture using a syringe and were incubated without flow for 1 h. A Cole-Parmer peristaltic pump was used to supply medium at a velocity of 0.2 mm s −1 , which corresponds to laminar flow with a Reynolds number of 0.02. After culturing for 72 h, biofilms were treated with 10 μg ml −1 colistin for 6 h to kill the antibiotic-sensitive cells in the biofilms. Biofilms were then treated with fresh ABTG supplemented with 500 μM C13 L -lysine and 192 mg l −1 Amino acid Drop-out Mix Minus Lysine without Yeast Nitrogen Base for 48 h to tag newly synthesized proteins in live antibiotic-tolerant cells with C13 L -lysine. Biofilms were washed with PBS and lysed with a probe sonicator (5 s on, 5 s off for 5 min in an ice slurry, amplitude 30%). Protein samples were separated on an SDS–PAGE gel. Protein bands were washed with ddH 2 O and then with 50% acetonitrile (ACN)/50% 25 mM NH 4 HCO 3 with vigorous vortexing for 30 min, and dehydrated with 100% ACN until the gel particles became white. They were then reduced with 10 mM dithiothreitol at 56 °C for 1 h and alkylated with 55 mM iodoacetamide (IAA) for 45 min in the dark. The proteins were then washed with 25 mM NH 4 HCO 3 and 50% ACN/50% 25 mM NH 4 HCO 3 . Gel particles were then dehydrated with 100% ACN and dried under vacuum. Trypsin (V5111, Promega, Madison, WI) was added to the gel particles at a ratio of 1:30 and allowed to be completely adsorbed by the gel particles. 25 mM NH 4 HCO 3 was then added to completely cover the particles, and the samples were incubated at 37 °C overnight. Peptides were extracted from the gel particles by two 20-min sonications in the presence of 50% ACN containing 0.1% trifluoroacetic acid (TFA). Extracts were combined, vacuum-dried and resuspended in 0.1% FA for liquid chromatography (LC)–MS/MS analysis. Peptides were separated and analysed on a Dionex Ultimate 3000 RSLCnano system coupled to a Q Exactive (Thermo Fisher, MA) as previously described 50 . Approximately 1 μg of peptide from each pooled fraction was injected into an Acclaim peptide trap column (Thermo Fisher, MA, USA) via the Dionex RSLCnano auto-sampler. Peptides were separated in a Dionex EASY-Spray 75 μm × 10 cm column packed with PepMap C18 3 μm, 100 Å (Thermo Scientific, MA, USA) at room temperature. The flow rate was 300 nl min −1 . Mobile phase A (0.1% formic acid in 5% ACN) and mobile phase B (0.1% formic acid in 90% ACN) were used to establish a 60-min gradient. Peptides were then analysed on the Q Exactive with an EASY nanospray source (Thermo Fisher, MA) at an electrospray potential of 1.5 kV. A full MS scan (350–1,600 m / z range) was acquired at a resolution of 70,000 at m / z 200 with a maximum ion accumulation time of 100 ms. Dynamic exclusion was set to 15 s.
The resolution of the higher-energy collisional dissociation (HCD) spectra was set to 17,500 at m / z 200. The automatic gain control (AGC) settings of the full MS scan and the MS 2 scan were 3E6 and 2E5, respectively. The 10 most intense ions above the 2,000 count threshold were selected for fragmentation in HCD, with a maximum ion accumulation time of 100 ms. An isolation width of 2 was used for MS 2 . Singly charged and unassigned ions were excluded from MS/MS. For HCD, the normalized collision energy was set to 28%. The underfill ratio was defined as 0.2%. Two injections were performed for the first biological replicate and three injections each for the second and third biological replicates to evaluate the technical reproducibility of the instrument and workflow. Raw data files of the eight technical replicates were processed and searched as eight experiments using MaxQuant (v1.5.2.8) (refs 25 , 26 ) and the GenBank P. aeruginosa protein database (downloaded on 21 May 2015; 55,063 sequences, 17,906,244 residues) together with the common contaminant proteins. The standard search type with a multiplicity of 2, a maximum of 3 labelled amino acids and heavy-labelled Lys6 was used for the pulsed-SILAC quantitation. The database search was performed using the Andromeda search engine bundled with MaxQuant, using the MaxQuant default parameters for a Q Exactive Orbitrap mass spectrometer. The peptide mass tolerances for the first and main searches were 20 and 4.5 parts per million (p.p.m.), respectively, while the MS/MS match tolerance was 20 p.p.m. with FTMS de-isotoping enabled. Up to two missed trypsin cleavage sites per peptide were allowed. Carbamidomethylation (C) was set as a fixed modification. Oxidation (M) and deamidation (NQ) were set as variable modifications. The search was performed in the Revert decoy mode with PSM FDR, protein FDR and site decoy fraction set to 0.01. A total of 4,382 proteins, including 4,250 P. aeruginosa proteins, were identified by Andromeda in MaxQuant at FDR <1% ( Supplementary Data 2 ). Scatter plots of the inter-technical replicates of each biological replicate were used to evaluate the technical reproducibility of the results ( Supplementary Data 2 ). The technical reproducibility was good and followed the order BR2 ( R 2 ≈ 0.95) > BR3 ( R 2 ≈ 0.9) > BR1 ( R 2 = 0.86). To evaluate the repeatability of the biological replicates, all LC–MS/MS raw data files of BR1, BR2 or BR3 were grouped and searched separately using MaxQuant to obtain the pulsed-SILAC protein abundance levels in the three biological replicates. The scatter plots of inter-biological replicates are all included in Supplementary Data 2 .
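To illustrate the logic of the pulsed labelling, the sketch below computes the heavy-label fraction H/(H + L) per protein from a toy intensity table: proteins synthesized after the switch to C13 lysine carry the heavy label, so a high heavy fraction indicates new synthesis in the surviving tolerant cells. The table and column names are hypothetical and not actual MaxQuant output.

```python
# Minimal sketch of pulsed-SILAC interpretation: rank proteins by the fraction
# of heavy (C13-lysine) signal, a proxy for synthesis after the label switch.
# Intensities below are invented for illustration.
import pandas as pd

proteins = pd.DataFrame({
    "protein": ["LasB", "PhzB1", "ChiC", "RpoA"],
    "intensity_heavy": [8.2e7, 5.1e7, 3.3e7, 1.0e6],
    "intensity_light": [1.5e7, 2.0e7, 1.1e7, 9.0e7],
})
proteins["heavy_fraction"] = proteins["intensity_heavy"] / (
    proteins["intensity_heavy"] + proteins["intensity_light"]
)
# QS-regulated proteins such as LasB would be expected to rank highly here
print(proteins.sort_values("heavy_fraction", ascending=False))
```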
Relative quantification of OdDHL Effluents from flow chambers were collected and filtered through 0.2-μm filters, and the filtrates were retained. Overnight cultures of the reporter strain Δ lasI Δ rhlI / lasB-gfp (ref. 51 ) were adjusted to OD 600 =0.15 using ABTGC medium. A volume of 100 μl of filtrate was added to 100 μl of Δ lasI Δ rhlI / lasB-gfp in a 96-well plate (Nunc, Denmark). Because Δ lasI Δ rhlI cannot produce its own OdDHL, lasB-gfp is induced by the addition of filtrates containing OdDHL. Fluorescence from lasB-gfp expression (expressed in r.f.u.) was measured for each well using a microplate reader (Tecan Infinite 2000) and was normalized to the OD 600 of each well. Experiments were performed in triplicate, and results are shown as the mean±s.d. Elastase assay Effluents from flow chambers were collected, centrifuged to remove bacterial cells and filtered through 0.2-μm filters. An Elastase Assay Kit (EnzChek) was used according to the manufacturer’s instructions. ABTG medium was used as the negative control and elastase was used as the positive control. The fluorescence from each reaction (expressed in r.f.u.) was measured using a microplate reader (Tecan Infinite 2000) and was normalized to the OD 600 for each well of the 96-well microplate. Experiments were performed in triplicate, and results are shown as the mean±s.d. Pyocyanin assay Relative pyocyanin concentrations were quantified as described previously 52 . Effluents from flow chambers were collected, centrifuged to remove bacterial cells and filtered through 0.2-μm filters. ABTG medium was used as the negative control. A volume of 10 ml of filtrate was mixed with 1 ml of chloroform by vortexing. The lower chloroform phase was transferred to a new tube containing 200 μl of 0.2 M hydrochloric acid and vortexed. The top phase was then carefully transferred to a 96-well plate to measure OD 520 using a microplate reader (Tecan Infinite 2000). Experiments were performed in triplicate, and the results are shown as the mean±s.d. Cytotoxicity assay for macrophages In all, 5 × 10 5 RAW264.7 (ATCC TIB-71) murine macrophages were grown in 24-well culture plates as previously described 38 . The effluent from the flow chamber biofilms with or without colistin treatment was collected and filtered through a 0.2-μm filter to remove bacterial cells. ABTG medium was used as a negative control, while ABTG medium with 10 μg ml −1 colistin was used to show that colistin had no side effects on macrophages. The resultant supernatant was mixed at equal volumes with fresh DMEM and added to the macrophages. Macrophages were incubated for 4 h at 37 °C, 5% CO 2 . The cytotoxicity of the supernatants on macrophages was determined by adding 20 μM PI to monitor cell integrity. Macrophages stained red by PI under epifluorescence microscopy ( × 20 objective) were counted as dead; live cells remained unstained. Cells from five images of each sample were enumerated and the ratio of dead cells to live cells was calculated. Experiments were performed in triplicate, and the results are shown as the mean±s.d. Eradication of biofilms with colistin and the DIPY iron chelator Fluorescent-tagged P. aeruginosa biofilms were cultivated as described above. After 72 h of growth, PAO1 biofilms were treated with 10 μg ml −1 colistin or 100 μg ml −1 DIPY plus 10 μg ml −1 colistin. After 48 h of treatment, 300 μl of 20 μM PI was injected into each flow channel to stain dead cells in the biofilm. Biofilms were observed under the CLSM as described above. As pyoverdine is a fluorescent metabolite 53 , its presence and localization in the biofilm were observed by CLSM with Ex 358 nm/Em 461 nm. Experiments were performed in triplicate, and results are shown as the mean±s.d. Eradication of biofilms by colistin and erythromycin Fluorescent-tagged P. aeruginosa biofilms were cultivated as described above. After 72 h of growth, PAO1 biofilms were treated with 10 μg ml −1 colistin or 100 μg ml −1 erythromycin plus 10 μg ml −1 colistin. After 48 h of treatment, 300 μl of 20 μM PI was injected into each flow channel to stain dead cells in the biofilm. Biofilms were observed under the CLSM as described above. Experiments were performed in triplicate, and results are shown as the mean±s.d.
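Several of the assays above (OdDHL, elastase, pmr-gfp) normalize a bulk fluorescence signal to culture density; a minimal sketch of that r.f.u./OD600 normalization with triplicate summary statistics is shown below, using illustrative values.

```python
# Minimal sketch of the r.f.u./OD600 normalization used for the reporter
# assays, summarized as mean +/- s.d. across triplicates. Values are
# illustrative, not measurements from the study.
import statistics

def normalized_signal(rfu_values, od600_values):
    norm = [rfu / od for rfu, od in zip(rfu_values, od600_values)]
    return statistics.mean(norm), statistics.stdev(norm)

mean, sd = normalized_signal([5200, 4900, 5600], [0.52, 0.49, 0.55])
print(f"{mean:.0f} +/- {sd:.0f} r.f.u. per OD600 unit")
```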
Time-dependent killing of planktonic PAO1 with erythromycin and colistin The minimal inhibitory concentration (MIC) of erythromycin is 2,000 μg ml −1 , while the MIC of colistin is 1 μg ml −1 for the PAO1 strain. Planktonic P. aeruginosa strains were grown at 37 °C in ABTGC with 100 μg ml −1 erythromycin, 10 μg ml −1 colistin, or 100 μg ml −1 erythromycin+10 μg ml −1 colistin. Cells were collected at 0, 1 and 3 h. PAO1 cells were serially diluted, plated on LB agar and incubated at 37 °C overnight. C.f.u. per ml was calculated by multiplying the average number of colonies by the dilution factor and dividing by the plated volume. Mouse implant infection model The animals used in this study were 7-week-old female BALB/c mice (Taconic M&B A/S). The animal experiments were performed in accordance with the NACLAR Guidelines and the Animals and Birds (Care and Use of Animals for Scientific Purposes) Rules of the Agri-Food & Veterinary Authority of Singapore (AVA), with authorization and approval by the Institutional Animal Care and Use Committee (IACUC) of Nanyang Technological University, under the permit number A-0191 AZ. PAO1 biofilms were grown on cylindrical implants (3 mm × 5 mm Ø) in 0.9% NaCl at 37 °C with shaking at 110 r.p.m. for 20 h, as described previously 40 , 54 . The biofilm-coated implants were washed three times with 0.9% NaCl and transplanted into the peritoneum of the mice after they were anesthetized with 100 mg kg −1 ketamine and 10 mg kg −1 xylazine. Antibiotics were injected at the site of implantation. No antibiotics, 1 mg kg −1 colistin, 10 mg kg −1 erythromycin, or 1 mg kg −1 colistin and 10 mg kg −1 erythromycin were used on groups of five mice. Mice were killed after 24 h of treatment. Implants were collected, sonicated in 0.9% NaCl in an ice water bath using an Elmasonic P120H (Elma, Germany; power=50% and frequency=37 kHz) for 10 min and vortexed three times for 10 s to disrupt the biofilms. Spleens were also collected and completely homogenized using a Bio-Gen PRO200 Homogenizer (Pro Scientific, US) at maximum power on ice. Samples were serially diluted, plated on LB agar and incubated at 37 °C overnight. C.f.u. per ml was calculated by multiplying the average number of colonies by the dilution factor and dividing by the plated volume. Experiments were performed with five replicates, and the results are shown as the mean±s.d. Additional information Accession codes: Whole-genome sequence data for P. aeruginosa biofilms have been deposited in the NCBI Short Read Archive (SRA) database with accession code SRP058610 . LC–MS/MS raw data of the eight replicates and results for protein and peptide identification and quantification from MaxQuant have been submitted to the ProteomeXchange Consortium via the PRIDE data repository with the dataset identifier PXD002369. How to cite this article: Chua, S. L. et al. Selective labelling and eradication of antibiotic-tolerant bacterial populations in Pseudomonas aeruginosa biofilms. Nat. Commun. 7:10750 doi: 10.1038/ncomms10750 (2016).
Scientists at Nanyang Technological University, Singapore (NTU Singapore) have discovered that antibiotics can continue to be effective if bacteria's cell-to-cell communication and ability to latch on to each other are disrupted. This research breakthrough is a major step forward in tackling the growing concern of antibiotic resistance, opening up new treatment options for doctors to help patients fight chronic and persistent bacterial infections. The study, led by Assistant Professor Yang Liang from the Singapore Centre for Environmental Life Sciences Engineering (SCELSE) at NTU, found that a community of bacteria, known as a biofilm, can put up a strong line of defence to resist antibiotics. The NTU team has successfully demonstrated how biofilms can be disrupted so that antibiotics can continue their good work. The research was published recently in Nature Communications. "Many types of bacteria that used to be easily killed by antibiotics have started to develop antibiotic resistance or tolerance, either through acquiring antibiotic-resistance genes or by forming biofilms," said Asst Prof Yang, who also teaches at NTU's School of Biological Sciences. "The US Centers for Disease Control and Prevention estimates that over 60 per cent of all bacterial infections are related to biofilms. Our study has shown that by disrupting the cell-to-cell communication between bacteria and their ability to latch on to each other, we can compromise the biofilms, leaving the bacteria vulnerable and easily killed by antibiotics." Bacterial resistance to antibiotics is growing rapidly world-wide, and this puts at risk the ability to treat common infections in the community and hospitals. The World Health Organisation states on its antimicrobial resistance factsheet that "without urgent, coordinated action, the world is heading towards a post-antibiotic era, in which common infections and minor injuries, which have been treatable for decades, can once again kill". Associate Professor Kevin Pethe, an expert in antibiotic development and infectious diseases from NTU's Lee Kong Chian School of Medicine, said that this discovery may yield new treatment options that doctors can employ against chronic and persistent bacterial infections. "Being able to disable biofilms and their protective benefits for the bacteria is a big step towards tackling the growing concern of antibiotic resistance," said Assoc Prof Pethe. "While the scientific community is developing new types of antibiotics and antimicrobial treatments, this discovery may help to buy time by improving the effectiveness of older drugs." How the discovery was made Asst Prof Yang's team discovered the mechanisms by which bacteria are able to tolerate antibiotics using the common bacterium Pseudomonas aeruginosa. The bacteria were allowed to form a wall of biofilm in a microfluidic system, and an antibiotic was then introduced. A large portion of the bacterial cells were killed by the antibiotic, leaving only a small fraction of antibiotic-tolerant cells. However, these cells were able to reproduce rapidly and dominate the community. The scientists then used an FDA-approved drug that disrupts both cell-to-cell communication (known as quorum sensing) and the 'velcro'-like appendages that the cells use to move and stick to each other. This drug was added to the antibiotic, and together they managed to kill all the bacterial cells. The same tests were then performed on mice with infected implants.
It was found that only mice treated with a combination of the anti-biofilm compound and antibiotics had their infections completely eradicated. Interdisciplinary research This breakthrough was made possible through an interdisciplinary approach, where experts from three different fields – microbial ecology, systems biology and chemical biology – came together to tackle the problem. The NTU research team included proteomics expert Assoc Prof Newman Sze Siu Kwan from NTU's School of Biological Sciences. Proteomics was the key method used to discover the chemical signals that bacterial cells in the biofilm use to communicate with each other. Another researcher in the team is Professor Michael Givskov, a world-leading scientist in the area of biofilm research and bacterial cell-to-cell communication at SCELSE. Together, the team found that traditional methods of isolating the bacteria from the biofilm for observation did not work, as the bacteria behave differently after being isolated from the biofilm. This study, supported by the Ministry of Education Academic Research Fund, took Asst Prof Yang and his team four years to complete. Moving forward, they will seek more ways to improve the efficacy of antibiotics against persistent infections. "What we hope to do is to develop new compounds that are able to better target biofilms. This will help existing drugs perform better at overcoming biofilm infections, which are commonly seen in patients with artificial implants and chronic wounds, as these patients currently have very limited effective treatment options," said Asst Prof Yang.
10.1038/ncomms10750
Medicine
Brains of people with autism spectrum disorder share similar molecular abnormalities
Genome-wide changes in lncRNA, splicing, and regional gene expression patterns in autism, Nature, nature.com/articles/doi:10.1038/nature20612 Journal information: Nature
http://nature.com/articles/doi:10.1038/nature20612
https://medicalxpress.com/news/2016-12-brains-people-autism-spectrum-disorder.html
Abstract Autism spectrum disorder (ASD) involves substantial genetic contributions. These contributions are profoundly heterogeneous but may converge on common pathways that are not yet well understood 1 , 2 , 3 . Here, through post-mortem genome-wide transcriptome analysis of the largest cohort of samples analysed so far, to our knowledge 4 , 5 , 6 , 7 , we interrogate the noncoding transcriptome, alternative splicing, and upstream molecular regulators to broaden our understanding of molecular convergence in ASD. Our analysis reveals ASD-associated dysregulation of primate-specific long noncoding RNAs (lncRNAs), downregulation of the alternative splicing of activity-dependent neuron-specific exons, and attenuation of normal differences in gene expression between the frontal and temporal lobes. Our data suggest that SOX5, a transcription factor involved in neuron fate specification, contributes to this reduction in regional differences. We further demonstrate that a genetically defined subtype of ASD, chromosome 15q11.2-13.1 duplication syndrome (dup15q), shares the core transcriptomic signature observed in idiopathic ASD. Co-expression network analysis reveals that individuals with ASD show age-related changes in the trajectory of microglial and synaptic function over the first two decades, and suggests that genetic risk for ASD may influence changes in regional cortical gene expression. Our findings illustrate how diverse genetic perturbations can lead to phenotypic convergence at multiple biological levels in a complex neuropsychiatric disorder. Main We performed rRNA-depleted RNA sequencing (RNA-seq) of 251 post-mortem samples of frontal and temporal cortex and cerebellum from 48 individuals with ASD and 49 control subjects (Methods and Extended Data Fig. 1a–h ). We first validated differential gene expression (DGE) between samples of cortex from control individuals and those with ASD (ASD cortex) by comparing gene expression with that of different individuals from those previously profiled by microarray 8 , and found strong concordance ( R 2 = 0.60; Fig. 1a , Extended Data Fig. 1i ). This constitutes an independent technical and biological replication of shared molecular alterations in ASD cortex. Figure 1: Transcriptome-wide differential gene expression and alternative splicing in ASD. a , Replication of DGE between ASD and control cortex from previously analysed samples (16 ASD and 16 control on microarray 8 ) with new age- and sex-matched cortex samples (15 ASD and 17 control). b , P value distribution of the linear mixed effect (LME) model DGE results for cortex and cerebellum. c , LINC00693 and LINC00689 are upregulated in ASD and downregulated during cortical development (developmental expression data from ref. 12 ). Two-sided ASD–control P values are computed by the LME model, developmental P values are computed by analysis of variance (ANOVA). FPKM, fragments per kilobase million mapped reads. d , UCSC genome browser track displaying reads per million (RPM) in ASD and control samples along with sequence conservation for LINC00693 and LINC00689 . e , Cell-type enrichment analysis of differential alternative splicing events from cortex using exons with ΔPSI (per cent spliced in) >50% in each cell type compared to the others 17 . f , g , Correlation between the first principal component (PC1) of the cortex differential splicing (DS) set and gene expression of neuronal splicing factors in cortex ( f ) and cerebellum ( g ) (DGE P value in parentheses). 
h , Enrichment among ASD differential splicing events and events regulated by splicing factors and neuronal activity (see Methods). i , Correlations between the PC1 across the ASD versus control analyses for different transcriptome subcategories. Bottom left: scatterplots of the principal components for ASD (red) and control (black) individuals. Top right: pairwise correlation values between principal components. We next combined covariate-matched samples from individuals with idiopathic ASD to evaluate changes across the entire transcriptome. Compared to control cortex, 584 genes showed increased expression and 558 showed decreased expression in ASD cortex ( Fig. 1b ; Benjamini–Hochberg FDR < 0.05, linear mixed effects model; see Methods). This DGE signal was consistent across methods, unrelated to major confounders, and found in more than two-thirds of ASD samples ( Extended Data Fig. 1j–m ). We performed a classification analysis to confirm that gene expression in ASD could separate samples by disease status ( Extended Data Fig. 2a ) and confirmed the technical quality of our data with qRT–PCR ( Extended Data Fig. 2b, c ). We next evaluated enrichment of the gene sets for pathways and cell types ( Extended Data Fig. 2d, e ), and found that the downregulated set was enriched in genes expressed in neurons and involved in neuronal pathways, including PVALB and SYT2 , which are highly expressed in interneurons; by contrast, the upregulated gene set was enriched in genes expressed in microglia and astrocytes 8 . Although there was no significant DGE in the cerebellum (FDR < 0.05, P distributions in Fig. 1b ), similar to observations in a smaller cohort 8 , there was a replication signal in the cerebellum and overall concordance between ASD-related fold changes in the cortex and cerebellum ( Extended Data Fig. 2f–h ). The lack of significant DGE in the cerebellum is explained by the fact that changes in expression were consistently stronger in the cortex than in the cerebellum ( Extended Data Fig. 2h ), which suggests that the cortex is more selectively vulnerable to these transcriptomic alterations. We also compared our results to an RNA-seq study of protein coding genes in the occipital cortex of individuals with ASD and control subjects 4 . Despite significant technical differences that reduce power to detect DGE, and profiling of different brain regions in that study, there was a weak but significant correlation in fold changes, which was due mostly to upregulated genes in both studies ( P = 0.038, Extended Data Fig. 2i, j ). We next explored lncRNAs, most of which have little functional annotation, and identified 60 lncRNAs in the DGE set (FDR < 0.05, Extended Data Fig. 2k ). Multiple lines of evidence, including developmental regulation in RNA-seq datasets and epigenetic annotations, support the functionality of most of these lncRNAs ( Supplementary Table 2 ). Moreover, 20 of these lncRNAs have been shown to interact with microRNA (miRNA)–protein complexes, and 9 with the fragile X mental retardation protein (FMRP), whose mRNA targets are enriched in ASD risk genes 9 , 10 . As a group, these lncRNAs are enriched in the brain relative to other tissues ( Extended Data Fig. 2l, m ) and most that have been evaluated across species exhibit primate-specific expression patterns in the brain 11 , which we confirm for several transcripts ( Supplementary Information , Extended Data Fig. 3a–h ). We highlight two primate-specific lncRNAs, LINC00693 and LINC00689 .
Both interact with miRNA processing complexes and are typically downregulated during development 12 , but are upregulated in ASD cortex ( Fig. 1c, d , Extended Data Fig. 2n ). These data show that dysregulation of lncRNAs, many of which are brain-enriched, primate-specific, and predicted to affect protein expression through miRNA or FMRP interactions, is an integral component of the transcriptomic signature of ASD. Previous studies have evaluated alternative splicing in ASD and its relation to specific splicing regulators in small sets of selected samples across individuals 8 , 13 , 14 . Given the increased sequencing depth, reduced 5′–3′ sequencing bias, and larger cohort represented here, we were able to perform a comprehensive analysis of differential alternative splicing ( Extended Data Fig. 4a ). We found a significant differential splicing signal over background in the cortex (1,127 differential splicing events in 833 genes; Methods), but not in the cerebellum ( P distributions in Extended Data Fig. 4b, c ). We confirmed that confounders do not account for the differential splicing signal, reproduced the global differential splicing signal with an alternative pipeline 15 , and performed technical validation with RT–PCR ( Extended Data Figs 4d–g , 5a ), confirming the differential splicing analysis. Notably, the differential splicing molecular signature is not driven by DGE ( Extended Data Fig. 4h ), consistent with the observation that splicing alterations are related to common disease risk independently of gene expression changes 16 . Cell-type specific enrichment and pathway analysis of alternative splicing demonstrated that most differential splicing events involve exclusion of neuron-specific exons 17 ( Fig. 1e , Extended Data Fig. 4i ). Therefore, we next investigated whether the shared splicing signature in ASD could be explained by perturbations in splicing factors known to be important in nervous system function 8 , 14 ( Extended Data Fig. 4j ), and found high correlations between splicing factor expression and differential splicing in the cortex ( Fig. 1f ) but not the cerebellum ( Fig. 1g ). The absence of neuronal splicing factor DGE or correlation with splicing changes in the cerebellum is consistent with the absence of a differential splicing signal in the cerebellum and suggests that these splicing factors contribute to cortex-biased differential splicing. Previous experimental perturbation of three splicing factors, Rbfox1 (ref. 18 ), SRRM4 (ref. 19 ), and PTBP1 (ref. 20 ), shows strong overlap with the differential splicing changes found in ASD cortex, further supporting these predicted relationships ( Fig. 1h , Extended Data Fig. 5b ). Given that differential splicing events in ASD cortex overlap significantly with those that are targets of neuronal splicing factors, we hypothesized that some of these events may be involved in activity-dependent gene regulation. Indeed, differential splicing events were significantly enriched in those previously shown to be regulated by neuronal activity 21 ( Fig. 1h ). This overlap supports a model of ASD pathophysiology based on changes in the balance of excitation and inhibition and in neuronal activity 22 and suggests that alterations in transcript structure are likely to be an important component. 
When we compared the first principal component across samples for protein coding DGE, lncRNA DGE and differential splicing, we found remarkably high correlations ( R 2 > 0.8), indicating that molecular convergence is likely to be a unitary phenomenon across multiple levels of transcriptome regulation in ASD ( Fig. 1i ). Previous analysis suggested that the typical pattern of transcriptional differences between the frontal and temporal cortices may be attenuated in ASD 8 . We confirmed this in our larger cohort and identified 523 genes that differed significantly in expression between the frontal cortex and the temporal cortex in control subjects, but not those with ASD ( Fig. 2a ); we refer to these genes as the ‘attenuated cortical patterning’ (ACP) set ( Extended Data Fig. 6a ). We demonstrated the robustness of attenuation in cortical patterning in ASD by confirming that the ACP set was not more variable than other genes, that attenuation of cortical patterning was robust to removal of previously analysed samples 8 , and that the effect could also be observed using a different classification approach ( Extended Data Fig. 6b–h ). Figure 2: Attenuation of cortical patterning in ASD. a , Heat map of genes exhibiting DGE between frontal and temporal cortex at FDR < 0.05. In control cortex and ASD cortex, 551 genes and 51 genes, respectively, show DGE in frontal versus temporal cortex. The ACP set is defined as the 523 genes that show DGE between regions in control but not ASD samples. RIN, RNA integrity number. b , Schematic of transcription factor motif enrichment upstream of genes in the ACP set. c , SOX5 exhibits attenuated cortical patterning in ASD (lines: frontal–temporal pairs from the same individual). d , Correlation between SOX5 expression and predicted targets in control and ASD samples for all ACP genes (top left), SOX5 targets from the ACP set (top right), SOX5 non-targets from the ACP set (bottom left), and background (all other genes, bottom right). Plots show the distribution of Pearson correlation values between SOX5 and other genes in ASD and control samples. Δ R , change in median R value between distributions. e , Gene Ontology (GO) term enrichment for genes upregulated and downregulated after SOX5 overexpression in neural progenitor cells. f , Enrichment analysis of the SOX5 differential gene expression (DGE) set in the ACP set and all other genes (background). P represents significance in enrichment over background by two-sided Fisher’s exact test. Pathway and cell-type analysis showed that the ACP set is enriched in Wnt signalling, calcium binding, and neuronal genes ( Extended Data Fig. 6i, j , Supplementary Information ). We next explored potential regulators of cortical patterning by transcription factor binding site enrichment ( Extended Data Fig. 6k ). Among the transcription factors identified, SOX5 was of particular interest because of its known role in mammalian corticogenesis 23 , 24 , its sole membership in the ACP set, and its correlation with predicted targets in the brains of control subjects, which is lost in ASD ( Fig. 2b–d ). We confirmed that a significant proportion of ACP genes are regulated by SOX5 by overexpressing it in human neural progenitors. SOX5 induced synaptic genes and repressed cell proliferation ( Fig. 2e ), and predicted SOX5 targets exhibited net downregulation, consistent with the repressive function of SOX5 ( Fig. 2f , Extended Data Fig. 6l, m ).
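A minimal sketch of the correlation comparison underlying Fig. 2d is given below; it assumes per-group expression vectors for SOX5 and its predicted targets, with random arrays standing in for the real expression matrices.

```python
# Minimal sketch of the Fig. 2d logic: correlate SOX5 expression with each
# predicted target across samples, separately in ASD and control, and report
# the change in median R. Random data stand in for real expression values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_ctl, n_asd, n_targets = 40, 40, 100
sox5_ctl = rng.normal(size=n_ctl)
sox5_asd = rng.normal(size=n_asd)
targets_ctl = rng.normal(size=(n_targets, n_ctl))
targets_asd = rng.normal(size=(n_targets, n_asd))

r_ctl = [pearsonr(sox5_ctl, t)[0] for t in targets_ctl]
r_asd = [pearsonr(sox5_asd, t)[0] for t in targets_asd]
delta_r = np.median(r_asd) - np.median(r_ctl)
print(f"change in median R (ASD - control): {delta_r:.3f}")
```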
These findings support the prediction that attenuated patterning of the transcription factor SOX5 between cortical regions contributes to direct alterations in patterning of SOX5 targets. We also evaluated DGE and differential splicing in nine individuals with dup15q (which is among the most common and penetrant forms of ASD) and independent controls ( Extended Data Fig. 7a, b ). Significant upregulation in the 15q11.1–13.2 region ( cis ) was evident in duplication carriers, but not in idiopathic ASD ( Fig. 3a ). Remarkably, genome-wide ( trans ) DGE and differential splicing patterns were highly concordant between dup15q and ASD ( Fig. 3b, c , Extended Data Fig. 7c–e ). Moreover, alterations in dup15q cortex were of greater magnitude and more homogeneous than those observed in idiopathic ASD cortex ( Fig. 3d , Extended Data Fig. 7f, g ). Analysis of DGE in the cerebellum confirmed a weaker signal than in the cortex and demonstrated that cis changes in dup15q cerebellum ( Extended Data Fig. 7h–j ) were more concordant with the cortex than trans changes ( Extended Data Fig. 7k, l ), further supporting the observation that the cortex is selectively vulnerable to transcriptomic alteration in ASD. Together, the DGE and differential splicing analyses in dup15q provide further biological validation of the ASD transcriptomic signature and demonstrate that a genetically defined form of ASD exhibits similar changes to idiopathic ASD. Figure 3: Duplication 15q syndrome recapitulates transcriptomic changes in idiopathic ASD. a , DGE changes across the 15q11–13.2 region for ASD and dup15q compared to control. Error bars show 95% confidence intervals for the fold changes. *FDR < 0.05 across this region. BP, breakpoint. b , Comparison of DGE effect sizes in dup15q versus control and ASD versus control. c , Comparison of differential alternative splicing effect sizes in dup15q versus control and ASD versus control. d , Average linkage hierarchical clustering of dup15q samples and controls using the DGE and differential alternative splicing (DS) gene sets. We next applied weighted gene co-expression network analysis (WGCNA; Methods) and evaluated the biological functions and ASD association of the 24 co-expression modules identified ( Extended Data Fig. 8a–d ). Of the six modules associated with ASD, three were upregulated and three were downregulated, and each showed significant cell-type enrichment ( Fig. 4a, b ). This analysis corroborates and extends previous work by identifying sub-modules of those previously identified, thus demonstrating greater biological specificity ( Extended Data Figs 8e , 9a ). It also confirms that downregulated modules are enriched in synaptic function and neuronal genes, that upregulated modules are enriched in genes associated with inflammatory pathways and glial function 4 , 8 , and that microglial and synaptic modules exhibit significant anticorrelation ( Fig. 4c ). Furthermore, the downregulated modules CTX.M10 and CTX.M16 are enriched in genes previously related to neuronal firing rate, consistent with the overlap of dysregulated splicing with events regulated by neuronal activity ( Extended Data Fig. 9b and Fig. 1h ). One glial and one neuronal module are highlighted in Fig. 4d, e (the remainder in Extended Data Fig. 9c–e ).
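Cell-type and gene-set enrichments such as those in Fig. 4b rest on two-sided Fisher's exact tests (see Methods); a minimal sketch of one such test from a 2 × 2 overlap table, with illustrative counts, follows.

```python
# Minimal sketch of a gene-set enrichment test: is a module over-represented
# for cell-type marker genes relative to the genomic background? Counts are
# illustrative only.
from scipy.stats import fisher_exact

module_markers = 120          # module genes that are cell-type markers
module_nonmarkers = 438       # remaining module genes
background_markers = 800      # background genes that are markers
background_nonmarkers = 18000 # remaining background genes

odds_ratio, p_value = fisher_exact(
    [[module_markers, module_nonmarkers],
     [background_markers, background_nonmarkers]],
    alternative="two-sided",
)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.2e}")
```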
Remarkably, the upregulated module CTX.M20 was not found in previous analyses, overlaps significantly with the ACP set (FDR < 0.05, Extended Data Fig. 9a ), and contains genes implicated in development and regulation of cell differentiation ( Fig. 4f ). Figure 4: Co-expression network analysis. a , Signed association of module eigengenes with diagnosis (Bonferroni-corrected P value from an LME model, see Extended Data Fig. 8c and Methods). Positive values indicate modules with an increased expression in ASD samples. Grey bars with labels signify six ASD-associated modules. b , Cell-type enrichment for the ASD-associated modules. c , Heat map of correlations between ASD-associated module eigengenes sorted by average linkage hierarchical clustering. d – f , Module plots displaying the top 15 hub genes and top 50 connections along with the GO term enrichment of each module. g , Plot of CTX.M20 and CTX.M19 module eigengenes across age. P values are for the difference between temporal trajectories for ASD and control by permutation test (see Methods). We also leveraged our large sample and younger age-matched ASD and control samples to detect differences in developmental trajectories in ASD compared to control subjects. We identified a remarkable difference in CTX.M19 and CTX.M20 during the first two decades of life ( Fig. 4g , additional age trajectories in Extended Data Fig. 9f ) that is most consistent with an evolving process during early brain development that stabilizes starting in late childhood and early adolescence. We also found preservation of most cortex modules in the cerebellum, but with weaker associations to ASD ( Extended Data Fig. 10a–h , Supplementary Table 4 ), consistent with the DGE analysis showing that ASD-related changes are substantially smaller in the cerebellum. To determine the role of genetic factors in transcriptomic dysregulation, we evaluated enrichment in genes affected by ASD-associated rare mutations and common variants ( Extended Data Fig. 9a ). One module, CTX.M24, exhibited significant enrichment for rare mutations found in ASD, while rare de novo mutations associated with intellectual disability were most strongly enriched in CTX.M22 (FDR < 0.05, Extended Data Fig. 9a ). Remarkably, CTX.M24 was significantly enriched for lncRNAs, genes expressed highly during fetal cortical development, and genes harbouring protein-disrupting mutations found in ASD, suggesting that lncRNAs will be important targets for investigation in ASD 10 , 25 (FDR < 0.05, Extended Data Fig. 9a, g ). By contrast, enrichment for ASD-associated common variation was observed in CTX.M20 (FDR < 0.1, Extended Data Fig. 9h–l , Methods). As CTX.M20 is enriched for the ACP gene set, this suggests a potential link between polygenic risk and regional attenuation of gene expression in ASD. Several other ASD-associated modules showed a weaker common variant signal for ASD, including CTX.M16, which also shows a signal for schizophrenia polygenic risk. However, other phenotypes with larger, better-powered genome-wide association studies (GWAS) also demonstrate enrichment ( Extended Data Fig. 9h–i ). It will be necessary to perform this analysis with larger ASD GWAS in the future to fully understand the extent and specificity of the contribution of common variation to the transcriptome alterations in ASD. These data contribute to a consistent emerging picture of the molecular pathology of ASD 4 , 7 , 8 , 10 , 25 , 26 , 27 .
Parsimony suggests that the highly overlapping expression pattern shared by individuals with dup15q and the majority of those with idiopathic ASD represents an evolving adaptive or maladaptive response to a primary insult rather than a secondary environmental hit. Although we observe no significant association of the ASD-associated transcriptome signature with either clinical or technical confounders, some of the changes are likely to represent consequences or compensatory responses, rather than causal factors. In this regard, it is notable that the observed transcriptome changes are consistent with an ongoing process that is triggered largely by genetic and prenatal factors 3 , 9 , 10 , 23 , but that evolves during the first decade of brain development. We interpret these data to suggest that aberrant microglia–neuron interactions reflect an early alteration in developmental trajectory that becomes more evident in late childhood. This corresponds to the period of synapse elimination and stabilization after birth in humans 28 , 29 , which may have significant implications for intervention. Our analyses also reveal primate-specific lncRNAs that are probably relevant to understanding human higher cognition 11 , 30 . Co-expression of lncRNAs with genes harbouring ASD-associated protein coding mutations suggests that these noncoding RNAs are involved in similar biological functions and are potential candidate ASD risk loci. As future investigations pursue the full range of causal genetic variation that contributes to ASD risk, these data will be valuable for interpreting genetic and epigenetic studies of ASD and the relationship between ASD and other neuropsychiatric disorders. Methods Brain tissue Human brain tissue for ASD and control individuals was acquired from the Autism Tissue Program (ATP) brain bank at the Harvard Brain and Tissue Bank (which has since been incorporated into the Autism BrainNet) and the University of Maryland Brain and Tissue Bank, a Brain and Tissue Repository of the NIH NeuroBioBank. Sample acquisition protocols were followed for each brain bank, and samples were de-identified before acquisition. Brain sample and donor metadata are available in Supplementary Table 1 and further information about samples can be found in the Supplementary Information . No statistical methods were used to predetermine sample size. The sample dissections, RNA extractions, and RNA sequencing experiments were randomized ( Supplementary Information ). The investigators were blinded to diagnosis until the analysis but unblinded during the analysis. RNA library preparation, sequencing, mapping and quantification A detailed protocol, including parameters given to programs for each step, is provided in the Supplementary Information . Briefly, starting with total RNA, rRNA was depleted (RiboZero Gold, Illumina) and libraries were prepared using the TruSeq v2 kit (Illumina) to construct unstranded libraries with a mean fragment size of 150 bp. Libraries underwent 50-bp paired end sequencing on an Illumina HiSeq 2000 or 2500 machine. Paired end reads were mapped to hg19 using Gencode v18 annotations 31 via Tophat2 (ref. 32 ). Gene expression levels were quantified using union exon models with HTSeq 33 . This approach counts only reads on exons or reads spanning exon–exon junctions, and is globally similar to including reads on the introns (whole gene model) or computing probabilistic estimates of expression levels ( Extended Data Fig. 1e–g ). 
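Downstream analyses express these counts as FPKM (fragments per kilobase of exon model per million mapped fragments); a minimal sketch of that conversion, with illustrative numbers, is shown below.

```python
# Minimal sketch of the FPKM conversion from union-exon fragment counts.
# The gene length and library size are illustrative.
def fpkm(fragment_count, exon_model_length_bp, total_mapped_fragments):
    return fragment_count / (exon_model_length_bp / 1e3) / (total_mapped_fragments / 1e6)

# A gene with 500 fragments and a 2 kb union exon model in a 50 M fragment library
print(fpkm(500, exon_model_length_bp=2000, total_mapped_fragments=50e6))  # -> 5.0
```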
Differential gene expression DGE analysis was performed with expression levels normalized for gene length, library size, and G+C content (referred to as ‘normalized FPKM’). Cortex samples (frontal and temporal) were analysed separately from cerebellum samples. An LME model framework was used to assess differential expression in log 2 [normalized FPKM] values for each gene for cortical regions because multiple brain regions were available from the same individuals. The individual donor identifier was treated as a random effect, and age, sex, brain region and diagnosis were treated as fixed effects. In the cerebellum DGE analysis, a linear model was used and brain region was not included as a covariate, because only one brain region was available for each individual, and a handful of technical replicates could be removed for the DGE analysis. We also used technical covariates accounting for RNA quality and batch effects as fixed effects in this model ( Supplementary Information ). Significant results are reported at Benjamini–Hochberg FDR < 0.05 (ref. 34 ), and full results are available in Supplementary Table 2 . Throughout the study, we assessed replication between datasets by evaluating the concordance between independent sample sets, comparing the squared correlation ( R 2 ) of fold changes of genes in each sample set at a defined statistical cut-off. We set the statistical cut-off in one sample set (the y axis in the scatterplots) and computed the R 2 with fold changes in these genes in the comparator sample set (the x axis in the scatterplots). For details of the regularized regression analyses and cortical patterning analyses, see Supplementary Information . Differential alternative splicing Alternative splicing was quantified using the per cent spliced in (PSI) metric with Multivariate Analysis of Transcript Splicing (MATS, v3.08) 35 . For each event, MATS reports counts supporting the inclusion ( I ) or skipping ( S ) of an event. To reduce spurious events due to low counts, we required at least 80% of samples to have I + S ≥ 10. For these events, the PSI is calculated as PSI = I /( I + S ) ( Extended Data Fig. 4a ). Statistical analysis for differential alternative splicing was performed using the linear mixed effects model as described above for DGE; significant results are reported at Benjamini–Hochberg FDR < 0.05 (ref. 34 ). Full differential alternative splicing results are available in Supplementary Table 3 .
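A minimal sketch of the PSI quantification and coverage filter just described, using illustrative count arrays (events × samples):

```python
# Minimal sketch of PSI = I / (I + S) per splicing event, retaining only
# events where at least 80% of samples have I + S >= 10. Counts are invented.
import numpy as np

inc = np.array([[30, 25, 14, 40],   # inclusion counts I, event 1
                [ 2,  1,  0,  3]])  # event 2 (low coverage)
skp = np.array([[10, 15,  3, 12],   # skipping counts S, event 1
                [ 1,  2,  0,  1]])  # event 2

coverage = inc + skp
keep = (coverage >= 10).mean(axis=1) >= 0.8   # coverage filter per event
with np.errstate(divide="ignore", invalid="ignore"):
    psi = np.where(coverage > 0, inc / coverage, np.nan)

print(keep)        # [ True False]: event 2 is filtered out
print(psi[keep])   # PSI values for the retained event
```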
PCR products were resolved on 3% high-resolution Metaphor agarose gels (Lonza) and counterstained with SYBR Gold for visualization (Extended Data Fig. 5a, Supplementary Fig. 1). Gels were quantified using ImageJ (NIH). Notably, this sample size is underpowered to detect significant changes in many individual genes or splicing events; however, the goal was to validate the accuracy of our data and analyses across genes, so we show the correlation of fold changes between ASD and control across genes or events. Genes and events were selected on the basis of being top hits or of particular biological interest. Sample details and primers are reported in Supplementary Tables 2 and 3. Duplication 15q syndrome samples and analyses For dup15q samples, the type of duplication and copy number in the breakpoint 2–3 region were available from previous work 36. To expand this to the regions between each of the recurrent breakpoints in these samples, eight out of nine dup15q brains were genotyped (one was not genotyped owing to limited tissue availability). The number of copies between each of the breakpoints is reported in Extended Data Fig. 7a. DGE and differential alternative splicing analysis for this set was performed with control samples independent from those used in the main analysis, though the results were similar to those obtained using the larger set of controls used in the main analysis (Extended Data Fig. 7d, e). Co-expression network analysis The R package weighted gene co-expression network analysis (WGCNA) was used to construct co-expression networks using normalized data after adjustment to remove variability from technical covariates 37, 38 (Supplementary Information). We used the biweight midcorrelation to assess correlations between log2[adjusted FPKM] values; parameters for network analysis are described in the Supplementary Information. Notably, we used a modified version of WGCNA that involves bootstrapping the underlying dataset 100 times and constructing 100 networks. The consensus of these networks (median edge strength across all bootstrapped networks) was then used as the final network 39, ensuring that a subset of samples does not drive the network structure. For module–trait analyses, the first principal component of each module (the module eigengene 37) was related to ASD diagnosis, age, sex and brain region with an LME model as above. These associations were also supported by enrichment analyses with ASD DGE genes in Extended Data Fig. 9a. Given that modules are relatively uncorrelated with each other, significant eigengene–trait results are reported at Bonferroni-corrected P < 0.05. Module temporal trajectories were computed with the LOESS function in R. For both ASD and control samples, the function was used to fit local quadratic regression curves to module eigengenes (degree = 2, span = 2/3). The trend-difference statistic was taken as the largest difference between these fitted curves between the ages of 5 and 25 years. P values were computed using 5,000 permutations. Specifically, ASD and control labels were randomly permuted 5,000 times and curves were fit to the permuted groups; therefore, significant P values reject the null hypothesis of no relationship between age trends and disease status. Detailed statistics for module membership are available in Supplementary Table 2 and additional characterization of modules is available in Supplementary Table 4.
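The trajectory permutation test just described is compact enough to sketch directly; here is an R illustration on simulated data (1,000 permutations here for speed; the study used 5,000). The eigengene values and effect sizes are made up.

```r
set.seed(1)
age <- runif(80, 2, 60)                       # simulated donor ages
grp <- rep(c("ASD", "CTL"), each = 40)
eig <- 0.02 * age - 0.012 * age * (grp == "ASD") + rnorm(80, sd = 0.2)

trend_diff <- function(age, eig, grp) {
  xs <- data.frame(age = seq(5, 25, length.out = 50))   # test window, ages 5-25
  f  <- function(g) predict(
    loess(eig ~ age, degree = 2, span = 2/3,            # LOESS settings from the text
          data = data.frame(age = age[grp == g], eig = eig[grp == g])),
    newdata = xs)
  max(abs(f("ASD") - f("CTL")), na.rm = TRUE)           # largest fitted difference
}

obs  <- trend_diff(age, eig, grp)
perm <- replicate(1000, trend_diff(age, eig, sample(grp)))  # permute group labels
mean(perm >= obs)                                           # permutation P value
```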
Enrichment analysis of gene sets and common variation Gene set enrichment analyses were performed with a two-sided Fisher's exact test (cell-type and splicing-factor enrichments) or with logistic regression (Extended Data Fig. 9a, Supplementary Information). Results were corrected for multiple comparisons by the Benjamini–Hochberg method 34 when a large number of comparisons were performed. GO term enrichment analysis was performed using GO Elite 40 with 10,000 permutations, and results are presented as enrichment Z scores. We present only the top molecular function and biological process terms for display purposes. Notably, for splicing analysis, we evaluated GO term enrichment by using the genes containing differential splicing alterations to identify functional enrichment. It is possible that longer genes, which contain more exons, also contain more detected splicing events. This could bias pathway and cell-type enrichment towards neuronal and synaptic genes, which are, on average, longer than other genes in the genome. However, the correlation between the number of detected events in genes and gene length is minimal (R2 = 0.004), and the correlation is even smaller for events at P < 0.01 (R2 = 0.00012), demonstrating that longer genes are not more likely to contain differential splicing events. Common variant enrichment was evaluated by analysis of genome-wide association study (GWAS) signal with stratified linkage disequilibrium (LD) score regression to partition disease heritability within functional categories represented by gene co-expression modules 41. This method uses GWAS summary statistics and LD explicitly modelled from an ancestry-matched 1000 Genomes reference panel to calculate the proportion of genome-wide single nucleotide polymorphism (SNP)-based heritability that can be attributed to SNPs within explicitly defined functional categories. To improve accuracy, these categories were added to a 'full baseline model' that includes 53 functional categories capturing a broad set of genomic annotations, as previously described 42. Enrichment is calculated as the proportion of SNP heritability accounted for by each module divided by the proportion of total SNPs within the module. Significance is assessed using a block jackknife procedure 42, which accounts for module size and gene length, followed by FDR correction of P values. Data availability statement Human brain RNA-seq data have been deposited in Synapse under accession number syn4587609. Data for the SOX5 overexpression experiments are available from the Gene Expression Omnibus (accession number GSE89057). All other data are available from the corresponding author upon reasonable request. Code availability Code underlying the DGE, differential alternative splicing, cortical patterning and co-expression network analyses is available online. Change history 11 July 2018 In this Letter, the labels for splicing events A3SS and A5SS were swapped in column D of Supplementary Table 3a and b. Supplementary Table 3 has been corrected online.
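For concreteness, the gene-set enrichment test described in this section reduces to a 2 × 2 contingency table; a toy R sketch follows, with all counts made up.

```r
N       <- 16000   # hypothetical background gene universe
module  <- 400     # genes in a co-expression module
markers <- 250     # genes in a cell-type marker list
overlap <- 30      # genes in both

tab <- matrix(c(overlap,           module - overlap,
                markers - overlap, N - module - markers + overlap),
              nrow = 2)
fisher.test(tab)                       # two-sided by default
p.adjust(c(0.001, 0.02, 0.30), "BH")   # Benjamini-Hochberg across many tests

# The partitioned-heritability enrichment reported above is simply:
# enrichment = (proportion of SNP heritability in the module) /
#              (proportion of all SNPs in the module)
```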
Autism spectrum disorder is caused by a variety of factors, both genetic and environmental. But a new study led by UCLA scientists provides further evidence that the brains of people with the disorder tend to have the same "signature" of abnormalities at the molecular level. The scientists analyzed 251 brain tissue samples from nearly 100 deceased people—48 who had autism and 49 who didn't. Most of the samples from people with autism showed a distinctive pattern of unusual gene activity. The findings, published Dec. 5 in Nature, confirm and extend the results of earlier, smaller studies, and provide a clearer picture of what goes awry, at the molecular level, in the brains of people with autism. "This pattern of unusual gene activity suggests some possible targets for future autism drugs," said Dr. Daniel Geschwind, the paper's senior author and UCLA's Gordon and Virginia MacDonald Distinguished Professor of Human Genetics. "In principle, we can use the abnormal patterns we've found to screen for drugs that reverse them—and thereby hopefully treat this disorder." According to the Centers for Disease Control and Prevention, about 1.5 percent of children in the U.S. have autism; the disorder is characterized by impaired social interactions and other cognitive and behavioral problems. In rare cases, the disorder has been tied to specific DNA mutations, maternal infections during pregnancy or exposure to certain chemicals in the womb. But in most cases, the causes are unknown. In a much-cited study in Nature in 2011, Geschwind and colleagues found that key regions of the brain in people with different kinds of autism had the same broad pattern of abnormal gene activity. More specifically, the researchers noticed that the brains of people with autism didn't show the "normal" pattern of active and inactive genes seen in the brains of people without the disorder. What's more, the genes in brains with autism weren't randomly active or inactive in these key regions, but rather showed their own consistent patterns from one brain to the next—even when the causes of the autism appeared to be very different. The discovery suggested that different genetic and environmental triggers of autism mostly lead to disease via the same biological pathways in brain cells. In the new study, Geschwind and his team analyzed a larger number of brain tissue samples and found the same broad pattern of abnormal gene activity in areas of the brain that are affected by autism. "Traditionally, few genetic studies of psychiatric diseases have been replicated, so being able to confirm those initial findings in a new set of patients is very important," said Geschwind, who also is a professor of neurology and psychiatry at the David Geffen School of Medicine at UCLA. "It strongly suggests that the pattern we found applies to most people with autism disorders." The team also looked at other aspects of cell biology, including brain cells' production of molecules called long non-coding RNAs, which can suppress or enhance the activity of many genes at once. Again, the researchers found a distinctive abnormal pattern in the autism samples. Further studies may determine which abnormalities are drivers of autism, and which are merely the brain's responses to the disease process. But the findings offer some intriguing leads about how the brains of people with autism develop during the first 10 years of their lives.
One is that, in people with the disorder, genes that control the formation of synapses—the ports through which neurons send signals to each other—are abnormally quiet in key regions of the brain. During the same time frame, genes that promote the activity of microglial cells, the brain's principal immune cells, are abnormally busy. This suggests that the first decade of life could be a critical window for interventions to prevent autism. The study also confirmed a previous finding that in the brains of people with autism, the patterns of gene activity in the frontal and temporal lobes are almost the same. In people who don't have autism, the two regions develop distinctly different patterns during childhood. The new study suggests that SOX5, a gene with a known role in early brain development, contributes to the failure of the two regions to diverge in people with autism.
nature.com/articles/doi:10.1038/nature20612
Chemistry
Bacteria with a metal diet discovered in dirty glassware
Bacterial chemolithoautotrophy via manganese oxidation, Nature (2020). DOI: 10.1038/s41586-020-2468-5 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2468-5
https://phys.org/news/2020-07-bacteria-metal-diet-dirty-glassware.html
Abstract Manganese is one of the most abundant elements on Earth. The oxidation of manganese has long been theorized 1—yet has not been demonstrated 2, 3, 4—to fuel the growth of chemolithoautotrophic microorganisms. Here we refine an enrichment culture that exhibits exponential growth dependent on Mn(II) oxidation to a co-culture of two microbial species. Oxidation required viable bacteria at permissive temperatures, which resulted in the generation of small nodules of manganese oxide with which the cells associated. The majority member of the culture—which we designate 'Candidatus Manganitrophus noduliformans'—is affiliated to the phylum Nitrospirae (also known as Nitrospirota), but is distantly related to known species of Nitrospira and Leptospirillum. We isolated the minority member, a betaproteobacterium that does not oxidize Mn(II) alone, and designate it Ramlibacter lithotrophicus. Stable-isotope probing revealed 13CO2 fixation into cellular biomass that was dependent upon Mn(II) oxidation. Transcriptomic analysis revealed candidate pathways for coupling extracellular manganese oxidation to aerobic energy conservation and autotrophic CO2 fixation. These findings expand the known diversity of inorganic metabolisms that support life, and complete a biogeochemical energy cycle for manganese 5, 6 that may interface with other major global elemental cycles. Main Beijerinck and Winogradsky discovered biological redox reactions involving carbon, nitrogen, sulfur and iron over a century ago, while pioneering methods for cultivating the microbiota responsible for these reactions: this led to the concept of chemolithoautotrophy 7, 8. The known breadth of inorganic electron-accepting and -donating reactions in biology has continued to expand 9, 10, 11, 12, 13. For example, the anaerobic respiratory reduction of Mn(IV) oxides to Mn(II) by diverse microorganisms is now understood to be widespread and of broad biogeochemical importance 5, 6, 14, 15. Over the past century, a multitude of studies and reviews have focused on the details of Mn(II) oxidation catalysed by diverse heterotrophs 2, 3, 4; however, the physiological roles of these activities generally remain unclear. Despite experimental hints that the oxidation might be coupled to energy conservation (Mn2+ + 1/2O2 + H2O → Mn(IV)O2(s) + 2H+; ΔG°′ = −68 kJ per mol Mn; Supplementary Note 1) in some organisms 1, 16, 17, 18, 19, whether Mn(II) oxidation drives the growth of any chemolithotrophs has remained an open question. Cultivation of manganese oxidizers We re-examined the possibility that previously unappreciated microorganisms from the environment might oxidize Mn(II) for energy. We coated a glass jar with a slurry of Mn(II)CO3 and allowed it to dry, before filling it with municipal tap water from Pasadena (California, USA) and leaving it to incubate at room temperature. After several months, the cream-coloured carbonate coating had oxidized to a dark manganese oxide. We serially transferred the material into a defined medium, which led to the establishment of a stable in vitro culture. Unless otherwise noted, this medium was free of alternative organic and inorganic electron donors (except for trace amounts of vitamins); for example, nitrate was used instead of ammonia as a source of nitrogen to preclude the growth of nitrifiers.
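The pH dependence of the free energy quoted above follows directly from the two protons produced per Mn(II) oxidized; as a back-of-envelope check, assuming T = 298 K and the biochemical convention that ΔG°′ refers to pH 7:

```latex
\Delta G^{\circ\prime} \;=\; \Delta G^{\circ} + 2RT\ln(10^{-7})
\;\approx\; \Delta G^{\circ} - 79.9\ \mathrm{kJ\,mol^{-1}}
```

So the reported ΔG°′ = −68 kJ per mol Mn corresponds to ΔG° ≈ +12 kJ per mol at unit H+ activity, consistent with the oxidation becoming more favourable as pH rises.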
To distinguish between abiological and biological oxidation, flasks of sterile, defined Mn(II) medium were either inoculated with a subculture of the enrichment or left uninoculated, and incubated under oxic conditions. Because manganese oxides have previously been suggested to contribute to the chemical auto-oxidation of Mn(II) 20, we inoculated other replicate flasks with a steam-sterilized subculture with oxide products. Even after a year, oxidation had not occurred in uninoculated flasks or flasks containing the steam-sterilized inocula, as predicted by the known chemical stability of MnCO3 under these conditions 21, 22. However, within four weeks, the flasks inoculated with 'viable material' had generated dark, adherent manganese oxides (Fig. 1a). Oxidation required O2 and occurred at temperatures up to 42 °C, with oxidation occurring optimally between 34 °C and 40 °C (Extended Data Fig. 1a), consistent with catalysis being enzymatic. Mn(II) oxidation activity was also sensitive to exposure to antibiotics or to overnight pasteurization at 50 °C (Extended Data Fig. 1b). Phosphate was inhibitory at concentrations above 0.3 mM. When amended with MnCO3, the pH of unbuffered medium ranged between 5.7 and 6.3. Although a pH buffer was not required, Mn(II) oxidation was faster in medium buffered with 5 mM 3-morpholinopropanesulfonic acid (MOPS) at its pKa (7.1 at 37 °C). In buffered cultures, the final pH ranged between 6.5 and 6.8. With or without buffer, increases in culture pH during or after oxidation—which might lead to chemical oxidation of unreacted MnCO3—were not observed. No growth could be detected in the MOPS-buffered basal medium without addition of MnCO3 (Extended Data Fig. 2a). Fig. 1: Bio-oxidation of MnCO3 produces manganese oxide nodules with which two species associate. a, After incubation, comparison of an uninoculated control flask of basal medium containing bright, unreacted MnCO3 (left) with the adherent dark oxide products generated in a flask that had been inoculated with viable material (right). b–e, Microscopy of manganese oxide nodules generated in agarose-solidified MnCO3 medium. b, After incubation of tubes inoculated with viable material, the cloud of bright MnCO3 particles was clarified towards the air-exposed meniscus, concomitant with the generation of larger, discrete dark oxides (enlarged in c). d, Transmitted-light micrograph of an acridine-orange (nucleic acid)-stained manganese oxide nodule from the same agarose tube. e, Epifluorescence micrograph of the same nodule as in d, with surface-visible biomass localized to the inner clefts; material in clefts appeared orange before staining. f, Scanning electron micrograph of a manganese oxide nodule produced by the co-culture. g, Epifluorescence microscopy and fluorescence in situ hybridization using species-specific rRNA-targeted probes reveal cell distributions in dissolved manganese oxide nodules. Species A, magenta; species B, green; all DNA stained with DAPI, blue. No third species was detected by independent methods (Extended Data Fig. 3a). Each panel represents observations made from samples of multiple independent cultivation experiments (a, n > 100; b–e, n = 7; f, n = 4; and g, n = 2). Scale bars, 5 mm (b), 100 μm (c, d), 10 μm (f), 5 μm (g). An iTag community analysis of ribosomal RNA (rRNA) genes of the initial enrichment culture revealed about 70 different species, representing 11 bacterial phyla (Supplementary Table 1).
The microorganisms responsible did not generate oxide-forming colonies on MnCO3 agar plates, but successive rounds of serial dilution to extinction in MnCO3 liquid medium refined the community to a co-culture of two species (which we designated species A and species B) (Supplementary Table 1). Species A belongs to the phylum Nitrospirae and species B is a betaproteobacterium; they occurred at a cell ratio of about 7:1 species A:species B (Extended Data Fig. 3a, Supplementary Table 1). Thus far, our attempts to isolate species A have failed. We isolated species B from disrupted oxides as single colonies on succinate and other heterotrophic media (Supplementary Note 2), but this species does not oxidize MnCO3 alone. Either species A is solely responsible for Mn(II) oxidation (Extended Data Fig. 3b) or the activity is consortial (Extended Data Fig. 3c–e). Several members of the betaproteobacteria have previously proven recalcitrant to elimination from multispecies cultures: some were seemingly unimportant 9, 23, whereas others engaged in metabolite cross-feeding 24, and in one case they were central to a consortium 25. DNA collected from both the co-culture and from a pure culture of species B produced near-complete genome sequences for both species (Extended Data Fig. 3f), which facilitated subsequent experiments and analyses. Manganese oxide nodules Mn(II) oxidation yielded morphologically conspicuous nodules of manganese oxide that were about 20–500 μm in diameter (Fig. 1, Extended Data Fig. 4). These nodules formed in both static and shaken liquid medium, often adhering to the glass and to each other, as well as in medium solidified with agarose (Fig. 1b). The surfaces were dark brown but often reflective, and typically invaginated around deeper depressions that had a rough, dark-orange surface (Fig. 1c, d). Attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR) analysis revealed that the nodules of manganese oxide—after being bleached with hypochlorite to remove cellular and other organic carbon—are poorly ordered and similar to birnessite. Our epifluorescence microscopic examination of nucleic-acid-stained nodules of manganese oxide found that the majority of the exposed biomass localized to the invaginations (Fig. 1d, e), with few cells being observed attached to the substrate or found to be planktonic. In agarose-solidified medium, these nodule-associated cells could be well-separated from the carbonate substrate, dissolving it from a distance (Fig. 1b). The latter is partially explained by the solubility products of the MnCO3 precipitate; under the incubation conditions, these solubility products can be expected to include free Mn2+ ions, manganese bicarbonate and soluble MnCO3 22. The mean concentration of dissolved manganese in uninoculated and active MnCO3 cultures was 0.214 mM (s.d. = 0.107, n = 3) and 0.119 mM (s.d. = 0.081, n = 3), respectively, before falling to 0.010 mM (s.d. = 0.009, n = 3) after oxidation. Soluble Mn(II) chloride does not appear to be used; instead, it appears to inhibit MnCO3 oxidation when amended to active co-cultures at concentrations >2.0 mM (Extended Data Fig. 1c). No evidence for motility was observed: the oxides did not accumulate as a band across the interface between counter-opposing gradients of Mn(II) and O2 in agarose-solidified medium (Fig. 1b), as is commonly observed for micro-aerophilic iron-oxidizing bacteria 26.
It is not yet understood whether the tight association of the cells with the oxidation product is circumstantial or is more intrinsic to the process, owing to some role of adsorptive, conductive, catalytic and/or other properties of manganese oxides. Additional biomass was revealed upon chemical dissolution of the manganese oxide nodules; we examined this biomass using fluorescence in situ hybridization with specific rRNA-targeted probes (Supplementary Note 3). Consistent with the iTag analysis, cells of species A were more abundant than those of species B (Extended Data Fig. 3a). No stereotypic patterns of association with species B were observed (Fig. 1g, Extended Data Fig. 4a–e). Cells of both species were often pleomorphic. Cells of species A were typically crescents, 1.07 μm by 0.40 μm (s.d. = 0.17 μm and 0.08 μm, respectively; n = 50); cells of species B were typically rods, 1.22 μm by 0.56 μm (s.d. = 0.20 μm and 0.09 μm, respectively; n = 50) in co-culture—but at high cell densities in pure cultures, the cells of species B elongate and form flocs. Growth rates and yields If one or both species in the co-culture is truly chemolithotrophic, the culture should exhibit (a) exponential increases in the rate of Mn(II) oxidation, in parallel with (b) Mn(II)-dependent exponential growth. After inoculation, the rates of Mn(II) oxidation in the basal medium increased exponentially, initially doubling every 6.2 days (s.d. = 1.1, n = 4 replicate cultures) (Fig. 2a, Extended Data Fig. 2e–g, Supplementary Note 4) before decelerating to every 10.8 days (s.d. = 0.1). Concurrent with Mn(II) oxidation, species A exhibited sustained exponential growth, roughly matched by the less-numerous and perhaps-commensal species B (doubling every 6.1 and 7.7 days, respectively) (Fig. 2b, Extended Data Fig. 2b, Supplementary Note 5). The co-culture oxidized Mn(II) at a combined rate of 3.4 to 9.0 × 10−15 mol Mn(II) per cell per hour. We observed a linear relationship between Mn(II) oxidized by the co-culture and cell yields (Fig. 2c, Supplementary Note 6): 6.4 × 1011 cells of species A and 1.0 × 1011 cells of species B, for a combined yield of 7.4 × 1011 cells per mol of Mn(II) oxidized. The total amount of DNA extracted from samples also increased exponentially with time and oxidation of Mn(II) (Extended Data Fig. 2c, d), yielding 3.1 × 106 ng DNA per mol of Mn(II) oxidized. On the basis of either of two estimates (both using the known dry weight of a single cell of Escherichia coli 27), we estimate the growth yield in the co-culture to be between about 100 and 200 mg dry biomass per mol of Mn(II) oxidized. These growth yields and normalized substrate-oxidation rates are comparable with those observed for nitrite oxidation by chemolithotrophic microorganisms (Extended Data Fig. 3g), the metabolism of which is predicted to yield a free energy 12 similar to that calculated above for manganese oxidation. Fig. 2: Mn(II) oxidation coupled to co-culture growth of species A and species B. a, Oxidation kinetics. Mn(II) oxidation rates increased exponentially over time in two distinct phases before plateauing. b, Growth kinetics. Exponential growth of species A and species B paralleled Mn(II) oxidation. c, Growth yield. Linear relationship between growth yield and the amount of Mn(II) oxidized. Points in b, c represent data from the three technical replicates for each sampled time point. Extended Data Fig. 2b–n provides analyses of independent cultivation experiments (n = 9).
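As an illustration of how a doubling time can be estimated from exponential growth data of the kind described above, here is a minimal R sketch; the counts are invented, not the study's data.

```r
days  <- c(0, 7, 14, 21, 28)
cells <- c(1.0e5, 2.2e5, 4.9e5, 1.1e6, 2.4e6)   # hypothetical cells per ml

fit <- lm(log2(cells) ~ days)                   # slope = doublings per day
unname(1 / coef(fit)["days"])                   # doubling time in days (~6 here)

# The yield per mol Mn(II) oxidized follows analogously from a linear fit of
# cell counts against cumulative Mn(II) oxidized, as in Fig. 2c.
```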
Phylogenetic analyses Species A affiliates remotely with the genera Nitrospira, Leptospirillum and other members of the phylum Nitrospirae, yet shares less than 84% 16S rRNA identity with any cultivated organism, and less than 87% identity to all but about 50 16S rRNA gene sequences from organisms that are as yet uncultivated (Fig. 3a, Extended Data Fig. 5a, c, Supplementary Table 2). The genome of species A does not encode recognizable genes for the chemolithotrophic oxidation of ammonia, hydroxylamine, nitrite or reduced sulfur compounds. Several closely related sequences have been recovered from drinking water and karst-affected groundwater, marine sites and a subsurface oxic–anoxic transition zone (Fig. 3a, Extended Data Fig. 5a). Together, these sequences cluster with 'Candidatus Troglogloea absoloni' 28, an uncultivated cave organism of unknown physiology. Few genome datasets are available for comparison. Species A shares less than 88% 16S rRNA gene identity with the best reference genomes (from metagenome-assembled genomes from groundwater 29, 30), and is the only genome currently available in the Candidatus class 'Trogloglia' (Fig. 3a, Extended Data Fig. 5b, c). Fig. 3: Phylogenetic analysis and metabolic reconstruction of species A ('Candidatus Manganitrophus noduliformans'). a, Bayesian phylogram based on 1,532 aligned 16S rRNA nucleotide positions. Sequences clustering within the three previously described classes within the bacterial phylum Nitrospirae are collapsed into separate nodes. Species A clusters with not-yet-cultivated members of the Trogloglia, a distinct class within this phylum. Extended Data Fig. 5a–c provides greater detail and identifiers. Scale bar shows evolutionary distance (0.1 substitutions-per-site average). b, Hypothetical model of e− flow from extracellular Mn(II) to the energy and anabolic systems of species A. The oxidation is hypothesized to be mediated by several expressed outer membrane complexes (orange), with subsequent e− transfers to periplasmic carriers. The bulk of e− flow is towards generating proton motive force via terminal oxidases (TO) (green) during O2 respiration. The remaining e− flow would be to motive force-dissipating, reverse electron transport complexes (blue), generating the low-potential e− carriers required for CO2 fixation mediated by the rTCA pathway. Gene-expression values represent the mean of independent cultivation experiments (n = 7). Supplementary Table 4 provides identifiers and transcript levels for each gene, and Supplementary Notes 7 and 8 provide more detailed explanations of the diagrams. TPM, transcripts per million. Species B affiliates with heterotrophs from the betaproteobacterial genus Ramlibacter (Extended Data Fig. 6a, b). It exhibited growth dependent on H2 and O2, encodes genes for hydrogenases, encodes both Sox and DsrMKJOP gene clusters for the oxidation of reduced sulfur, encodes a Calvin–Benson–Bassham cycle and may be capable of anaerobic respiration, such as denitrification and dissimilatory metal reduction. To our knowledge, the potential for facultative lithotrophy has not previously been reported for members of this genus. Transcriptomics of manganese-dependent growth We examined the transcriptomes of the co-culture (in particular, of species A) during different stages of Mn(II) oxidation in replicate cultures (n = 7) (Extended Data Fig. 3h).
Although both species encode genes for swimming and twitching motility as well as for chemotaxis, these genes were not expressed (Supplementary Table 3)—matching the observations in agarose-solidified medium (see 'Manganese oxide nodules'). By contrast, biosynthetic genes for different compatible solutes (for example, hydroxyectoine, trehalose and betaines) were expressed by both species and—consistent with this—co-cultures oxidized manganese when grown at a range of brackish salinities, up to nearly 40% of that of seawater. We identified candidate genes that might underlie Mn(II) chemolithotrophy. Species A transcribed four gene clusters that encode outer membrane complexes that evoke comparisons with lithotrophic iron oxidizers and respiratory metal reducers. By analogy, these gene clusters might have a role in extracellular electron transfer by ferrying Mn(II)-derived electrons to periplasmic carriers (candidates for which were also expressed; Supplementary Table 4), leaving the resultant insoluble oxide outside the cell (Fig. 3b). In iron-oxidizing microorganisms, an outer membrane c-type cytochrome (Cyc2 or Cyt572) is often used as the initial oxidant and carrier for the Fe(II)-derived electron 31, 32. Species A expressed a Cyc2 homologue with a predicted haem-binding site and outer-membrane β-barrel structure (Fig. 3b, Supplementary Table 4). In iron-oxidizing anoxygenic phototrophs 33, 34 and in several neutrophilic iron-oxidizing chemolithotrophs 35, an alternative mechanism involves a porin–cytochrome c protein complex 36. Species A expresses genes for three recognizable porin–cytochrome c protein complexes: a porin–dodecaheme cytochrome c with no homologues in the databases (labelled PCC 1) and two distinct porin–decaheme cytochrome c modules (labelled PCC 2 and PCC 3) (Fig. 3b). During growth in the Mn(II)-oxidizing co-culture, species B expresses an MtrA–MtrB–MtrC-like porin–cytochrome c protein complex, as well as other multihaem cytochrome c proteins with a greater resemblance to the complexes involved in anaerobic reduction of metals by Shewanella sp. 36 (Supplementary Table 5). After the transfer of Mn(II)-derived electrons from outside of the cell into the periplasm, their flow through respiratory complexes in the cytoplasmic membrane is central to understanding this mode of energy conservation. On average, the two Mn(II)-derived electrons are generally considered to be of high potential (Mn(II)/Mn(IV), E°′ = +466 mV, Supplementary Note 1). However, the energetics of each of the two sequential one-electron transfers can be affected by inorganic and organic binding ligands 22, 37, leading to a degree of uncertainty. Of the respiratory complexes, canonical respiratory complex I is unlikely to be used for energy conservation; this leaves canonical or alternative complex III, complex IV or cytochrome bd oxidases as possible candidates for generating a proton motive force during Mn(II) chemolithotrophy. The genome of species A contains multiple gene clusters for terminal oxidases that could link the quinone pool to O2; many of these gene clusters are strongly transcribed (Supplementary Table 4). Although a role in the process cannot be ruled out, the single complex IV (a Cbb3-type cytochrome c oxidase) of species A is not as well-expressed (in the 24th percentile) as four unconventional terminal-oxidase complexes that contain cytochrome bd-like oxidase (Fig. 3b, Extended Data Figs. 7, 8, Supplementary Table 4, Supplementary Note 7).
The most highly expressed of these terminal oxidases is ‘TO 1’ (in the 99th percentile) (Fig. 3b ), a complex that is generally similar to those observed in ammonia- and nitrite-oxidizing members of the Nitrospirae 38 , 39 but without the hypothesized candidate catalytic and iron–sulfur subunits NxrA and NxrB. The second highest expressed terminal oxidase (‘TO 2’) (in the 83rd percentile) also lacks these subunits, and is highly unusual amongst cultivated organisms in containing extra haem c domains, cytochrome c and two MrpD-like ion-pumping subunits (Fig. 3b , Supplementary Table 4 ) that might—hypothetically—facilitate electron transfer while generating a motive force. In both TO 1 and TO 2, a membrane-attached di-haem cytochrome b may connect extracellular electron transfer and periplasmic electron carriers to these membrane complexes (Fig. 3b , Supplementary Note 7 ). Whether either of these will have a high affinity for O 2 (as canonical cytochrome bd -oxidases do) is not known. Future studies are required to examine the function of these unusual terminal oxidases, their roles in energy conservation and how any energetic challenge of the initial oxidation of Mn(II) to Mn(III) is offset or met by the organism. Mn(II)-oxidation-dependent CO 2 fixation With the demonstration that Mn(II) oxidation drives lithotrophic energy metabolism and growth, we examined whether the co-culture might generate biomass via autotrophic CO 2 fixation with Mn(II)-derived electrons. For this, we grew the co-culture with labelled 13 C-MnCO 3 (and 15 N-nitrate to aid in tracking the synthesis of new biomass) and visualized the isotopic compositions of cells microscopically by species-specific fluorescence in situ hybridization coupled to nanometre-scale secondary-ion mass spectrometry (nanoSIMS) (Fig. 4 ). The results are consistent with the co-culture being autotrophic. Both species A and species B were confirmed to incorporate substantial amounts of 13 C and 15 N isotopes (Fig. 4 , Supplementary Note 9 ). Species A showed a higher enrichment for both isotopes as compared to species B (Extended Data Fig. 9a ), which suggests that it is the main—if not sole—driver of Mn(II)-dependent CO 2 fixation and lithoautotrophic growth in the co-culture (especially when taken together with its greater abundance, as shown in Extended Data Fig. 3a ). Although we cannot rule out the possibility that a degree of anabolic mixotrophy might occur via the uptake of trace contaminating organic carbon, it is likely that species A has an inoperable oxidative TCA cycle: it does not encode a recognizable homodimeric 2-oxoglutarate dehydrogenase complex (Fig. 3b , Supplementary Note 10 ), which is a hallmark deficiency that has been observed in many autotrophs that are unable to mineralize organic carbon (including nitrogen- and iron-oxidizing members of the Nitrospirae, which grow autotrophically using the reverse tricarboxylic acid (rTCA) pathway 38 , 40 , 41 ). Fig. 4: Stable isotope probing of autotrophic CO 2 fixation. We compared cells grown in the basal medium with labelled Mn 13 CO 3 and 15 NO 3 − to cells grown with unlabelled MnCO 3 and 15 NO 3 − . a , d , Fluorescence in situ hybridization (FISH) of cells dissolved from manganese oxide nodules using species-specific rRNA-targeted probes. Species A, magenta; species B, green; all DNA stained with DAPI, blue. b , c , e , f , Nanometre-scale secondary-ion mass spectrometry (nanoSIMS) reveals incorporation of 13 C and 15 N into cells. 
Coloured scale bars indicate 13C or 15N atom per cent. Scale bars, 3 μm. Areas shown in b, c and e, f correspond to those in a and d, respectively. Extended Data Fig. 9a shows analyses of independent nanoSIMS images (n = 5) from the same culture incubated with inorganic 13C. During Mn(II)-dependent growth in the co-culture, species A expresses genes for a complete rTCA pathway (Fig. 3b, Supplementary Table 4, Supplementary Note 10). Although the role of the Calvin–Benson–Bassham cycle in manganese oxidation is speculative, species B expresses genes for this cycle (Supplementary Table 5). For species A to fix CO2 via the rTCA pathway, low-potential electron carriers (for example, NAD(P)+ and ferredoxin) 42 need to be reduced with high-potential Mn(II)-derived electrons, possibly via a reverse electron transport chain that—at the very least—involves complex I (Fig. 3b). Three distinct complex I gene clusters can be found in species A, two of which (complex I 1 and complex I 2) (Fig. 3b, Supplementary Table 4) hypothetically might dissipate energy when operated in reverse and generate NAD(P)H (E°′ = −320 mV) 43 from reduced quinones with a more-positive reduction potential. Similarly, an unusual complex I (complex I 3) (Fig. 3b, Supplementary Table 4) might be used to reduce ferredoxin (E°′ = −398 mV) 43. This complex encodes five ion-pumping subunits (NuoN, two NuoM and two MrpD-like subunits, but no NuoL), whereas the canonical complex I only encodes three (NuoL, NuoM and NuoN) 44. Rare and unusual variants of complex I with four ion-pumping subunits (NuoL, two NuoM and NuoN) have recently been identified and postulated to couple the inward flow of five protons to drive the endergonic reduction of ferredoxin from a quinone 45, 46. In species A, complex I 3 hypothetically may use electrons from the quinone pool to drive the reduction of ferredoxin via the inward flow of six protons or ions (Fig. 3b). If the entry point(s) of Mn(II)-derived electrons into the electron transport chain has a reduction potential more positive than that of the quinone pool, then additional respiratory complexes (Fig. 3b) would have to be involved to accomplish productive reverse electron flow (Supplementary Note 7). Discussion Whether chemolithoautotrophic manganese-oxidizing microorganisms exist had been an open question for over a century 1, 2, 3, 4. This study establishes their existence, and provides insights into the details and dynamics of the process at cellular, physiological, genomic, transcriptomic and isotopic levels. Manganese chemolithoautotrophy extends the known physiologies in the phylum Nitrospirae that leverage meagre differences in redox potential between inorganic electron donors and acceptors 9, 12, 47, 48, 49. On the basis of physiology, phylogeny, genomics and other characteristics of species A, we propose the epithet 'Candidatus Manganitrophus noduliformans' for this species (taxonomic proposal in Supplementary Information). The potential effect of Mn(II) oxidation coupled with the seemingly slow exponential growth of the co-culture (Extended Data Fig. 3g) has current and past environmental implications. Starting with a single cell each of species A and B, unrestricted chemolithotrophic growth at the observed cell doubling times and oxidation rates would be sufficient to generate manganese oxides that equal global manganese reserves within two years (Supplementary Note 11).
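A back-of-envelope R sketch of that last point (the detailed version is in the study's Supplementary Note 11): the doubling time and per-cell rate below are the values reported in the text, while the reserve figure is an assumption of order 10^9 tonnes of Mn, and the integration is deliberately coarse.

```r
td      <- 6.2                  # cell doubling time, days (reported)
rate    <- 9.0e-15              # mol Mn(II) per cell per hour (upper reported rate)
reserve <- 1e9 * 1e6 / 54.94    # assumed ~1 Gt of Mn, converted to moles

n <- 2; t <- 0; oxidized <- 0   # start from one cell of each species
while (oxidized < reserve) {
  oxidized <- oxidized + n * rate * 24 * td   # mol oxidized over one doubling
  n <- n * 2                                  # (start-of-interval population;
  t <- t + td                                 #  order-of-magnitude estimate only)
}
t / 365                         # ~1.4 years, i.e. on the order of the two years cited
```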
On the basis of phylogenetic inferences, close relatives of species A reside in many subsurface and karst environments (Fig. 3a, Extended Data Fig. 5a), including at oxic–anoxic transition zones such as the Hanford sediments 50. At such interfaces, manganese could be cycled between aerobic Mn(II)-oxidizing chemolithoautotrophs and anaerobic, manganese-oxide-respiring (reducing) chemotrophs 5, 6, 14, 15, thereby stimulating substantial electron flows through the element over even brief geological time scales. This has implications for the interconnected biogeochemical cycles of carbon, nitrogen, sulfur, iron, hydrogen and oxygen, with which this complete manganese energy cycle could interact. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized and investigators were not blinded to allocation during experiments and outcome assessment. Media and culture enrichment, refinement, and maintenance A fortuitous enrichment culture for Mn(II) lithotrophs was established over the summer and autumn of 2015. A dense slurry of freshly precipitated MnCO3 (see below) was distributed onto the internal surface of a wide-mouth glass jar and this coating was allowed to dry. The jar was filled completely with unsterilized Pasadena (California, USA) municipal drinking water (typically a blend from surface and aquifer sources) and allowed to stand for approximately 10 weeks, open and without agitation, in an unoccupied room maintained at about 21 °C (co-ordinates: 34.1367°, −118.1273°). Additional freshly precipitated MnCO3 was added as a dense suspension after the cream-coloured coating had blackened, and the jar was covered with a loose-fitting lid and allowed to incubate for several more months, after which a small amount (about 5% v/v) of the dark product was used to inoculate flasks of MnCO3 suspended in municipal tap water. After this, a separate line of cultures was initiated in a defined, deionized-water medium incubated in the laboratory at 37 °C. The basal medium used for routine growth and maintenance of this line and in related experiments was adapted and modified from prior formulations 51, 52. The medium contained, per litre deionized H2O: NaCl, 1 g; MgCl2·6H2O, 400 mg; CaCl2·2H2O, 1 g; KCl, 5 g; Na2SO4, 142 mg; FeCl3·6H2O, 2 mg; H3BO3, 30 μg; MnCl2·4H2O, 100 μg; CoCl2·6H2O, 190 μg; NiCl2·6H2O, 24 μg; CuCl2·2H2O, 2 μg; ZnCl2, 68 μg; Na2SeO3, 4 μg; Na2MoO4, 31 μg; riboflavin, 100 μg; biotin, 30 μg; thiamine HCl, 100 μg; l-ascorbic acid, 100 μg; d-Ca-pantothenate, 100 μg; folic acid, 100 μg; nicotinic acid, 100 μg; niacinamide, 100 μg; 4-aminobenzoic acid, 100 μg; pyridoxine HCl, 100 μg; lipoic acid, 100 μg; NAD, 100 μg; thiamine pyrophosphate, 100 μg; cyanocobalamin, 10 μg. As the P source, a solution of potassium phosphate (pH 7.2) was added to a final concentration of 0.15 mM. As the nitrogen source, NaNO3 was added to a final concentration of 1 mM; alternatively, 1 mM NH4Cl or other nitrogen sources were used as noted. MOPS at its pKa was added as a buffer to a final concentration of 5 mM, when noted. MnCO3 or an alternative heat-stable growth substrate was added to the basal medium before steam sterilization; heat-unstable growth substrates were instead added from filter-sterilized stock solutions after the medium had adequately cooled.
Freshly precipitated MnCO 3 was used for the initial enrichment culture, for routine Mn(II)-dependent cultures in small volumes, for the serial dilution-to-extinction resolution of complex cultures, and for stock and starter cultures. To prepare this, 25 g MnCl 2 (Sigma-Aldrich no. 221279) was dissolved into 100 ml deionized water, yielding 125 ml of a 1.59 M MnCl 2 solution. Over the course of several minutes, this solution was slowly poured into 3 l of 0.33 M NaHCO 3 while vigorously stirring. After the cessation of stirring, the resultant precipitate was allowed to settle by gravity for about 1 h. Thereafter, the overlying reaction fluid was decanted and discarded. The precipitate was resuspended in 3 l of deionized water, and the stirring, settling, decanting and resuspension steps were repeated at least 10 times. After the final wash, the precipitate was resuspended in deionized water to a final volume of 100 ml, stored in a clean glass bottle, and refrigerated at 4 °C in the dark. Initially, the precipitate appeared white to light pink, but aged to a light tan within around 24 h. Thereafter, the material remained stable for months to years. Alternatively, a hydrated MnCO 3 substrate (Sigma-Aldrich no. 63539) was used for larger culture volumes, and/or for reproducible mass balances, as noted. Cells were typically cultured in 10 ml of medium in 18-mm-diameter culture tubes, or in 100 ml of medium in 250-ml Erlenmeyer flasks at 37 °C, with 25 to 200 mM MnCO 3 , as noted. To prevent dehydration of cultures over the long periods of incubation, cut strips of Parafilm (Heathrow Scientific) were used to seal the bottom edge of the 18-mm plastic test tube enclosures. Cultures were incubated stationary without agitation, or with shaking at 200 rpm, as noted. To refine the number of species in the complex manganese-oxidizing enrichment, 5 successive rounds of serial tenfold dilution-to-extinction series were performed using 9 ml of MnCO 3 nitrate basal medium in 18-mm culture tubes, incubated at 32 °C. Culture tubes in each dilution series were scored as positive for the presence of lithotrophic manganese oxidizers when, after 2–12 weeks of incubation, the small and easily dispersed particles of the MnCO 3 substrate (light cream in colour) were converted to larger Mn oxide nodules or a single continuous Mn oxide coating (dark brownish-black in colour) that typically tightly adhered to the bottom of the glass culture tubes. Mn oxides from the final dilution tube showing such oxidation were used as the inocula for the next serial-dilution-to-extinction series. To examine whether Mn(II) oxidation in the cultures was biological in nature, an active co-culture with Mn oxides was used to inoculate 18-mm culture tubes containing the basal medium with 50 mM MnCO 3 . Cultures were amended with either of two antibiotics, kanamycin (30 μg/ml) or vancomycin (20 μg/ml), or pasteurized overnight at 50 °C before incubation at 32 °C. To examine the effect of incubation temperature on oxidation, cultures were incubated without agitation in different incubators set at a diversity of temperatures; incubation temperatures were regularly and independently confirmed with >2 thermometers. After 2 weeks, 2 ml mixtures from the cultures were sampled and stored at −80 °C for later inductively coupled plasma mass spectrometry (ICP-MS) analysis (see ‘Chemical analysis of Mn’). 
Reported values were corrected for Mn oxides carried over in the inoculum, as ascertained by the lowest amount determined in the 50 °C pasteurization experiments. To examine the growth of the culture in the absence of Mn(II) substrate, a stationary co-culture (confirmed separately to be viable) was used to inoculate four replicate 250-ml Erlenmeyer flasks containing 120 ml of the basal medium without MnCO3. The flasks were incubated at 37 °C with shaking at 200 rpm. After inoculation and after 10 and 21 days, 20-ml mixtures from the cultures were sampled and centrifuged at 5,250g for 10 min; the pellets were stored at −80 °C. DNA was extracted from the thawed pellets using the DNeasy PowerSoil kit (Qiagen) following the manufacturer's instructions, with the bead-beating option using a FastPrep FP120 (Thermo Electron) at setting 5.5 for 45 s instead of the 10-min vortex step. DNA concentration was quantified using the Qubit dsDNA High Sensitivity Assay Kit (Thermo Fisher Scientific). To examine whether MnCl2 can be oxidized by the co-culture, or otherwise affects growth: an active co-culture, which had oxidized 38% of the 35 mM MnCO3 initially provided (measured using the ICP-MS method), was used to inoculate (10% v/v) 18-mm culture tubes containing 10 ml of the basal medium with 0.5–20 mM MnCl2 instead of MnCO3. After inoculation, and after days 5 and 10, 0.5 ml of the oxides and culture fluid mixture was sampled and stored at −80 °C for later ICP-MS analysis (see 'Chemical analysis of Mn'). Reported values were corrected for Mn oxides carried over in the inoculum, as ascertained by the lowest amount determined in the 0.5 mM MnCl2 culture. For attempts to observe single-colony formation by Mn(II) lithotrophs on agar medium, the basal medium was adjusted to contain 200 mM MnCO3 and 1.5% washed agar (BD Difco), and distributed into Petri dishes after steam sterilization. Genomics predicts that each species in the co-culture may be able to produce compatible solutes (for example, trehalose and hydroxyectoine, by species A), and thus may be able to grow under a range of salinities. To examine the effect of increased salinity on the Mn(II)-oxidizing lithotrophs, the basal medium was amended with NaCl to achieve final salt concentrations of 2 ppt, 2.8 ppt, 3.8 ppt, 4.6 ppt, 5.5 ppt, 9 ppt, 16 ppt, 23 ppt, 30 ppt and 37 ppt (equivalent to 6%, 8%, 11%, 13%, 16%, 26%, 45%, 65%, 85% and 105% of the salinity of seawater, respectively). After inoculation, oxidation in the tubes was monitored visually over time. For isolation and maintenance of strains of species B (Ramlibacter lithotrophicus) from the co-culture (Supplementary Note 2), plates of agar-solidified (1.5% agar w/v, BD Life Sciences) basal medium were used, except that sodium succinate (10 mM, final concentration) or tryptone (0.5% w/v, BD Life Sciences) was used in place of MnCO3. Viable cells of this bacterium could rarely be retrieved from the co-culture as planktonic cells overlying Mn oxide nodules. For clonal isolation, 200 μl of a dense suspension of the dark Mn oxides produced by the co-culture was spotted onto the surface of succinate agar medium and allowed to dry, after which the Mn oxides were vigorously and heavily streaked over the agar surface and monitored for the development of colonies thereafter.
After 3–5 days of incubation, colonies of species B appeared small, leathery and adherent to the agar surface; transfer of cells to new plates or liquid medium was facilitated by the use of a sterile syringe needle for the removal of an entire single colony from the agar surface. In liquid, newly isolated strains of species B can be grown with tryptone (0.5% w/v) or acetate (10 mM, final concentration) (Extended Data Fig. 6c, d), and can form fabric-like biofilms blanketing the bottom of culture tubes. Transfer of such material proved challenging, as the fabric-like biomass typically adhered to the insides of both plastic and glass pipette surfaces, leading to rapid selection for strain variants that do not form flocs. For the examination of anaerobic growth of species B, 10 ml of basal medium was dispensed into 18-mm glass Balch tubes (Bellco) and stoppered with 1-cm butyl rubber stoppers under an N2 headspace. Autotrophic growth of the isolate using H2 + O2 + CO2 was examined similarly, except that an air headspace, periodically spiked with 1 ml of H2 + CO2 (80%/20%, v/v), was used. Colonies of species B did not develop on standard lysogeny broth (LB) agar, or on plates of the basal medium amended with 5 g/l yeast extract (BD Biosciences), traits that were used to monitor for culture purity or contamination. For growth of manganese-oxidizing lithotrophs in agarose tubes, the basal medium was amended to contain 150 mM freshly prepared MnCO3 and 0.38% w/v agarose (Aquapor LE Ultrapure; National Diagnostics). After steam sterilization, 15-ml aliquots of the molten agar medium were dispensed into sterile 18-mm glass culture tubes fitted with plastic caps. After cooling to 45 °C in a water bath, the molten agar was inoculated with 0.5 ml of a dense suspension of Mn oxides from an actively growing lithotrophic culture, gently vortexed and allowed to harden on ice before incubation at 37 °C. Tubes were sealed with Parafilm to avoid desiccation over the long periods of incubation, and monitored both for clearing of MnCO3 and for the formation (or changes in the size, shape and distribution) of Mn oxide nodules. Examination of Mn oxide nodules In all the cases we examined, the generation of the dark, granular product was coincident with the generation of Mn oxides from Mn(II)CO3, as determined via colourimetry 53, ATR-FTIR, reactivity with H2O2 and/or ICP-MS (see 'Chemical analysis of Mn'). To visualize, with minimal disturbance, cells on Mn oxide nodules generated in agarose tubes, cores of regions where the carbonate cleared and nodules developed were sampled with sterile glass Pasteur pipettes. Agar cores containing undisrupted nodules were extruded carefully into a plastic weigh boat and soaked in the basal medium (without MnCO3) amended to contain 10 μg/μl acridine orange 54 for 30 min. The dye solution was decanted, and the core was soaked for 30 min in buffered medium without dye. The core was carefully transferred to glass microscope slides and gently covered with coverslips before examination via light microscopy. Light microscopy of Mn oxide nodules in agarose cultures was performed using both a Zeiss Stemmi 2000-C stereomicroscope and a Zeiss Axioplan 2 Imaging Microscope fitted with an HBO 100 mercury arc lamp housing and an ebq 100 lamp power supply (Lighting & Electronics Jena).
For epifluorescence microscopy of acridine-orange-stained nodules, a blue (FITC) long-pass filter set was used (excitation, filter D470/40 Lot 33820 (Chroma Technology); emission, long-pass filter OG515 (515 nm cut-on; Schott)). Photodocumentation was with Panasonic Lumix GH3 and G85 micro-four-thirds-lens-mount cameras, mounted onto the research microscope via a combination of Zeiss to c-mount (1.0 X D10ZNC; Diagnostic Instruments) and 'c-mount to micro-four-thirds-mount' (Fotodiox) adaptors. RAW images were imported into Lightroom 9 (Adobe). Global adjustments to RAW images were made in Lightroom, for example, to improve clarity through the opaque agarose. The global changes to RAW images in this line of experiments involved correction for white balance and adjustments via the texture, clarity, dehazing, sharpening and 'colour-noise reduction' sliders. For mineralogical analyses, Mn oxide nodules were collected from cultures and allowed to settle by gravity, washed three times with distilled water, incubated in consumer-grade concentrated bleach for 2 h at room temperature (to remove cellular and other organic carbon), washed three times with distilled water and then dried at 80 °C. Spectroscopic analyses were performed on a Nicolet Magna 860 FTIR spectrophotometer (Thermo Nicolet) with an attenuated total reflectance (ATR) accessory (Durascope, SensIR Technologies), and on a Nicolet iS50 FTIR spectrophotometer (Thermo Nicolet) with a GladiATR ATR accessory (Pike Technologies). For scanning electron microscopy (SEM) of the Mn oxide nodules, samples were chemically fixed on ice for 3–4 h in the basal medium amended with 2.5% glutaraldehyde and HEPES (25 mM, pH 7.5). Fixed samples were washed twice in the same buffered solution but without glutaraldehyde, and the final pellet was resuspended in the buffered solution with 50% (v/v) ethanol. Dehydration series were performed by increasing the ethanol concentration every 15 min to 70, 90 and finally 100% (v/v). Critical point drying (Quorum Technologies) was performed as follows: ethanol was replaced by liquid CO2 at a pressure of 55 bar at <10 °C. Fifteen minutes after reaching the critical point, the pressure was released slowly at >35 °C until ambient pressure was reached. The samples were then deposited onto double-coated carbon conductive tape (Ted Pella) and, in some cases, were sputter-coated with 10 nm of Pt/Pd (80/20 by weight, Cressington). SEM was performed using a field emission SEM (1550VP, Zeiss) with an SE2 detector at an operating voltage of 10 kV. Community analysis An iTag 16S rRNA gene sequencing approach was used to obtain microbial community profiles. For this, a 2-ml culture sample containing Mn oxide nodules was centrifuged at 6,000g for 5 min. DNA was extracted from the pellets and quantified as described in 'Media and culture enrichment, refinement, and maintenance'. The V4–V5 region of the 16S rRNA genes was amplified using primers 515F-Y (5′-GTG YCA GCM GCC GCG GTA A-3′) and 926R (5′-GGA CTA CNV GGG TWT CTA AT-3′) 55 with Illumina adaptor overhangs 56 added to the oligonucleotides. PCR amplification was performed using 7.5 μl of Q5 High Fidelity 2X Master Mix (New England Biolabs), 0.75 μl of each of the forward and reverse primers (10 μM), 5 μl of PCR-grade water and 1 μl of DNA at a concentration of about 1 ng/μl. PCR cycling conditions were as follows: initial denaturation at 98 °C for 2 min; 25 cycles of 98 °C for 10 s, 54 °C for 20 s and 72 °C for 20 s; and a final extension at 72 °C for 2 min before cooling down to 6 °C.
Duplicate PCR reactions were run for each sample; after confirming successful and comparable amplification using gel electrophoresis, the duplicates were combined for Illumina Nextera XT barcoding 56 using the PCR conditions described above, except in a 25-μl reaction volume and with 10 cycles and annealing at 66 °C. Barcoded samples were quantified using the QuantIT PicoGreen dsDNA Assay (Thermo Fisher Scientific) on the C1000 Thermal Cycler with CFX96 Real-Time System (Bio-Rad), combined in equal molar amounts, and purified with the QIAquick PCR Purification Kit (Qiagen). Sequencing was performed on the MiSeq platform (Illumina) with paired 250-bp reads after PhiX addition of 15–20% (Laragen). Sequence data were processed using QIIME 57 v.1.8.0. Raw sequences were first joined and quality-trimmed using the default read-pair joining and quality-trimming parameters. Processed sequences were clustered into de novo operational taxonomic units (OTUs) at 99% sequence identity using UCLUST 58 v.7.11.0.667, with the most abundant sequence picked as the representative sequence for each OTU. OTUs with only a single sequence ('singletons') were removed. Taxonomic identification of the representative sequences was performed using the SILVA 59 Ref NR 99 database release 119. Cloning of near full-length 16S rRNA genes of species A and species B Near full-length 16S rRNA genes were amplified and cloned from the co-culture. First, DNA was extracted as in 'Media and culture enrichment, refinement, and maintenance'. Second, the 16S rRNA gene was amplified using primers BACT27F (5′-AGA GTT TGA TYM TGG CTC-3′) and U1492R (5′-GGY TAC CTT GTT ACG ACT T-3′) modified from published versions 60. PCR was performed using the Expand High Fidelity System (Roche Molecular Systems) with the following conditions: 2.5 μl of 10× buffer, 0.35 μl of Taq polymerase, 0.55 μl of 10 mM dNTP, 0.50 μl of each forward and reverse primer, 18.6 μl of PCR-grade water and 2 μl of a DNA extract. The cycling conditions were as follows: 95 °C for 2 min, followed by 30 cycles of 94 °C for 15 s, annealing at 54 °C for 30 s and extension at 72 °C for 45 s, and a final extension step at 72 °C for 4 min before cooling down to 4 °C. Third, the PCR products were immediately purified using the QIAquick PCR purification kit (Qiagen) and ligated into the pGem-T Easy Vector (Promega), and the resulting plasmid was transformed into JM109 competent Escherichia coli cells (Promega) following the manufacturer's instructions. Over 50 white colonies were observed on X-gal-containing plates from 50 μl of transformed cells. Clones were grown overnight at 37 °C in LB medium with 10% glycerol and 0.1 mg/ml ampicillin following the manufacturer's instructions (Promega). Lastly, PCR was performed using the NEB Taq Polymerase kit (New England Biolabs) with the following conditions: 3.0 μl of 10× buffer, 0.66 μl of dNTP, 0.30 μl of Taq polymerase, 0.60 μl of each of the M13 forward and reverse primers (10 μM), 0.30 μl of 10 μg/μl bovine serum albumin, 23.95 μl of PCR water and 0.60 μl of cells. The cycling conditions were the same as for the cloning reaction described above. The PCR products were purified using Multiscreen HTS PCR 96-well plates (Millipore). Sanger sequencing was performed on the purified PCR products to confirm their sequence identities using both M13 forward and reverse primers (Laragen).
FISH Three oligonucleotide probes were developed or tested in this study to visualize and differentiate cells representing species A and species B in Mn oxide nodules (Supplementary Note 3 , Extended Data Fig. 10a–d ). The SILVA 59 Ref NR 99 database release 128 and the PT server function of ARB software 61 v.6.0.2 were used for probe development. The species A oligonucleotide probe (NLT499: 5′-ACA GAG TTA GCC GTG GCT-3′) was designed to target the 16S rRNA genes of members of the classes Leptospirillia and Thermodesulfovibrionia, as well as other uncultivated classes within the phylum Nitrospirota 62 . Two oligonucleotide probes were used to target members of the order Betaproteobacteriales, including species B. A probe (BETA359 or Beta1: 5′-CCC ATT GTC CAA AAT TCC CC-3′) 63 , 64 had been reported previously, but its optimal hybridization conditions had not. A second probe was designed for this study (BETA867: 5′-AGG CGG TCA ACT TCA CGC-3′). A previously developed probe that targets nearly all bacteria (EUB338: 5′-GCT GCC TCC CGT AGG AGT-3′) 65 was also used for FISH. All probes were double-labelled 66 : with Cy3 dye for NLT499, FITC dye for BETA359 and BETA867, and Cy5 dye for EUB338 (Integrated DNA Technologies). The clone-FISH method 67 was used to evaluate the specificity and hybridization conditions of these three oligonucleotide probes. Transformed JM109 E. coli (see ‘Cloning of near full-length 16S rRNA genes of species A and species B’) containing either the 16S rRNA gene of species A or that of species B were grown overnight at 37 °C with shaking at 200 rpm in ampicillin-containing LB medium. Plasmids were isolated using the QIAprep Spin Miniprep Kit (Qiagen) following the manufacturer’s instructions. Isolated plasmids were then transformed into NovaBlue(DE3) E. coli competent cells (EMD Millipore) following the manufacturer’s instructions. The 16S rRNA gene sequences of transformed NovaBlue(DE3) cells were confirmed by Sanger sequencing (Laragen) as described in ‘Cloning of near full-length 16S rRNA genes of species A and species B’. Cells for clone-FISH were prepared as previously described 67 . In brief, overnight cultures at an optical density at 600 nm (OD600) of about 0.4 were induced with IPTG (1 mM) for 2 h, after which chloramphenicol (170 mg l−1) was added to the cultures and incubated for 5 h. The cells were then collected by centrifuging at 8,000 g for 10 min, fixed with 4% paraformaldehyde in 1× PBS, washed twice with PBS and stored in PBS:ethanol (1:1) at −20 °C. FISH reactions were performed as previously described 68 prior to epifluorescence microscopy using a BX51 epifluorescence microscope (Olympus) with a 100× (UPlanFL N) oil immersion objective. Each point in the dissociation profile represents the mean fluorescence intensity of at least 100 single cells in 5 microscopic fields, evaluated using the software daime 69 v.2.1 with the default automatic segmentation settings of the threshold algorithm Isodata and manual size-filtering of the resulting regions of interest for single-cell analysis. To prepare Mn oxide nodules for FISH visualization, Superfrost Micro Slides (VWR Scientific) were dipped in 0.2% UltraPure agarose (Thermo Fisher Scientific) at 45 °C. Mn oxide nodules were fixed with paraformaldehyde as above, except that they were stored in PBS at 4 °C for no longer than 2 weeks. Fixed Mn oxide nodules in PBS were immediately pipetted onto the warm slide and cooled to room temperature. 
After this, the slide was dipped in freshly prepared DCBE buffer (0.05 M sodium dithionite, 0.1 M sodium citrate, 0.1 M sodium bicarbonate and 0.1 M EDTA at pH 7; a modification of DCB buffer 70 ) for 5 min at room temperature to dissolve away the Mn, leaving nodule-associated cells constrained within the agarose. The cells were then permeabilized with protease K (15 μg/ml) for 10 min at room temperature, and lysozyme (10 mg/ml) for 30 min at 37 °C. The slide was washed once in 0.01 M HCl for 15 min and three times in water for 1 min each before drying at 37 °C. The permeabilization and HCl washing are not necessary for FISH reactions to work, and can be omitted. FISH reactions, using the NLT499-Cy3, BETA867-FITC and EUB338-Cy5 probes, were performed in 35% formamide hybridization buffer 68 with the corresponding salt concentration in the washing buffer 68 . DAPI–citifluor (5 μg/ml) was added to the FISH samples before epifluorescence microscopy using a BX51 epifluorescence microscope (Olympus) with a 100× (UPlanFL N) oil immersion objective. Stable isotope probing and nanoSIMS analyses Mn-oxidizing co-cultures were grown in basal medium containing Mn13CO3 and 15NO3− to visualize the assimilation of 13C and 15N into the cells of species A and species B. For these experiments, a 0.5-ml sample from a stationary culture grown in basal medium with 26 mM unlabelled MnCO3 was transferred into 18-mm glass Balch tubes containing 9.6 ml of autoclaved basal medium with 13 mM Mn13CO3 (prepared using 13C-NaHCO3, 99 atom% 13C, Cambridge Isotope Laboratories). For a nitrogen source, the medium contained 1 mM Na15NO3 (98 atom% 15N, Sigma-Aldrich Isotec, no. 364606). The culture tubes were stoppered under about 20 ml of air headspace with 1-cm butyl rubber stoppers, and incubated at 37 °C without shaking until, by visual examination, no MnCO3 appeared to remain. The resulting Mn oxide nodules were fixed with paraformaldehyde, dissolved with DCBE buffer (see ‘FISH’) by shaking at 37 °C for 1 h, and filtered onto 0.20-μm polycarbonate filters (Isopore GTBP02500, Millipore) that had been sputter coated with 10 nm of Au. FISH reactions were performed on the filters (as described in ‘FISH’ but without permeabilization and HCl washing) in 25% formamide hybridization buffer. As evaluated by clone-FISH, no probe cross-reaction was observed in the 25% formamide hybridization buffer, which was used to increase signal intensity. A culture grown in the presence of 13 mM unlabelled MnCO3 (prepared using NaHCO3, Thermo Fisher Scientific, no. S233-500) and 1 mM Na15NO3 was used for comparison and as a control. DAPI–citifluor (5 μg/ml) was added to the FISH samples before epifluorescence microscopy using an Elyra S1 microscope (Zeiss) fitted with a Plan-Apochromat 63×/1.4 Oil immersion DIC objective. In parallel, to assess and to minimize the dilution of the 13C and 15N signal in labelled biomass during post-incubation sample preparations (for example, by unlabelled reagents such as paraformaldehyde and the FISH reagents), Mn oxide nodules were dissolved with DCBE buffer as described in ‘FISH’ but without paraformaldehyde fixation, and the resultant cells were subsequently stained for epifluorescence microscopy with DAPI–citifluor only (that is, no FISH reagents). 
However, the dissolution of nodules for these analyses required the use of DCBE buffer, which contains unlabelled citric acid, bicarbonate and EDTA; residuum from these, if associated with biomass from the nodules, would artefactually lower estimates of the extent of labelling, especially for 13C. Finally, filters were washed in an ethanol series of increasing concentration (50%, 80% and 100%), each for 3 min, before mounting on the nanoSIMS holder using double-coated carbon conductive tape (Ted Pella). NanoSIMS analysis was performed using a CAMECA NanoSIMS 50L at the Caltech Microanalysis Center. A focused primary Cs+ beam of 100–1,000 pA was used for pre-sputtering the sample until 12C14N− counts stabilized, and 2–4 pA was used for image collection with rasters of 512 × 512 pixels. Secondary ions 12C12C−, 13C12C−, 12C14N−, 12C15N− and 32S− were measured simultaneously for at least 3 image frames. In one instance, a region was measured a second time with 55Mn16O− added to the analysis to confirm the absence of Mn in the sample. Individual ion image frames in the resulting data were aligned using the 12C14N− ion, and the epifluorescence microscopy images were then transformed to match the nanoSIMS ion images using Look@NanoSIMS 71 v.2019-05-14; regions of interest (ROIs) corresponding to cells of species A or species B were defined manually. Final ion counts per ROI were calculated by summation of ion counts over all image frames. Atom per cent of 13C or 15N was calculated from the individual ion counts as described in Supplementary Note 9 . Kinetics of Mn(II) oxidation and cell growth Glassware used in the kinetic experiments was acid-washed with 3.7% HCl and combusted at 550 °C to eliminate residual metals and organic carbon. All cultures were incubated at 37 °C with shaking at 200 rpm. The kinetic experiments were performed in 1-l flasks with 0.4 l of the basal medium using commercial MnCO3 (Sigma-Aldrich) with a final content of 100 mM Mn(II), as determined using the ICP-MS method (see ‘Chemical analysis of Mn’). The basal media contained either 1 mM NaNO3 ( n = 4) or 1 mM NH4Cl ( n = 5) as the nitrogen source. Each culture flask was inoculated with 5 ml of a Mn oxide slurry sampled from a flask of an active co-culture grown with 1 mM nitrate and 1 mM urea as the nitrogen source. Culture material was sampled from each flask daily for the first 36 days, followed by 3 final, well-separated time points over the final 30 days. At each time point, culture flasks were removed from the shaking incubator and swirled. Immediately thereafter, a total of 3 ml of the oxide and culture fluid mixture was aseptically sampled from the flask via a 5-ml disposable pipette. Of this, 1 ml of the sample was saved at −80 °C for later ICP-MS analysis (see ‘Chemical analysis of Mn’); 2 ml of the sample was centrifuged at 8,000 g for 5 min and the pellet was stored at −80 °C for later quantitative PCR analyses. DNA was extracted from the pellets and quantified as described in ‘Media and culture enrichment, refinement, and maintenance’. Chemical analysis of Mn Reduced Mn(II) and oxidized manganese (a combination of Mn(III) and Mn(IV)) pools were measured using a previously described method 6 and evaluated (Supplementary Note 4 , Extended Data Fig. 10e ). In brief, 0.1 ml of the oxide and culture fluid mixture was mixed with 0.9 ml of 0.5 N HCl. After reacting for at least 10 min, the mixture was centrifuged at 16,100 g for 3 min. 
The supernatant (acid-soluble fraction, representing Mn(II)) was pipetted out into a separate tube. The pellet (acid-insoluble fraction, representing Mn(III) and Mn(IV)) was then reacted with 1 ml of 0.25 N NH2OH·HCl in 0.25 N HCl. The acid-soluble and acid-insoluble fractions were then centrifuged again at 16,100 g for 3 min to avoid any carryover. From each fraction, 0.1 ml was sampled and then diluted into 10 ml of 2% HNO3. The Mn contents were measured using an Agilent 8800 inductively coupled plasma mass spectrometer (Agilent Technologies) with the helium gas collision mode and quantified using a Mn standard solution (Sigma-Aldrich, Supelco 1.19789). Attempts were also made to measure total Mn content using the formaldoxime method 72 , and oxidized Mn(III) and Mn(IV) content using the leucoberbelin blue dye 53 , 73 . However, both methods resulted in underestimates of the manganese content when compared to the ICP-MS method and standards. In large part, this was because the relatively large Mn oxide nodules were challenging both to dissolve and to react to completion with those reagents. For determining dissolved Mn concentrations in particle-free fluids associated with MnCO3 and spent cultures, fluids were sampled from uninoculated and inoculated cultures containing 20–100 mM MnCO3. From each sample, 0.1 ml was subsampled after centrifuging at 16,000 g for 3 min, subsequently filtered through a 0.22-μm filter, and each fraction was diluted into 10 ml of 2% HNO3. The ICP-MS measurements for Mn(II) and the combined Mn(III) and Mn(IV) fractions were performed as described above. Quantitative PCR To obtain standards and test quantitative PCR specificity, plasmids with the 16S rRNA gene of either species A or species B were purified from transformed JM109 E. coli as described in ‘FISH’. Purified plasmids were then linearized using 150 units of the restriction enzyme SacI in 50-μl reactions containing 1× NEB Buffer 1 (New England Biolabs) overnight at 37 °C. The restriction digest reactions were heat-inactivated at 65 °C for 20 min and purified using Multiscreen HTS PCR 96-well plates (Millipore). Concentrations of the linearized and purified plasmids were quantified using the Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific) to make DNA template standards containing known copies of the 16S rRNA genes of species A or species B. To track the growth of all bacteria in Mn-oxidizing cultures, quantitative PCR assays were performed based on an assay developed and optimized previously 74 (Supplementary Note 5 , Extended Data Fig. 10g–j ). All primers and probes were obtained from Integrated DNA Technologies and diluted in 10 mM Tris·HCl (pH 8.0). The forward and reverse primers used, BACT1369F1 (5′-CGG TGA ATA CGT TCC CGG-3′) and PROK1492R1 (5′-GGC TAC CTT GTT ACG ACT T-3′), were single versions of the previously published degenerate primers BACT1369F and PROK1492R. The TaqMan probe for prokaryotes, TM1389F (TM1389F-FAM-ZEN, 5′-CTT GTA CAC ACC GCC CGT C-3′), was modified at the 5′ end with a 6-FAM fluorophore, at the 3′ end with an Iowa Black FQ dark quencher, and internally with a ZEN quencher. TaqMan probes that specifically target species A and species B were also developed. The species A-specific TaqMan probe TM1484R-A (5′-ATC ACC AAT CAT ACC TTG GGT GCC TG-3′) was modified at the 5′ end with a HEX fluorophore, at the 3′ end with an Iowa Black FQ dark quencher, and internally with a ZEN quencher. 
The species B-specific TaqMan probe TM1484R-B (5′-GTC ACG AAC CCT GCC GTG GTA ATC-3′) was modified at the 5′ end with a Texas Red-X fluorophore and at the 3′ end with an Iowa Black RQ dark quencher. Optimized quantitative PCR reaction mixtures contained 10 μl of PrimeTime Gene Expression Master Mix (Integrated DNA Technologies), 1 μl each of the forward and reverse primers (10 μM), 0.5 μl of each of the three TaqMan probes, 5.5 μl of PCR-grade water, and 1 μl of template DNA. The reactions were run in triplicate at 95 °C for 3 min, followed by 40 cycles of 95 °C for 10 s and 62 °C for 30 s, on the C1000 Thermal Cycler with CFX96 Real-Time System (Bio-Rad). The amplification efficiency was calculated as amplification efficiency = 10^(−1/slope) − 1. The upper and lower limits of the quantification ranges were determined based on dilutions of the standards with amplification efficiencies of 90–105% in the log-linear phase. To convert to cell numbers, the 16S rRNA gene copies quantified in the quantitative PCR assays were divided by 1 or 2 for species A or species B, respectively, based on the 16S rRNA gene copy number per genome. Genomics To obtain genomic DNA from species B, a flask with 200 ml of the basal medium containing 5 g/l each of tryptone and yeast extract was inoculated with a single colony of isolated species B from a succinate nitrate medium plate. The culture was grown at 37 °C with shaking at 200 rpm until early stationary phase, then collected by centrifuging at 5,250 g for 30 min at room temperature. DNA from the cell pellets was extracted following the bacterial genomic DNA isolation using CTAB protocol version 3 as previously described 75 . Ethanol-precipitated DNA was additionally purified using the PureLink PCR Purification Kit (Thermo Fisher Scientific) following the manufacturer’s instructions. To obtain genomic DNA from the co-culture, Mn oxide nodules were collected from an early stationary phase culture: 2 ml of culture containing about 0.15 g of Mn oxide nodules was centrifuged at 5,000 g for 10 min at room temperature. DNA was extracted from the pellet using the DNeasy PowerSoil kit (Qiagen) as described in ‘Media and culture enrichment, refinement, and maintenance’. The DNA extract was purified and concentrated using protocol A of the CleanAll DNA/RNA Clean-up and Concentration Micro Kit (Norgen Biotek) following the manufacturer’s instructions. Purified genomic DNA samples (2–50 ng) were fragmented to an average size of 600 bp using a Qsonica Q800R sonicator (power: 20%; pulse: 15 s on/15 s off; sonication time: 3 min). Libraries were constructed using the NEBNext Ultra II DNA Library Prep Kit (New England Biolabs) following the manufacturer’s instructions. In brief, fragmented DNA was end-repaired using a combination of T4 DNA polymerase, E. coli DNA Pol I large fragment (Klenow polymerase) and T4 polynucleotide kinase. The blunt, phosphorylated ends were treated with Klenow fragment (3′ to 5′ exo minus) and dATP to yield a protruding 3′ ‘A’ base for ligation of NEBNext Multiplex Oligos for Illumina (New England Biolabs), which have a single 3′ overhanging T base and a hairpin structure. After ligation, adapters were converted to the Y shape by treating with USER enzyme, and DNA fragments were size-selected using Agencourt AMPure XP beads (Beckman Coulter) to generate fragment sizes between 500 and 700 bp. Adaptor-ligated DNA was PCR-amplified with 8 to 12 cycles, depending on the input amount, followed by AMPure XP bead clean up. 
Libraries were quantified with the Qubit dsDNA HS Kit (Thermo Fisher Scientific) and the size distribution was confirmed with the High Sensitivity DNA Kit for Bioanalyzer (Agilent Technologies). Sequencing was performed on the HiSeq2500 platform (Illumina) with paired 250-bp reads following the manufacturer’s instructions. Base calls were performed with RTA v.1.18.64 followed by conversion to FASTQ with bcl2fastq v.1.8.4 (Illumina). In addition, reads that did not pass the Illumina chastity filter, as identified by the Y flag in their FASTQ headers, were discarded. The resulting reads were uploaded to the KBase platform 76 , trimmed using Trimmomatic 77 v.0.36 with default settings and the adaptor clipping profile Truseq3-PE, and assembled using SPAdes 78 v.3.11.1 with default settings for a standard dataset. Manual binning and scaffolding were performed using mmgenome v.0.7.1 79 , using differential coverage from isolated species B versus the species A + species B co-culture, to generate genome bins for species A and species B. Trimmed reads were aligned to either the species A or species B genome bin using bowtie2 80 v.2.3.4.1 with default settings. Finally, the resulting reads were reassembled using SPAdes 78 v.3.11.1 with the --careful setting and manually binned with mmgenome 79 v.0.7.1 once again, excluding contigs <500 bp. Reconstructed genomes were annotated using the IMG 81 Microbial Genome Annotation and the National Center for Biotechnology Information (NCBI) 82 Prokaryotic Genome Annotation Pipelines; β-barrel protein prediction was performed using PRED-TMBB 83 . Phylogenetic analyses For species A phylogenies, 275 publicly available genome assemblies in the NCBI assembly database 82 (as of 26 March 2019) were analysed. These fell within the phylum Nitrospirae (taxonomy identifier 40117) 84 , which corresponded to the phylum under the headings ‘Nitrospirota’ and ‘Nitrospirota_A’ in the Genome Taxonomy Database (GTDB) 62 v.0.2.2. Genome assemblies with estimated completeness of <60% and contamination of >5% (based on CheckM 85 v.1.0.6) were excluded. For the 16S rRNA gene phylogeny, 16S rRNA genes from the genome of species A or species B, as well as from the genome assemblies, were retrieved using the CheckM 85 v.1.0.6 ssu_finder utility. Sequences of less than 900 bp and sequences containing more than 2 Ns were excluded. The 16S rRNA gene sequences were aligned using SINA 86 v.1.2.11 and imported into the SILVA 59 Ref NR 99 database release 128. Sixty 16S rRNA gene sequences, including 5 different outgroup sequences ( Desulfovibrio vulgaris , Ramlibacter tataouinensis TTB310, Nitrospina gracilis 3/211, Acidobacterium capsulatum and ‘ Candidatus Methylomirabilis oxyfera’), with 1,532 nucleotide positions were exported using the bacteria filter in the SILVA database. Bayesian phylogenetic trees were constructed using MrBayes 87 v.3.2.6 with the evolutionary model set to GTR + I + gamma, burn-in set to 25% and stop value set to 0.01, and edited in iTOL 88 . For the concatenated multilocus protein phylogeny, marker proteins from 40 genomes including the same 5 outgroup species were identified and aligned using a set of 120 ubiquitous single-copy bacterial proteins in GTDB 62 v.0.2.2. The protein alignment was filtered using default parameters in GTDB 62 v.0.2.2 (the full alignment of 34,744 columns from 120 protein markers was evenly subsampled with a maximum of 42 columns retained per protein; a column was retained only when it was present in at least 50% of the sequences and contained at least 25% and at most 95% of one amino acid). 
The resulting alignment, with 5,036 amino acid positions, was used to construct the multilocus protein phylogeny using MrBayes 87 v.3.2.6 as described above, except that the evolutionary model was set to invgamma with a mixed amino acid model. For species B phylogenies, 60 publicly available genome assemblies were selected from the NCBI assembly database 82 under the class Betaproteobacteria (taxonomy identifier 28216) 84 , which corresponds to the order Betaproteobacteriales in GTDB 62 v.0.2.2. These 60 genome assemblies had >93% completeness and <3% contamination based on CheckM 85 v.1.0.6, and represented different genera. The same 5 outgroups were used as for species A, except that Ramlibacter tataouinensis TTB310 was replaced with Nitrospira inopinata . The Bayesian 16S rRNA gene phylogeny was performed as described for species A with 1,532 aligned nucleic acid positions. The maximum-likelihood multilocus protein phylogeny was constructed using 5,035 amino acid positions using RAxML 89 v.8.1.7 with the protein model GAMMALGF and rapid bootstrapping of 100 replicates. For functional gene phylogenies, protein sequences were aligned using ClustalO 90 v.1.2.4 and maximum likelihood trees were constructed using RAxML 89 v.8.1.7 with the protein model GAMMALGF and rapid bootstrapping of 100 replicates. Transcriptomics Glassware treatment and culturing conditions for the transcriptomics experiments were as described in ‘Kinetics of Mn(II) oxidation and cell growth’. Three replicates from different, actively Mn-oxidizing cultures were used as inocula for the RNA experiments. A 5% v/v inoculum from two active Mn-oxidizing cultures was transferred into triplicate flasks containing the basal medium amended with 100 mM freshly prepared MnCO3, for a total of 6 flasks (sample identifiers: Mn03/Mn06/Mn08 and Mn09/Mn10/Mn11). A 1% v/v inoculum from another active Mn-oxidizing culture was transferred into a 7th flask of the same medium (sample identifier Mn12). For collecting biomass before the complete oxidation of Mn(II) (Extended Data Fig. 3h ), shaking of the incubator was paused for 3 min to allow Mn oxide nodules to settle by gravity. The overlaying medium was then decanted off until about 10 ml of culture and oxides remained. First 35 ml and then 15 ml of LifeGuard Soil Preservation Solution (Qiagen) were added, sequentially, to maximize the transfer efficiency of Mn oxide nodules from the culture into a new 100-ml centrifuge tube. The culture–LifeGuard mixture was stored at 4 °C for less than a week, a duration for which RNA was expected to remain well-preserved, according to the manufacturer. Initially, a series of different trials was performed (using multiple methods and variations, including the RNeasy Mini Kit (Qiagen), the RNeasy Powersoil Kit (Qiagen), and a previously described customized procedure 91 ) to directly extract RNA from Mn oxide nodules. None of these approaches extracted measurable RNA from the samples. The extraction of RNA from Mn oxide nodules was successful only once an early chemical dissolution step was added, in combination with the following procedures. Before RNA extraction, the culture–LifeGuard mixture was first centrifuged at 5,250 g for 10 min at 4 °C. Four hundred ml of freshly prepared 0.22-μm-filtered DCBE solution (for the recipe, see ‘FISH’) was added to the pellet and incubated at 37 °C with shaking at 200 rpm for 10 min to dissolve the Mn oxide nodules. The dissolved samples were then centrifuged at 5,250 g for 10 min at 4 °C. 
The supernatant was decanted, and the remaining 5 ml of material was mixed with 10 ml of 0.22-μm-filtered and autoclaved CBE solution (similar to the DCBE solution, but without sodium dithionite). The mixture was then centrifuged at 5,250 g for 10 min at 4 °C. The supernatant was discarded, and 0.9 ml of RLT solution with 2-mercaptoethanol from the RNeasy Mini Kit (Qiagen) was added to the pellet. The mixture was transferred into glass 0.1-mm PowerBead Tubes (Qiagen) and bead-beaten using a FastPrep FP120 (Thermo Electron) at setting 5.5 for 45 s. RNA extraction then proceeded using the RNeasy Mini Kit (Qiagen) according to the manufacturer’s instructions. DNA in the extracted RNA was removed using the DNase Max Kit (Qiagen). The RNA extracts were purified and concentrated using protocol C of the CleanAll DNA/RNA Clean-up and Concentration Micro Kit (Norgen Biotek) and stored at −80 °C until sequencing library preparation. For RNA sequencing library preparation, RNA integrity was first assessed using the RNA 6000 Pico Kit for Bioanalyzer (Agilent Technologies). RNA sequencing (RNA-seq) libraries were constructed using the NEBNext Ultra II RNA Library Prep Kit for Illumina (New England Biolabs) following the manufacturer’s instructions. In brief, 1–10 ng of total RNA was fragmented to an average size of 200 bp by incubating at 94 °C for 15 min in first-strand buffer, cDNA was synthesized using random primers and ProtoScript II Reverse Transcriptase, followed by second-strand synthesis using the NEB Second Strand Synthesis Enzyme Mix. The resulting DNA fragments were end-repaired, dA-tailed and ligated to NEBNext hairpin adaptors (New England Biolabs). After ligation, adaptors were converted to the Y shape by treating with USER enzyme, and DNA fragments were size-selected using Agencourt AMPure XP beads (Beckman Coulter) to generate fragment sizes between 250 and 350 bp. Adaptor-ligated DNA was PCR-amplified, followed by AMPure XP bead clean up. Libraries were quantified with the Qubit dsDNA HS Kit (Thermo Fisher Scientific), and the fragment size distribution was determined to have a mean of 350 bp and a standard deviation of 70 bp using the High Sensitivity DNA Kit for Bioanalyzer (Agilent Technologies). Sequencing was performed on the HiSeq2500 platform (Illumina) with single-end 50- and 100-bp reads following the manufacturer’s instructions. Base calls were performed with RTA 1.18.64 followed by conversion to FASTQ with bcl2fastq 1.8.4 (Illumina). Low-quality reads and TruSeq3-SE adaptors were removed using Trimmomatic 77 v.0.36 with default settings. rRNA was removed using SortMeRNA 92 v.2.0 with the supplied SILVA databases and default settings. Read mapping of the non-rRNA reads was performed using kallisto 93 v.0.44.0 with 100 bootstraps, a fragment mean of 130 bp and a standard deviation of 70 bp. The fragment mean was determined to be 230 bp using the Agilent Bioanalyzer, after adaptor removal (120 bp), but this input parameter caused an issue in evaluating the expression of genes smaller than 230 bp in kallisto 93 v.0.44.0. The input fragment mean was therefore decreased, which did not affect overall transcript expression (Extended Data Fig. 10f ). Final analysis and normalization of the different RNA samples was performed using sleuth 94 v.0.30.0, with the reconstructed genomes of both species (Supplementary Tables 3 , 5 ) or only of species A (Supplementary Table 4 ). Extended Data Fig. 3h includes RNA processing statistics of the processed files, where read mapping was done using BBMap v.37.93 with minid = 0.97 ambiguous = toss settings. 
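The fragment-mean issue noted above is easy to see from the arithmetic that kallisto-style quantifiers use: the effective length of a transcript is roughly its length minus the mean fragment length plus one, which becomes non-positive for genes shorter than the fragment mean. The following Python sketch illustrates that arithmetic; it is a simplification of kallisto's actual truncated-distribution computation, shown only to motivate the parameter choice.

def effective_length(tx_len, frag_mean):
    # Simplified effective length used in kallisto-style abundance
    # estimates; non-positive values make a gene unquantifiable.
    return tx_len - frag_mean + 1

for gene_len in (150, 200, 230, 500, 1500):
    print(gene_len,
          effective_length(gene_len, 230),   # measured fragment mean
          effective_length(gene_len, 130))   # decreased input value

# With a fragment mean of 230 bp, genes of 230 bp or shorter get an
# effective length <= 1; lowering the input to 130 bp restores them
# while barely changing longer transcripts (cf. Extended Data Fig. 10f).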
Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All sequencing data have been deposited at the NCBI under BioProject PRJNA562312 . The cloned 16S rRNA gene sequences of ‘ Candidatus Manganitrophus noduliformans’ (species A) and R. lithotrophicus (species B) from the co-culture have been deposited at GenBank under accession numbers MN381734 and MN381735 , respectively. The iTag sequences from the different enrichments have been deposited at the Sequence Read Archive (SRA) under accession numbers SRR10031198 , SRR10031199 and SRR10031200 . Genome sequences of the co-culture, from which the genome of ‘ Candidatus Manganitrophus noduliformans’ was reconstructed, have been deposited under BioSample SAMN12638105 , with raw sequences deposited at the SRA under accession number SRR10032644 ; the reconstructed genome of ‘ Candidatus Manganitrophus noduliformans’ has been deposited at DDBJ/ENA/GenBank under accession number VTOW00000000 . Genome sequences of R. lithotrophicus strain RBP-1 have been deposited under BioSample SAMN12638106 , with raw sequences deposited at the SRA under accession number SRR10031379 ; the reconstructed genome of R. lithotrophicus strain RBP-1 has been deposited at DDBJ/ENA/GenBank under accession number VTOX00000000 . Additionally, the reconstructed genomes have been deposited in the Joint Genome Institute (JGI) Genomes Online Database under Study ID Gs0134339 , with Integrated Microbial Genome ID 2784132095 for ‘ Candidatus Manganitrophus noduliformans’ and ID 2778260901 for R. lithotrophicus strain RBP-1. Transcriptome sequence data for the seven biological replicates have been deposited at the SRA under accession numbers SRR10060009 , SRR10060010 , SRR10060011 , SRR10060012 , SRR10060013 , SRR10060017 and SRR10060018 . Unique biological materials are available from the corresponding author upon reasonable request. Source data are provided with this paper.
Caltech microbiologists have discovered bacteria that feed on manganese and use the metal as their source of calories. Such microbes were predicted to exist over a century ago, but none had been found or described until now. "These are the first bacteria found to use manganese as their source of fuel," says Jared Leadbetter, professor of environmental microbiology at Caltech who, in collaboration with postdoctoral scholar Hang Yu, describes the findings in the July 16 issue of the journal Nature. "A wonderful aspect of microbes in nature is that they can metabolize seemingly unlikely materials, like metals, yielding energy useful to the cell." The study also reveals that the bacteria can use manganese to convert carbon dioxide into biomass, a process called chemosynthesis. Previously, researchers knew of bacteria and fungi that could oxidize manganese, or strip it of electrons, but they had only speculated that yet-to-be-identified microbes might be able to harness the process to drive growth. Leadbetter found the bacteria serendipitously after performing unrelated experiments using a light, chalk-like form of manganese. He had left a glass jar soiled with the substance to soak in tap water in his Caltech office sink before departing for several months to work off campus. When he returned, the jar was coated with a dark material. "I thought, 'What is that?'" he explains. "I started to wonder if long-sought-after microbes might be responsible, so we systematically performed tests to figure that out." The black coating was in fact oxidized manganese generated by newfound bacteria that had likely come from the tap water itself. "There is evidence that relatives of these creatures reside in groundwater, and a portion of Pasadena's drinking water is pumped from local aquifers," he says. Manganese is one of the most abundant elements on the surface of the earth. Manganese oxides take the form of a dark, clumpy substance and are common in nature; they have been found in subsurface deposits and can also form in water-distribution systems. "There is a whole set of environmental engineering literature on drinking-water-distribution systems getting clogged by manganese oxides," says Leadbetter. "But how and for what reason such material is generated there has remained an enigma. Clearly, many scientists have considered that bacteria using manganese for energy might be responsible, but evidence supporting this idea was not available until now." The finding helps researchers better understand the geochemistry of groundwater. It is known that bacteria can degrade pollutants in groundwater, a process called bioremediation. When doing this, several key organisms will "reduce" manganese oxide, which means they donate electrons to it, in a manner similar to how humans use oxygen in the air. Scientists have wondered where the manganese oxide comes from in the first place. "The bacteria we have discovered can produce it, thus they enjoy a lifestyle that also serves to supply the other microbes with what they need to perform reactions that we consider to be beneficial and desirable," says Leadbetter. The research findings also have possible relevance to understanding manganese nodules that dot much of the seafloor. These round metallic balls, which can be as large as grapefruit, were known to marine researchers as early as the cruises of the HMS Challenger in the 1870s. Since then, such nodules have been found to line the bottom of many of Earth's oceans. 
In recent years, mining companies have been making plans to harvest and exploit these nodules, because rare metals are often found concentrated within them. But little is understood about how the nodules form in the first place. Yu and Leadbetter now wonder if microbes similar to what they have found in freshwater might play a role and they plan to further investigate the mystery. "This underscores the need to better understand marine manganese nodules before they are decimated by mining," says Yu. "This discovery from Jared and Hang fills a major intellectual gap in our understanding of Earth's elemental cycles, and adds to the diverse ways in which manganese, an abstruse but common transition metal, has shaped the evolution of life on our planet," says Woodward Fischer, professor of geobiology at Caltech, who was not involved with the study.
10.1038/s41586-020-2468-5
Other
Study shows female students perform better on longer tests
Pau Balart et al. Females show more sustained performance during test-taking than males, Nature Communications (2019). DOI: 10.1038/s41467-019-11691-y Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-11691-y
https://phys.org/news/2019-09-female-students-longer.html
Abstract Females tend to perform worse than males on math and science tests, but they perform better on verbal reading tests. Here, by analysing performance during a cognitive test, we provide evidence that females are better able to sustain their performance during a test across all of these topics, including math and science (study 1). This finding suggests that longer cognitive tests decrease the gender gap in math and science. By analysing a dataset with multiple tests that vary in test length, we find empirical support for this idea (study 2). Introduction The successful completion of a cognitive task often requires time. At the workplace, one might even work for eight consecutive hours on cognitively demanding tasks. These observations suggest that the ability to sustain performance during such tasks is a relevant aspect of individual success in current knowledge societies. Though there are some exceptions 1 , 2 , this ability has been largely understudied. In the present article, we study gender differences in the ability to sustain performance on a specific task: completing a cognitive test. We examine the hypothesis that females show more sustained performance during test-taking than males and investigate its potential implications for the gender gaps in test scores. It has been documented that, on average, female students tend to outperform male students on verbal and reading tests, while male students often perform better than female students on math and science tests 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 . Gender gaps in math test scores have received special attention because course enrolment gaps in math and related STEM-field courses are important for understanding the male−female differences in socioeconomic outcomes 12 , 13 . Despite the fact that gender gaps in math test scores have been found to narrow or even vanish over recent decades 4 , 9 , 14 , they remain present in large-scale assessments such as the Programme for International Student Assessment (PISA) 15 . Similar to many other cognitive tasks, the completion of a test requires time. Consequently, gender differences in the ability to sustain performance during a test can affect these documented gender gaps in test scores. Previous research provides several reasons to expect that female students show more sustained performance during test-taking than male students. The first reason is the documented gender difference in noncognitive skills. Noncognitive skills comprise a broad category of metrics that encompass socioemotional ability, personality, motivations and behaviour, and they have recently gained attention as important aspects of human capital 16 . Females have been found to have more self-discipline 17 , have fewer behavioural problems 18 , be less overconfident 19 , and show more developed attitudes towards learning 10 . In terms of the Big Five taxonomy, females consistently report higher levels of agreeableness and neuroticism 20 , 21 , 22 , 23 . Females have also been found to report higher levels of conscientiousness 22 , but the size of this gender gap seems to differ among studies 20 , 21 , 23 and to depend on the facet of conscientiousness 24 . A second reason for the hypothesis is that male and female students may follow different strategies while completing a test. We define test-taking strategies as any reason that leads a student to answer the questions in an order different from the order in which they are administered. For instance, a strategy might be to review all the answers before handing in the test. 
Although gender differences in test-taking strategies have not been directly studied before, females have been found to have an advantage in the neurocognitive ability of planning 25 . Planning includes actions and thoughts for successful task completion, such as self-monitoring, self-correction, and verification of completion 26 . Gender differences in test-taking strategies also seem to be consistent with the existing evidence on male−female differences in time management 27 , 28 . Finally, a third reason for the hypothesis is that females may be better able to maintain their effort during the test. Testing effort and motivation have been found to be an important determinant of test scores 29 , 30 , 31 , 32 , and female students have been found to exert higher effort on non-incentivized tests 31 , 33 , 34 , 35 . All of these reasons are unrelated to the specific cognitive domain evaluated in the test. Therefore, we can expect that females show more sustained performance during test-taking than males in the cognitive domains where they perform both relatively better (reading) and relatively worse (math-science). Gender differences in the ability to sustain performance can have important implications for the interpretation and framing of the previously documented gender differences in test scores. The presence of larger gender gaps in short tests has often been attributed to females being less able to cope with time pressure 36 . For instance, when the relative performance of female students increased after a 15-min time extension on math and computer science exams at Oxford University, dons argued the following in an article in The Telegraph : “female candidates might be more likely to be adversely affected by time pressure” 37 . A female advantage in sustaining performance during test-taking would provide an alternative explanation for this observation: female students might make better use of the extra time on the test because of their ability to sustain performance. In this article, we conduct two studies to analyse the gender difference in sustaining performance during a cognitive test and its potential implications for the gender gaps in test scores. In study 1, we use data from the PISA test and show that 15-year-old females are better able to sustain their performance during the test than 15-year-old males. Extending the approach proposed by previous work 1 , we compare the performance of male and female students at the beginning of and during the test, and we do so separately for math and science questions (domains favourable to males) and for reading questions (a domain favourable to females). Our main finding is that females are better able to sustain their performance during the test regardless of their relative advantage or disadvantage in the domain being assessed. This finding holds for a vast majority of countries, is present across years, is robust to numerous checks and is sizable in terms of the highly studied gender differences in math and science test scores. For example, in more than 50% of the countries where female students had an initial disadvantage in math and science, female students decreased this disadvantage by at least half after 2 h of test-taking. We also exploit the PISA data to distinguish between the three potential reasons for the gender difference discussed above, namely, noncognitive skills, test-taking strategies, and test effort. However, as we explain in detail in the Results section, we do not find solid evidence in favour of any of these reasons. 
Females’ ability to better sustain their performance may contribute to closing the gender gap in math scores where longer tests are concerned. Therefore, our second study (study 2) uses a database constructed by Lindberg et al. 9 and investigates the relationship between the gender gap in math test scores and the length of the test. Lindberg et al. conducted a meta-analysis on the math gender gap for which they amassed information on male and female performance on more than 400 math tests worldwide. For the purpose of study 2, we extend their dataset with measures of test length. We find that longer tests are associated with female students decreasing the gender gap in math and show that gender differences in coping with time pressure are an unlikely explanation for this finding. This article shows that females are better able to sustain their performance during an international standardized test, both in the domain in which they score relatively better (reading) and in the domain in which they score relatively worse (math and science). Our findings emphasize a female strength in test-taking that has been largely ignored and that deserves visibility and recognition. Results Study 1: baseline results In study 1, we use the PISA data to analyse the gender differences in the ability to sustain performance during a test. The PISA is an international triennial test administered by the Organisation for Economic Co-operation and Development (OECD), and it aims to assess the skills and knowledge of 15-year-old students in the domains of math, science, and reading. We explain the specifics of the PISA in the Methods section. Every 3 years, the PISA test focuses on one of the three domains. The PISA 2009 focused on reading, which, as we will explain later, gave the test a balanced distribution between domains favourable to females (reading) and domains favourable to males (math and science). Accordingly, we use this wave to document our baseline results. Figure 1 illustrates the main idea of study 1 by focusing on Ireland. This figure shows the proportion of correct answers against the position of the question on the PISA test, separately for males and females. For both sexes, questions had a lower probability of being answered correctly as the position that they occupied moved towards the end of the test. This pattern has been termed the performance decline 1 . As we explain in detail in the Methods section and in Supplementary Tables 3 and 4 , the random ordering of questions among students ensures that this pattern is not driven by differences in question difficulty. Fig. 1 Performance throughout the test for males and females in Ireland. The figure is based on the PISA 2009 and uses Locally Weighted Scatterplot Smoothing (LOWESS) to visualize the relationship between the probability of answering a question correctly and the position of the question in the test. Source data are provided as a Source Data file (study 1). The key take-away of Fig. 1 , in the context of our research, is that the performance decline is weaker for female students. That is, the figure shows that in Ireland, females were better able to sustain their performance throughout the test than males. The proportions of male and female students that correctly answered the first question were equal, while a higher proportion of female students answered the questions correctly as the test went on. 
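Curves of the kind shown in Fig. 1 are straightforward to reproduce. Below is a minimal Python sketch using the LOWESS smoother from statsmodels; the input file and column names (a 0/1 correctness indicator, the question's position, and a 0/1 female indicator) are hypothetical stand-ins for the item-level PISA data, not the authors' actual files.

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical item-level responses for one country.
df = pd.read_csv("pisa2009_items_ireland.csv")  # correct, position, female

for sex, grp in df.groupby("female"):
    # LOWESS of the correctness indicator against question position.
    smoothed = lowess(grp["correct"], grp["position"], frac=0.5)
    plt.plot(smoothed[:, 0], smoothed[:, 1],
             label="female" if sex == 1 else "male")

plt.xlabel("Position of question in test")
plt.ylabel("Proportion of correct answers")
plt.legend()
plt.show()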
For the results presented below, we use ordinary least squares (OLS) to estimate the gender gap at the beginning of the test (the performance difference between female and male students at the origin in Fig. 1 ) and the gender gap in students’ ability to sustain performance (i.e., the difference between female and male students in the linear estimates of the slopes in Fig. 1 ). Details on the methodology that we used can be found in the Methods section. Figure 2 shows the first step of study 1. It reports the estimated gender differences in students’ ability to sustain performance in each country and their corresponding 95% confidence intervals. Positive values indicate countries in which female students were better able to sustain their performance during the test than male students. Figure 2 shows that this was the case for all participating countries, except for Kazakhstan, Miranda (a state in Venezuela), and Macao (China). However, in none of these three exceptions was the gender difference statistically significant. In contrast, the less steep decline in performance experienced by female students was statistically significant at the 5% level in 56 out of the 74 participating countries. Fig. 2 Gender differences in sustaining performance. The figure plots the estimate of the gender difference in sustaining performance during the test for each country participating in the PISA 2009. Positive values indicate countries in which females are better able to sustain their performance during the test than males. Error bars represent the 95% confidence intervals. Source data are provided as a Source Data file (study 1). To illustrate the interpretation of the results, the point estimate of 0.05 for Ireland implies that, given that male and female students perform similarly on the first question of the test, the probability of answering the last question correctly is 5 percentage points higher for Irish female students. The precise estimates of the gender differences per country and their corresponding p values can be found in Supplementary Table 1 (two-sided t test). Supplementary Database 1 reports the precise estimates (with the corresponding standard errors and t statistics) for each figure and table of study 1, including Fig. 2 . The second step, and the main aim of study 1, was to analyse gender differences in performance at the start of and during the test, both in the domain favourable to females (reading) and in the domains favourable to males (math and science). The estimates for the reading domain are displayed in panel (a) of Fig. 3 , while those for the math and science domains are displayed in panel (b). We have plotted point estimates and the corresponding 95% confidence intervals for each country. The grey lines (with squares that represent point estimates) represent the confidence interval for the female−male gap at the beginning of the test in each country. The black lines (with dots that represent point estimates) represent the confidence intervals for the female−male gaps in terms of the ability to sustain performance during the test, and countries are ordered according to the size of this metric. Positive values indicate that females showed an advantage in the particular metric being considered. Fig. 3 Gender differences in starting performance and in sustaining performance by topic. 
The figures plot the point estimates of the gender gap in starting performance and in sustaining performance during the test for each country participating in the PISA 2009 for a reading and b math-and-science. Positive values indicate that the gender gap favours females. Error bars represent the 95% confidence intervals. Source data are provided as a Source Data file (study 1). When looking at the reading questions (panel (a) of Fig. 3 ), 64 out of the 74 grey confidence intervals are strictly positive. Consistent with previous research documenting that females perform better at reading than males, we found that they outperformed males in this domain at the beginning of the test. At the same time, female students were better able to sustain their performance in reading in 68 countries. This difference was statistically significant at the 5% level in 36 countries. On the reading questions, females performed better both at the beginning of the test and in sustaining their performance during the test. Consistent with the previous literature on gender gaps in math and science, for 58 out of the 74 participating countries, the grey confidence intervals are strictly negative (panel (b) of Fig. 3 ), which indicates that male students outperformed female students in initial performance in math and science. In contrast, in most of the countries, the black confidence intervals exhibit positive values, which implies that female students were better at sustaining performance in math and science during the test. Point estimates have a positive value in 68 countries and are statistically significant at the 5% level in 41 of them. The numerical estimates per country and the corresponding p values can be found in Supplementary Table 2 (two-sided t test). Despite male students having an initial advantage in the math and science domains, there was not a single country in which they were significantly better able to sustain their performance during the test. This finding suggests that longer cognitive tests exacerbate the gender gap in reading and shrink it in math and science. In line with the literature on the gender gap in math and science, female students scored lower at the beginning of the test in math and science by a statistically significant degree in 58 countries. According to our estimates, however, this gender gap was completely offset or even reversed in more than 20% of these countries after 2 h of test-taking. In more than 50% of these countries, female students decreased their initial disadvantage by at least one half by the end of the test. Supplementary Table 5 provides a country-by-country overview of the point in the test at which females closed the gender gap in math and science. Robustness checks for study 1 are available in Supplementary Notes 1 and 3 , Supplementary Figs. 1 – 5 and 9 – 20 , and Supplementary Tables 11 and 12 . We show, among other things, that our findings stand up to the use of different PISA waves (2006−2015) and different estimation methods. Study 1: potential determinants of the gender difference The combination of the two graphs in Fig. 3 provides evidence that the female ability to better sustain performance does not correspond to the gender gaps that exist in the domains being assessed. This leads us to disregard gender differences in domain-specific cognitive skills, or the stereotype threat associated with them 38 , 39 , 40 , 41 , as an explanation of our findings. 
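In terms of code, the two quantities plotted in Figs. 2 and 3 — the gap at the start of the test and the gap in sustaining performance — correspond to the female main effect and the female-by-position interaction in a linear probability model. Below is a minimal Python sketch with statsmodels under the same hypothetical item-level table as above; it is an illustration of the estimation idea, not the authors' code, and the published estimates additionally involve the PISA sampling design, which is omitted here.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pisa2009_items_ireland.csv")  # correct, position, female, student_id

# Rescale position to [0, 1] so the interaction term reads as the
# first-to-last-question change in the gender gap.
df["pos"] = (df["position"] - 1) / (df["position"].max() - 1)

model = smf.ols("correct ~ female + pos + female:pos", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]})

print(model.params["female"])      # gender gap at the start of the test
print(model.params["female:pos"])  # gender gap in sustaining performance

# For Ireland, a female:pos estimate of about 0.05 would reproduce the
# interpretation given above: a 5-percentage-point female advantage on
# the last question, given equal performance on the first.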
Given the discussion of the literature in the Introduction, we consider the following three potential explanations for our findings: (i) gender differences in noncognitive skills; (ii) gender differences in test-taking strategies; and (iii) gender differences in test effort. We will discuss each of these explanations in turn. Noncognitive skills are often defined as relatively enduring patterns of thoughts, feelings and behaviour, and this category includes “personality traits, goals, character, motivations, and preferences that are valued in the labour market, in school, and in many other domains” 42 . An advantage of the PISA is that student background questionnaires are used to construct validated measures of students’ noncognitive skills. In particular, the PISA measures that we considered are constructed with a minimum of 4 and a maximum of 11 separate items. These measures were validated in two ways: the separate items underlying the measures have a Cronbach’s alpha that is well above 0.7, and the measures that are thought to be related show strong correlations. The technical report of each PISA wave details how the measures were constructed and documents the results for the two validity exercises described above (see Chapter 16 of refs. 43 , 44 , 45 ). Each PISA wave collects a different set of noncognitive skills. To provide us with a broad range of measures, we extracted the validated measures from the 2006, 2009, and 2012 PISA waves. In particular, we retrieved information on the following: students’ interest in each specific domain, i.e., science, reading, and math, by using the PISA 2006, PISA 2009, and PISA 2012, respectively; students’ motivation towards the science and math domains by using the PISA 2006 and PISA 2012, respectively; students’ attitudes towards school and learning by using the PISA 2009 and PISA 2012; students’ self-efficacy and self-concept (which captures beliefs about proficiency in math) by using the PISA 2012; students’ intentions about their future studies and career by using the PISA 2006 and PISA 2012; and the four well-known noncognitive skills of conscientiousness, openness (in problem solving), neuroticism and locus of control (in math), by using the PISA 2012. We tested whether the validated measures were able to mediate the gender difference by including them in our model. For instance, as girls have been shown to have more developed attitudes towards learning, it might be the case that controlling for this attitude would mediate the gender difference. Further details on this methodology can be found in Supplementary Note 2 , which also provides a detailed overview of all the measures used. The country estimates for all the analyses of noncognitive skills are available in Supplementary Database 2 . Supplementary Table 6 documents, for each measure, the underlying items and the average gender difference across all PISA countries. For most of the noncognitive skills, we find gender differences that are consistent with the previous literature. Therefore, our data confirm that most of the measures above are possible candidates to mediate the finding of study 1. Note, however, that the measures for conscientiousness, openness, and internal locus of control favour male students in our data, which makes it less likely that these three constructs mediate our findings. For the measures of openness and locus of control, this might be explained by their focus on the domains of problem solving and mathematics, respectively. 
However, below, we consider alternative measures for these three constructs, for which we found gender differences favourable to females. Our results indicate that none of the validated measures were able to mediate the gender gap in sustaining performance. For instance, female students reported a higher interest in reading, and students with a higher interest in reading were also better able to sustain their performance during the test in 42 countries (statistically significant at the 5% level). However, we found that after controlling for this, the baseline gender difference was still present and statistically significant at the 5% level in 47 countries. Given that none of the validated measures were able to mediate the gender difference, one might argue that two relevant skills of the Big Five taxonomy were not controlled for, specifically, agreeableness and extraversion. To partially address this, we drew on two individual items for these remaining traits (“I get along well with most of my teachers” for agreeableness and “I make friends easily at school” for extraversion). These two items did not offer validated measures, but they were the best proxies available to us. Additionally, to complement the validated measures above, we collected information from the items that measure openness and locus of control. As the PISA does not use these items to construct validated measures, we performed a principal component analysis and used its first component as a measure of the two skills. Supplementary Table 7 provides an overview of all of these individual items and shows their similarity to some of the items used in validated scales, such as the Big Five Inventory 46 . The final column of Supplementary Table 7 documents that female students report higher levels of agreeableness, openness (on one of the three items), and internal locus of control. As before, these measures did not mediate the gender difference. However, as these proxies are not validated, we do not exclude the possibility that this finding was driven by a lack of proper measures. All previous measures were based on self-reports. Recent research has proposed and validated a non-self-reported measure of conscientiousness: careless answering behaviour in a survey 47 , 48 , 49 . Following this research, we calculated the proportion of questions that the students did not provide an answer to in the student background questionnaire to construct a non-self-reported measure of conscientiousness. Our data indicate that female students show higher levels of conscientiousness on this measure; the proportion of questions that the students did not provide an answer to was roughly 0.9 percentage points lower for females ( p value = 0.00, two-sided t test). As before, we found that this measure was unable to explain the gender difference, which further corroborates the findings above. A second explanation of our findings could be gender differences in test-taking strategies. We define test-taking strategies as any reason that leads a student to answer the questions in an order different from the order proposed by the test. For instance, certain students might be more inclined to first take a quick look at every question on the test and then answer the questions that they think are easy. We repeated the baseline analysis with data from the most recent PISA wave (2015). For this wave, the test was given on the computer in 58 countries, and navigation between question units was restricted. 
As such, we could be sure that the position of the question unit in the test was the actual position in which the unit was answered. Our findings reveal that the gender differences for this analysis are very similar; therefore, we can disregard the possibility that test-taking strategies are an important determinant of the gender difference. Further details on this analysis and its results are available in Supplementary Note 2 and Supplementary Fig. 6. We continue by analysing the role of test effort and test motivation more generally. The computer-based nature of the PISA 2015 allowed us to analyse two proxies for effort: the time spent per question and the number of actions per question. The time spent per question is measured in minutes, while the number of actions per question is a composite measure of the number of clicks, double-clicks, key presses, and drag/drop events. The PISA interface provides some tools to generate an answer, e.g., a calculator, which allows us to consider the number of actions as a proxy for test effort. Consistent with its being a measure of effort, in 48 out of the 58 countries we found a statistically significant positive correlation between the number of actions and answering a question correctly. With respect to time, more able students generally take more time to complete the test 50. We study the gender differences in the evolution of these two inputs during the test to check whether female students were better able to maintain their effort. Panel (a) of Fig. 4 shows that the time spent per question did not follow an obvious pattern by sex. Depending on the country, either female or male students decreased the amount of time spent per question more quickly, with most of the estimates being statistically insignificant. Panel (b) of Fig. 4 reveals that for most of the countries, the number of actions per question during the test decreased more quickly for females than for males. Similar to the analyses of noncognitive skills reported above, Supplementary Fig. 7 documents that the gender difference was robust to controlling for these two proxies for effort. Fig. 4 Gender differences in sustaining time spent per question and number of actions per question. The figures plot the estimates of the gender gap in sustaining a time spent per question and b the number of actions per question for each country participating in the PISA 2015. Positive values indicate that the gender gap favours females. Error bars represent the 95% confidence intervals. Source data are provided as a Source Data file (study 1). In light of these results, the gender difference in the ability to sustain performance does not seem to be driven by a difference in the inputs used to provide correct answers (i.e., domain-specific cognitive ability, time spent on an item, or actions taken to answer an item), but rather by the efficacy of the mental process that translates these inputs into a correct answer. Although we are unable to empirically test this hypothesis with the available data, it is consistent with the existence of a gender difference that arises when considering the temporal dimension of performance: boredom. Males have been found to experience higher levels of boredom in activities with a long duration, which might cause impaired performance after some time of test-taking 51, 52, 53, 54, 55, 56. We elaborate on this explanation at the end of Supplementary Note 2, where Supplementary Fig. 8
and Supplementary Table 8 document some suggestive evidence in favour of this explanation. Overall, we are unable to provide clear evidence on the important determinants of the gender difference in the ability to sustain performance. Although our results rule out the importance of test-taking strategies and many noncognitive skills, it might be that the relevant skills were not (properly) measured by the PISA. Moreover, our data do not allow us to directly test the hypothesis related to boredom. We conclude that this topic remains open for future research. Study 2 The findings in study 1 imply that longer tests could reduce the gender gap in math, whereas shorter tests might exacerbate it. We test this implication by using an existing dataset from Lindberg et al. 9, who amassed information on male and female performance on 441 math tests to conduct a meta-analysis on gender differences in mathematics performance 9. We were able to collect the number of questions for 203 of the 441 tests in this dataset, which we used as a proxy for test length. Further details on the dataset and the methodology of study 2 are provided in the Methods section. Table 1 shows the OLS estimates of regressing the standardized math gender gap on a constant and the number of questions on a test. It confirms that longer tests are associated with a smaller gender gap in math. Column (1) suggests that males perform approximately 0.2 standard deviations better than females on short tests. However, females are on par with males if the test reaches 125 questions. Column (2) shows that this result is robust to excluding an extreme test with 240 questions. Although these two columns directly use the data from the original study 9, we also compiled information on the performance of male and female students on the tests ourselves. Columns (3) and (4) show that the results are robust to our own calculation of the math gender gap and to reducing the weight to one-half for the studies (observations) that we coded differently than Lindberg et al. 9. Supplementary Note 4 and Supplementary Table 9 provide evidence that our results for study 2 are robust to additional checks. Table 1 Relationship between the gender gap in math and the number of questions. The finding that longer tests reduce the gender gap in math might also be explained by gender differences in test performance under time pressure 36. As illustrated in the Introduction, the change in testing time at Oxford University was framed in these terms 37. In contrast to the Oxford case, however, the results from study 2 are unlikely to be explained by a reduction in time pressure. To see this, note that the proxy that we considered for test length was the number of questions, and increasing the number of questions on a test does not necessarily relax the time pressure. The results from study 2 could be attributed to a reduction in time pressure only if the increase in the number of questions was accompanied by a more than proportional increase in the testing time. We also collected information on the maximum time to complete the test and found that this is not the case in our data. By taking natural logarithms and performing an OLS regression of the maximum time to complete the test on the number of questions, we observe that a 1% increase in the number of questions is associated with a 0.25% increase in the testing time (p value = 0.03, two-sided t test).
This implies that the available testing time increases less than proportionally to the number of questions. As such, time pressure is likely to be higher on tests with more questions, which makes females' higher ability to sustain performance a more likely explanation of the findings in study 2. Additional support in favour of this explanation arises from comparing the results of study 1 and study 2, which we document in Supplementary Note 4 and Supplementary Table 10. In particular, we show that the relationship between the math gender gap and test length in study 2 is strongly present in countries in Europe, Australia, and the Middle East and not present at all in Asian countries. We then confirm that the gender difference in study 1 mimics this pattern, with smaller gender differences in Asian countries. Discussion In this article, we documented that females are better able to sustain their performance during the cognitive task of completing a test. This result is present worldwide, robust to numerous checks, and sizable in terms of the highly studied gender gaps in test scores. In particular, for more than 20% of the countries where male students had an initial advantage in math and science, this gap was completely offset or even reversed after 2 h of test-taking. PISA scores receive an enormous amount of attention from policymakers. In many countries they are considered to be key indicators for the design and evaluation of educational policies. These facts emphasize the importance of our findings despite the low-stakes nature of the tests analysed. However, a natural question to ask is whether the gender difference in maintaining performance is also present in different tests with higher stakes. We address this question and provide three pieces of preliminary evidence that suggest that our finding is still present when higher stakes are at play. First, we find that the significant negative relationship between the math gender gap and the length of the test seen in study 2 persists, even if we only consider tests with stakes (see Supplementary Table 10). Second, we consider country differences in testing culture. According to recent research, test takers in Shanghai have higher intrinsic motivation than test takers in the US 57, while institutional promotion and motivational messages regarding international standardized tests are more prevalent in Asian countries 58. If higher stakes reduce gender differences when it comes to sustaining performance, we should observe less of a gender difference in Asian countries. We found that this is indeed the case, but the gender difference is not entirely eliminated; in 60% of the Asian countries, it is present and statistically significant. Considering the specific case of Shanghai 57, we found that male students from Shanghai significantly outperform female students at the beginning of the test in math and science by more than 3 percentage points, but female students significantly reduce this gender gap as the test continues, rendering it negligible by the end of the test (see Supplementary Table 5). Third, with the PISA data, we constructed a measure of subjective stakes by calculating the average number of unanswered questions per country, which we expected to be high if the test were considered to have low stakes. The idea is that, as the PISA test does not penalize incorrect answers, leaving a question unanswered is never optimal for a student interested in performing well on the test.
However, a cross-country regression of the gender differences in study 1 on the incidence of non-response does not reveal a significant positive relationship. Therefore, by considering three different approaches, we find evidence to suggest that the gender difference in the ability to sustain performance might be smaller, but is not absent, in tests with higher stakes. Further discussion and results are available in Supplementary Note 5. Our results contribute to the debate on the size of the math gender gap and on why it might differ across studies. For instance, although some studies have found that a gender gap in math is present at the elementary school level 8, other studies have shown that the math gender gap is small to non-existent, with the exception of high school-aged students 9. According to our findings, the length of the test is a moderating factor that may help to explain these heterogeneous findings in previous research. Promoting gender equality in STEM course enrolment and career choice is on the policy agenda of many governments worldwide. Gender-balanced test scores might help to achieve this objective. It has been found, for example, that arbitrary changes in early test scores that are unrelated to a student's ability in the evaluated domain can affect future enrolment decisions 59. With this in mind, our findings point to test length as a tool for reducing the gender gaps in test scores. Study 2 provides evidence that this tool might work for math tests, as it documents that longer tests have smaller gender gaps in math. As females are also better able to sustain their performance in reading, we expect shorter tests to have smaller gender gaps in reading. However, caution is needed for at least two reasons. First, study 2 does not exploit exogenous variation in test length, which makes a causal interpretation of the results challenging. Second, no changes in test length should be made without considering the potential consequences that they might have for test validity. Future experimental research should focus on these issues. The most notable implication of our study is that it highlights a female strength in test-taking that has largely been ignored and that deserves visibility and recognition. Gender differences in test performance in math and science have generally been perceived as a female weakness. Our findings could serve as a counterbalance to the gender stereotypes shaped by this perception. Moreover, these stereotypes may be unintentionally reinforced by the negative framing of compensation policies 60, 61. According to our study, a change in test design that has frequently been framed in terms of compensating for a female weakness, i.e., the extension of testing time that occurred at Oxford University, could instead be framed in terms of rewarding a valuable skill in which female students perform relatively better. Methods This study was conducted using publicly available data from third-party institutions. The Ethics Boards of Erasmus University Rotterdam and Universitat de les Illes Balears approved the analysis of these data. Data in study 1 The PISA is a triennial international test administered by the OECD, and it aims to evaluate 15-year-old students' skills and knowledge in math, science, and reading. Each PISA wave focuses on one of these three domains, which means that around one-half of the questions in the test are from this specific domain.
For our baseline results, we use the data of the 74 countries that participated in the PISA 2009, for which the main topic of evaluation was reading. This provides a fairly balanced distribution between the domain in which females perform better (reading) and the domains in which males perform better (math and science). Therefore, it allows us to separately analyse the gender differences in performance during the test in domains in which female or male students score relatively better. We use the data on each student's answer to every single question administered. By using the codebooks, we can see which question was placed in which position of the test. We also use the PISA 2006, PISA 2012, and PISA 2015, which focus on science, math, and science, respectively. All four PISA waves share two main characteristics that are important for investigating the gender difference in performance during the test. First, the PISA uses multiple versions of the test (booklets). As shown in Supplementary Table 3, the 2009 PISA has 20 different booklets: 13 standard booklets and 13 easier booklets, where 6 of the booklets belong to both categories. Each country opts for either the set of 13 standard or the set of 13 easier booklets. We include all of them in our investigation, as we are interested in analysing the gender differences within a country. All booklets contain four clusters of questions (test items), and the total test consists of approximately 60 test items. Each cluster of questions represents 30 min of testing time, which means that each student undergoes 2 h of testing. Students take a short break, typically of 5 min, after 1 h of testing. For both the standard and easier booklets, there are 13 clusters of test items (7 reading, 3 science, and 3 mathematics), and they are distributed over the respective set of 13 booklets according to a rotation scheme. Each cluster appears once in each of the four possible positions within a booklet 44. This means that one specific test item appears in four different positions of four different booklets. For the PISA 2015, the rotation scheme was somewhat more complicated, but the two characteristics necessary for identification remained (see Supplementary Note 1 for further details on the PISA 2015). Second, for all four PISA waves, these booklets are randomly assigned to students 43, 44, 45, 50. This random assignment ensures that the variation in question position that results from the ordering of clusters is unrelated to students' characteristics. Balancing tests confirm this random allocation. Supplementary Table 4 shows the results of separate regressions where background characteristics are regressed on booklet and country dummies for the PISA 2009 (country dummies are included because the same set of booklets is randomized only within a country). Almost all booklet dummies enter these regressions as insignificant, and for all regressions, the F-test for joint significance of the booklet dummies does not reject the null hypothesis. In our estimation, we include question fixed effects to exploit the exogenous variation in item ordering within a question across students. Two other important characteristics of the PISA are its international reach and its sampling procedure. First, worldwide participation allows us to analyse whether the gender difference is systematically present across countries and to investigate the external validity of our results. Second, the PISA uses a two-stage stratified sample design.
The first-stage sampling units consist of individual schools sampled from a comprehensive national list of all PISA-eligible schools. The second-stage sampling units are the students. Once schools are selected, a complete list of all 15-year-old students in the school is prepared. If this list contains more than 35 students, 35 of them are randomly selected (for the PISA 2015, this number was 42 students); otherwise, all of them are selected 44. Although gender is exogenous by nature, the PISA sampling process might have caused male and female students to not be equally represented across schools of similar quality. In our estimations, we control for the quality of the school via the inclusion of school fixed effects. Methodology in study 1 We first discuss the methodology proposed by previous work 1. This methodology is used to analyse student performance during a cognitive test, and involves estimating the following equation for each country separately (we refrain from using country subscripts in the notation): $$y_{ij} = \alpha _0 + \alpha _1\,Q_{ij} + u_{ij},$$ (1) where y ij is a dummy for whether student i answered question j correctly and Q ij is the position of question j in the version of the test answered by student i, normalized between 0 and 1, where 0 and 1 denote the first and last question of the test, respectively. α 1 describes whether the probability of answering a question correctly is affected by the position of the question on the test. Previous work 1 estimated Eq. ( 1 ) with data from the PISA 2003 and 2006, and showed that α 1 is negative for each country, a finding referred to as the performance decline. The constant ( α 0 ) of Eq. ( 1 ) represents the score of the average student at the start of the test ( Q ij = 0). We investigate the gender differences in performance during the test while making a distinction between the domain that favours females (reading questions R j ) and the domains that favour males (math and science (non-reading) questions N j ). Recall that the PISA 2009 was used to document our baseline results, as it had an equal division between these two domains. Given that clusters of questions vary in order between booklets and that booklets are randomly handed out to students, we propose estimating the following two models per country: $$y_{hij} = \beta _0 + \beta _1 F_i + \beta _2 Q_{ij} + \beta _3 F_i Q_{ij} + {\mathbf{J}}_j + {\mathbf{H}}_h + v_{hij},$$ (2) $$ y_{hij} = \gamma _0^R R_j + \gamma_0^N N_j + \gamma _1^R R_j F_i + \gamma _1^N N_j F_i +\gamma _2^R R_j Q_{ij} + \gamma _2^N N_j Q_{ij} + \gamma _3^R R_jF_i Q_{ij} + \gamma _3^N N_jF_i Q_{ij} + {\mathbf{J}}_j +{\mathbf{H}}_h+v_{hij},$$ (3) where h is a subscript for the school, F i is a gender dummy that equals 1 if student i is a female, and J j and H h are question and school fixed effects, respectively. By focusing on Eq. ( 2 ), we are interested in estimating β 3 , which tells us whether the female students are better able to sustain their performance during the test than the male students. β 3 is the estimate that is plotted in, among others, Fig. 2 . Equation ( 3 ) introduces topic dummies and interacts them with the variables for the question order and the gender dummy. Note that it does not include a constant or a separate coefficient for Q ij , as the variables R j and N j together cover all questions on the PISA test. Equation ( 3 ) has the exact same interpretation as Eq. ( 2 ), with the coefficients separated by topic R j and N j .
\(\gamma _1^R\) and \(\gamma _1^N\) measure the gender differences at the start of the test in reading and non-reading questions, respectively, whereas \(\gamma _3^R\) and \(\gamma _3^N\) measure the gender differences in the ability to sustain performance per topic. As the gender dummy takes a value of 1 for females, positive values of \(\gamma _1^T\) and \(\gamma _3^T\) indicate that female students have an initial advantage and a higher ability to sustain their performance in topic T ∈ { R , N }. \(\gamma _1^T\) and \(\gamma _3^T\) are the estimates that are plotted in, among others, Fig. 3 . Equation ( 3 ) delivers the main insights of our paper. It allows us to analyse the impact of the gender differences in performance during the test on the widely studied gender gaps. In particular, to evaluate the gender gaps at the beginning and end of the test, we define the following:
Gender gap at the start of the test = E [ y |female, start of test, topic = T ] − E [ y |male, start of test, topic = T ] = \((\gamma _0^T + \gamma _1^T) - \gamma _0^T = \gamma _1^T\)
Gender gap at the end of the test = E [ y |female, end of test, topic = T ] − E [ y |male, end of test, topic = T ] = \((\gamma _0^T + \gamma _1^T + \gamma _2^T + \gamma _3^T) - (\gamma _0^T + \gamma _2^T) = \gamma _1^T + \gamma _3^T\)
Both equations include question and school fixed effects. As described above, the order of (clusters of) questions differs between booklets, which, in turn, are randomly handed out to students. Conditional on question fixed effects, our strategy exploits within-question variation across students. As such, the identifying assumption becomes that there is random variation in the position of a question among different students. This assumption is likely to hold due to the random allocation of booklets to students. Moreover, by including school fixed effects, we control for school quality. As the PISA first samples schools and then randomly samples students within schools, it might be the case that male and female students are not equally represented across schools with similar levels of quality. Imagine a country with two schools: the first school has 80% males and is of high quality, and the second school has 80% females and is of low quality. We would find that male students perform better on the PISA test; however, this difference actually reflects a school characteristic, which we control for through the inclusion of school fixed effects. Our baseline results are estimated at the item level, but in Supplementary Note 1, we also estimate Eq. ( 2 ) at the cluster level while excluding the question fixed effects. As such, the unit of analysis exactly matches the unit of randomization: y hij represents the average performance within cluster j , and Q ij is the position of the cluster in the test. In our main specification, we consider skipped questions to be incorrectly answered and unreached questions as missing. We perform robustness checks concerning the treatment of unreached questions in Supplementary Note 3 . We use OLS to estimate Eqs. ( 2 ) and ( 3 ) and check for robustness with a probit model in Supplementary Note 3 . Throughout the paper, we cluster standard errors at the student level, which corrects for the heteroscedasticity that arises due to the binary nature of the dependent variable. Our baseline results are robust to clustering at the cluster level of the booklets, the item level, and the school level (the results are obtained from the Source Data file of study 1).
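To make the estimation of Eq. (2) concrete, here is a minimal sketch, not the authors' code (the published analyses were run in StataMP 14): a linear probability model with question and school fixed effects and student-clustered standard errors, fitted in Python on synthetic item-level data. All column names and parameter values are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the PISA item-level data.
rng = np.random.default_rng(1)
n_students, n_questions = 500, 40
df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n_students), n_questions),
    "question_id": np.tile(np.arange(n_questions), n_students),
})
df["female"] = (df["student_id"] % 2 == 0).astype(int)
df["school_id"] = df["student_id"] % 20
# Random booklet order -> exogenous variation in question position.
df["position"] = rng.permuted(
    np.tile(np.linspace(0, 1, n_questions), (n_students, 1)), axis=1
).ravel()
# Both sexes decline over the test; females decline less (beta_3 > 0).
p_correct = 0.65 - 0.10 * df["position"] + 0.04 * df["female"] * df["position"]
df["correct"] = rng.binomial(1, p_correct)

# Eq. (2): OLS with question (J_j) and school (H_h) fixed effects,
# standard errors clustered at the student level.
res = smf.ols(
    "correct ~ female * position + C(question_id) + C(school_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["student_id"]})
print(res.params["female:position"])  # estimate of beta_3

Because booklet order is randomized, the coefficient on the interaction term recovers β 3, the gender difference in the ability to sustain performance.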
We present the results without using the PISA sample weights, since absolute comparisons between countries are not our main interest. However, we verify that our baseline results are unchanged when using the weights (the results are obtained from the Source Data file of study 1). Data and methodology in study 2 To explore the implication that longer cognitive tests reduce the gender gap in math test scores, we extend an existing dataset from Lindberg et al. 9, who conducted a meta-analysis on the gender gap in math test scores. Their meta-analysis involved the identification of possible studies that investigated performance on math tests. By using computerized database searches, they generated a pool of potential articles. After careful selection, the final sample of studies included data from 441 math tests 9. For every test, the standardized gender gap (mgp) was calculated and stored in a dataset. The standardized gap was calculated by subtracting the mean performance of females ( X females ) from the mean performance of males ( X males ) and dividing this by the pooled standard deviation \(\left( {{\mathrm{mgp}} = \frac{{X_{{\mathrm{males}}} - X_{{\mathrm{females}}}}}{{\sigma _{\mathrm{p}}}}} \right)\). For every test in their dataset, we attempted to collect the following information from the original articles: the number of questions, the maximum time allowed to complete the test, and the stakes of the exam. If this information was not available in the original studies, we emailed the authors to ask for it. For 243 of the 441 tests included in the original dataset, we found evidence that they had to be completed within a certain time limit. Only these tests are of interest; without a time limit, there is no reason that a test should measure sustained performance. Tests without a time limit are, for example, tests conducted at home or outside class time. For 203 of the 243 tests, we were able to collect the number of questions, and for 175 exams, we collected the maximum time allowed to complete the test. Sample attrition does not seem to be a problem for two reasons. First, when we compare the average size of the gender gap on tests with a time limit to that on tests without a time limit (or for which we did not observe information about the time limit), we find that they are not significantly different. Second, for tests with a time limit, observing the number of questions does not correlate with the size of the gender gap. We investigated whether the standardized math gender gap on tests is related to the length of the test as measured by the number of questions. To this end, we performed a univariate OLS regression in which we explain the standardized math gender gap (mgp) on test i with a constant and the number of questions on the test (noq): $${\mathrm{mgp}}_i = \delta _0 + \delta _1{\mathrm{noq}}_i + w_i.$$ (4) Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The Source Data file and the Supplementary Databases are available in a public repository hosted by the Open Science Framework (OSF) under code V5KQY . The source data underlying Figs. 1, 2, 3a, b and 4a, b, Supplementary Figs. 1 , 2a, b , 3a, b , 4 – 7 , 8a, b , 9 , 10 , 11a, b , 12 – 16 , 17a, b , 18 – 20 , and Supplementary Tables 1 , 2 , 4 , 5 , 8 , 11 , and 12 are provided as a Source Data file (Study 1).
The source data underlying Table 1 and Supplementary Tables 9 and 10 are provided as a Source Data file (Study 2). The original PISA data are publicly available on the OECD website and can be downloaded via the following url: ( ). These data can be shared and adapted under the following license: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 IGO. Code availability All the statistical analyses were conducted using StataMP 14 software. The code file to replicate all the results reported in the figures and tables of the article using StataMP 14 is available via the public repository hosted by OSF, code V5KQY . Study 1 requires some processing of the publicly available PISA data. This process is briefly described in the readme file of the Source Data file of Study 1. The Stata code to transform the publicly available PISA data into the long format provided in the Source Data file is available upon request from the corresponding author.
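As a worked illustration of Eq. (4) and of the magnitudes reported in Table 1 (an intercept of roughly 0.2 standard deviations and a gap that closes at about 125 questions, which together imply δ 1 ≈ −0.0016 per question), the sketch below runs the study 2 regression in Python on simulated data; it is not the Lindberg et al. dataset.

import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in for the 203 timed tests with a known number
# of questions; mgp is the standardized math gender gap (male - female).
rng = np.random.default_rng(2)
noq = rng.integers(10, 240, size=203)
mgp = 0.2 - 0.0016 * noq + rng.normal(scale=0.15, size=203)

# Eq. (4): regress the gap on a constant and the number of questions.
res = sm.OLS(mgp, sm.add_constant(noq)).fit()
delta0, delta1 = res.params
print(delta0, delta1)    # ~0.2 and ~-0.0016
print(-delta0 / delta1)  # implied length at which the gap closes, ~125

# Elasticity check reported in the paper: a 1% increase in questions is
# associated with only a 0.25% increase in testing time, so time pressure
# per question rises, rather than falls, on longer tests.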
A team of researchers from Universitat de les Illes Balears and Erasmus University Rotterdam has found that female students score better than male students on tests over two hours long. In their paper published in the journal Nature Communications, the researchers describe their study of test results of students taking the Programme for International Student Assessment (PISA) and what they found. For many years, test results have shown that female students achieve higher PISA scores on reading and verbal tests, while boys do better on math and science. These findings have led to efforts by school administrators to improve the way students are taught, particularly girls, who tend to shun STEM fields. But now, it appears there is more to the test results than was previously thought. To learn more about the differences in test taking between genders, the researchers studied PISA results from 2006 to 2015 in a new way. They noted that prior research had shown that students give their best answers at the beginning of a test and grow progressively worse as the test continues. They report that when they tracked the progression of test taking with the students, they found that the performance of the males degraded more than that of the females, which led to smaller overall performance differences on longer tests. As an example, they noted that during the early stages of math tests, the boys were outperforming girls by up to 6 percent. But by the end of the test, performance was nearly equal. The researchers suggest their findings indicate that girls are able to maintain their concentration on tests for longer periods than boys. To confirm their findings, the researchers also looked at the data from a prior study that involved performance by students who had completed 441 tests of varying lengths. They report that they found the same trend—longer tests had a smaller gender gap. The researchers suggest that girls have more self-discipline and tend to take a more serious approach to learning. They also note that prior studies have shown that women are better at planning, and might be using that ability when working on tests that take a lot of time.
10.1038/s41467-019-11691-y
Space
Study: Diamond from the sky may have come from 'lost planet'
Farhang Nabiei et al. A large planetary body inferred from diamond inclusions in a ureilite meteorite, Nature Communications (2018). DOI: 10.1038/s41467-018-03808-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-03808-6
https://phys.org/news/2018-04-diamond-sky-lost-planet.html
Abstract Planetary formation models show that terrestrial planets are formed by the accretion of tens of Moon- to Mars-sized planetary embryos through energetic giant impacts. However, relics of these large proto-planets are yet to be found. Ureilites are one of the main families of achondritic meteorites, and their parent body is believed to have been catastrophically disrupted by an impact during the first 10 million years of the solar system. Here we studied a section of the Almahata Sitta ureilite using transmission electron microscopy, in which large diamonds were formed at high pressure inside the parent body. We discovered chromite, phosphate, and (Fe,Ni)-sulfide inclusions embedded in diamond. The composition and morphology of the inclusions can only be explained if the formation pressure was higher than 20 GPa. Such pressures suggest that the ureilite parent body was a Mercury- to Mars-sized planetary embryo. Introduction Asteroid 2008 TC 3 fell in 2008 in the Nubian desert in Sudan 1 , and the recovered meteorites, called Almahata Sitta, are mostly dominated by ureilites along with various chondrites 2 . Ureilite fragments are coarse-grained rocks mainly consisting of olivine and pyroxene, originating from the mantle of the ureilite parent body (UPB) 3 , which was disrupted following an impact in the first 10 Myr of the solar system 3 . A high concentration of carbon distinguishes ureilites from all other achondrite meteorites 3 , with graphite and diamond present between silicate grains. Three mechanisms have been suggested for diamond formation in ureilites: (i) shock-driven transformation of graphite to diamond during a high-energy impact 4 , (ii) growth by chemical vapor deposition (CVD) from a carbon-rich gas in the solar nebula 5 , and (iii) growth under static high pressure inside the UPB 6 . A recent observation 7 of a fragment of the Almahata Sitta ureilite (MS-170) revealed clusters of diamond single crystals that have almost identical crystallographic orientations and are separated by graphite bands. It was thus suggested that individual diamond single crystals as large as 100 μm existed in the sample, which were later segmented through graphitization 7 . The formation of such large single-crystal diamond grains, along with the δ 15 N sector zoning observed in diamond segments 7 , is impossible during a dynamic event 8 , 9 due to its short duration (up to a few seconds 10 ), and even more so by CVD mechanisms 11 , leaving static high-pressure growth as the only possibility for the origin of the single-crystal diamonds. Owing to their stability, mechanical strength and melting temperature, diamonds very often encapsulate and trap minerals and melts present in their formation environment, in the form of inclusions. In terrestrial diamonds, this has made it possible to estimate the depth of diamond formation, and to identify the composition and petrology of phases sampled at that depth. Therefore, diamonds formed inside the UPB can potentially hold invaluable information about its size and composition. In this study, we investigated the Almahata Sitta MS-170 section using transmission electron microscopy (TEM) and electron energy-loss spectroscopy (EELS). We studied the diamond–graphite relationship and discovered different types of inclusions, which we characterized chemically by energy-dispersive X-ray (EDX) spectroscopy, crystallographically by electron diffraction, and morphologically by TEM imaging.
The composition and mineralogy of these inclusions point to pressures in excess of 20 GPa inside the UPB, which in turn implies a planetary body ranging in size between Mercury and Mars. Results Diamond–graphite relationship The diamond matrix shows plastic deformation, as evidenced by the high density of dislocations, stacking faults and a large number of {111} deformation twins (Supplementary Fig. 1 ). While uninterrupted twins show no sign of graphitization, deformation twins that intersect an inclusion transform to graphite (Fig. 1 , Supplementary Fig. 2 ), while keeping their original morphology. Thus, the diamond–graphite grain boundary forms parallel to the {111} planes of diamond (Supplementary Note 1 ). Fig. 1 Graphitization of diamond along twinning directions. a The high-angle annular dark-field (HAADF) STEM image shows two twinning regions, indicated as twin 1 and twin 2. Twin 1 intersects two inclusions (indicated by orange arrows) and is graphitized, while twin 2 is purely diamond. b The graphite-diamond EELS map (from the dashed blue rectangle in panel a ) indicates that the graphitization is confined to the twinning region and around the inclusions (red = graphite, blue = diamond). The sample shown in Fig. 2 consists of several diamond segments with close crystallographic orientations, separated by graphite bands. Inclusion trails can be seen extending from one diamond segment into the next, while disappearing in the in-between graphite band (Fig. 2b ). This is undeniable morphological evidence that the inclusions existed in diamond before the diamonds were broken into smaller pieces by graphitization. Similar to the graphitized twins, the graphite bands in Fig. 2 have grain boundaries parallel to the {111} planes of diamond (Supplementary Fig. 3 and Supplementary Note 1 ). Thus, the most likely cause of graphitization is the shock event in which the diamond matrix was severely deformed 12 , 13 . Elevated temperature during the shock, as well as stress concentration around the inclusions, promotes the graphitization process 13 , 14 . Fig. 2 Inclusion trails imaged inside diamond fragments. a HAADF-STEM image of diamond segments with similar crystallographic orientation. Dashed yellow lines show the diamond–graphite boundaries. b High-magnification image corresponding to the green square in a . Diamond and inclusion trails are cut by a graphite band. The dashed orange line shows the direction of the inclusion trails. Iron–sulfur type inclusions in diamond The overwhelming majority of inclusions are iron-rich sulfides, found either as isolated grains with sizes up to a few hundred nanometers, or as trails of small particles ranging from 50 nm down to a few nanometers (Fig. 3 and Supplementary Fig. 4 ). All the inclusions are faceted, indicating that they were trapped as solid crystalline phases rather than melts. However, they show evidence of transformation to low-pressure phases during decompression, similarly to those found in deep terrestrial diamond inclusions 15 . Both chemical and crystallographic analyses (Supplementary Table 1 and Supplementary Fig. 5 ) show that the sulfide inclusions have dissociated into three phases (Fig. 2c ): FeS-troilite, (Fe,Ni)-kamacite, and minor amounts of (Fe,Ni) 3 P-schreibersite. The latter either dissociates to a separately detectable phosphide phase in larger inclusions (Fig. 3 and Supplementary Fig. 4 ), or concentrates at grain boundaries in smaller inclusions (Supplementary Fig. 4 ).
It is noteworthy that troilite, kamacite, and schreibersite are never found as isolated mono-mineralic inclusions in the diamonds, but always together inside a very sharply defined polyhedral arrangement; two arguments promoting the idea that these inclusions crystallized as a single Fe–Ni–S–P phase during diamond formation, which later decomposed into different phases. This is further confirmed by the constant and stoichiometric bulk chemical composition of these inclusions. In order to avoid any sampling bias in such multicomponent inclusions, the composition was measured only on those grains that were completely embedded inside the diamond host, as determined by electron tomography, leaving aside those that had been partially cut during focused ion beam (FIB) preparation. We found an average molar (Fe + Ni)/(S + P) ratio of 2.98±0.36 from 29 sulfide inclusions (Supplementary Table 2 ), which corresponds to an (Fe,Ni) 3 (S,P) initial mineralogy. (Fe,Ni) 3 P-schreibersite and (Fe,Ni) 3 S have the same space group (tetragonal I \(\bar 4\) ) and their lattice parameters are very close 16 , 17 , allowing them to form a solid solution at high pressures as (Fe,Ni) 3 (S,P) 16 , 17 across the entire compositional S–P join. Fig. 3 Electron micrograph and compositional maps of diamond inclusions in ureilite. HAADF-STEM images ( a , b , c , and d ) and associated Fe and S elemental maps ( e , f , g , and h ) of inclusions in diamond. All chemical (EDX) maps show Fe (light blue) and S (red) distribution. Kamacite and troilite phases appear as light blue and reddish-pink, respectively. The pressure stability of the Fe 3 (S,P) phase depends 18 on its composition (Supplementary Note 2 and Supplementary Fig. 6 ), and ranges from 21 GPa for Fe 3 S down to room pressure for Fe 3 P, allowing us to use the P/(S + P) ratio as an internal thermobarometer. Phosphorus has no effect on the stability for P/(S + P) between 0 and 0.2: in this range, Fe 3 (S,P) is only stable above 21 GPa 18 (Supplementary Fig. 6 ), just like Fe 3 S. The average P/(S + P) of the inclusions observed here is 0.12±0.02 (Supplementary Table 2 ), and therefore they can only have formed above 21 GPa. Similarly, the inclusions contain nickel, with Ni/(Fe + Ni) = 0.068 ± 0.011, which could also have an effect on the stability pressure of (Fe,Ni) 3 (S,P), with Ni 3 S (isostructural with Fe 3 S 19 ) stable only above 5.1 GPa. We lack the experimental work to evaluate the pressure effect of Ni substitution for Fe, but assuming a linear dependence of the stability pressure on Ni content, the (Fe,Ni) 3 (S,P) inclusions would only form above ~20 GPa (Supplementary Note 2 and Supplementary Fig. 7 ). It is noteworthy that pressure-composition phase diagrams are often concave downward, and there could be, just as with S–P substitution, no effect on pressure at such low Ni concentrations, so that 20 GPa is actually a lower bound for the inclusions' formation pressure (Supplementary Fig. 7 ). Chromite and phosphate inclusions in diamond A second type of inclusion, Cr 2 FeO 4 chromite, is rare (with only a few grains identified in the samples) but rather large, with grains a few hundred nanometers across (Supplementary Fig. 8 ). The mineralogy of the chromite grains is well preserved, and chemical analysis confirms a stoichiometric Cr 2 FeO 4 chromite (Supplementary Note 3 ), with no Mg or Al substitution for Fe and Cr, respectively. While chromite is often observed in meteorites, Mg- and Al-free end-members are only found in iron meteorites 20 , 21 , 22 .
It has been proposed that such end-members must form in a metallic melt with low Cr and O concentrations close to the Fe–FeS join 22 , 23 . Therefore, these chromites must have formed in an iron-rich environment. Finally, rare Ca–Fe–Na phosphate inclusions were found, roughly 20 nanometers across or smaller (Supplementary Fig. 8 ), which were only characterized chemically due to their small size (structural characterization was precluded by overlap with the surrounding diamond). These inclusions are chemically similar to the ones observed in iron meteorites, where they are the most common companions of pure Cr 2 FeO 4 chromites 24 (Supplementary Note 3 ). Iron–sulfur type inclusions in graphite Whereas the polyhedral shapes and consistent bulk composition of the inclusions in diamond show that these phases were a single homogeneous solid phase at the time of diamond formation, the morphology of inclusions in neighboring graphitized bands shows evidence of melting (Figs. 2a and 4 , Supplementary Fig. 9 ). Indeed, Fe- and S-bearing phases of varying composition and arbitrary shape are dispersed in the graphitized areas and between graphite layers (Figs. 2a and 4 , Supplementary Fig. 9 ), which provides evidence for melting of the inclusions at the time of graphitization, and yet another indication that graphitization is subsequent to diamond formation. This also explains the transformation of the original (Fe,Ni) 3 (S,P) solid solution to the kamacite, troilite and schreibersite phases while keeping the polyhedral shape and bulk composition of the initial parental phase. Graphitization is likely caused by a shock event, which is followed by separation from the parent body and, therefore, a pressure drop. That same shock event should melt the inclusions, which then recrystallize after the pressure drop as kamacite, troilite and schreibersite, the equilibrium phases at low pressures. The volume change during melting would also add to the strain concentration around the inclusions, which in turn facilitates the graphitization process. Fig. 4 Electron micrograph and chemical map of an inclusion in a graphitized region. a Bright-field (BF) STEM image and b chemical (EDX) map of graphite growth in the diamond matrix around an inclusion. Blue dashed lines indicate the diamond–graphite boundary. The yellow arrows point out the Fe–S-rich regions in graphite. Notice the clearly rounded form of the inclusion in the graphitized part, indicating partial melting. Discussion The segment sizes of the diamonds were not measured in this study; however, the segments we used for sample preparation were all over 10 μm in diameter. Our results also confirm the previous suggestion that the large diamond crystallites were later segmented through graphitization during a shock event. Thus, considering previous studies using electron backscatter diffraction 7 , we can conclude that there were diamond grains as large as 100 μm in this particular meteorite. The surprisingly large size of the diamond grains, and specifically the δ 15 N sector zoning 7 , is incompatible with formation by shock metamorphism. Indeed, laboratory shock experiments generally last nanoseconds, and natural shocks by impact in the solar system have durations ranging from microseconds up to at most a few seconds 10 . The typical grain size for shock-produced diamond is on the order of a few nanometers up to a few tens of nanometers 8 , 9 , 25 .
Diamond composite aggregates can reach several hundred microns in exceptional cases like the Ries and Popigai craters, where graphitic precursors are known 9 , 26 . However, the crystallite size in these aggregates never exceeds 150 nm 8 , 9 , 25 . In contrast, the diamond grain size we observe in the Almahata Sitta MS-170 samples is 2–4 orders of magnitude larger than that of shock-produced diamonds 7 . Such large diamonds are even less likely to grow by CVD in the solar nebula 11 . Moreover, the existence of inclusions in these diamonds and the pressure required to form them (above 20 GPa) clearly rule out the CVD growth mechanism. Therefore, we can distinguish two distinct types of diamond in ureilites: multigrain diamond resulting from shock events producing clumps of nm-sized individual diamonds 4 , and large diamonds up to 100 µm in diameter growing at high static pressure inside the proto-planet 7 , subsequently broken down into equally oriented segments several tens of micrometers in diameter. Ureilites are unique samples from the mantle of a differentiated parent body. It has been shown that the temperature inside the UPB was higher than the Fe–S eutectic temperature 27 , 28 (~1250 K at ambient pressure 29 , ~1350 K at 21 GPa 16 ). Therefore, an Fe-S melt must have percolated and segregated to form a sulfur-bearing metallic core 27 , 28 , but the temperature was never high enough for complete melting of silicates and metallic iron 30 , and the core formation process continued until the UPB's mantle reached 20–30 vol% of melt fraction 31 . The composition of the chromite inclusions in the diamonds shows that they formed from an iron-rich composition without any interaction with silicates. Otherwise, the chromite would have accommodated Mg and Al in its composition, similarly to the previously reported chromites in ureilite meteorites 32 . This corroborates the formation of the sulfide, chromite, and phosphate inclusions in a metallic liquid. Moreover, the Fe–C binary system also has a eutectic point (~1400 K at ambient pressure) 33 . Fe–C and Fe–S liquids are immiscible at ambient pressure, but the miscibility gap closes with increasing pressure above 4–6 GPa (depending on the composition) 34 , 35 , 36 . Therefore, for a carbon-rich body such as the UPB, we can expect a single Fe–S–C liquid at high pressures. It has recently been shown that large terrestrial diamonds have formed from an Fe–S–C (with Ni and P) liquid 37 . Fe 3 S and diamond are the first solids to crystallize (liquidus phases) on the iron-poor side of the Fe–S and Fe–C eutectics, respectively; it is therefore likely that they can crystallize simultaneously from a cooling Fe–S–C liquid above 20 GPa inside the UPB. Although an experimental study of the Fe–S–C ternary system is required to examine this possibility, the distribution of the iron–sulfur inclusions in the diamonds supports this idea. The arrangement of small inclusions in vein-like trails (Fig. 2 ) is consistent with formation from a liquid phase at the same time as, or in the immediate aftermath of, the solidification of the UPB (depending on the UPB's thermal history), rather than with the transformation of graphite to diamond at depth. This is corroborated by the widespread distribution of (Fe,Ni) 3 (S,P) inclusions in diamond, which is unlikely to arise by diffusion inside a graphitic precursor. There is considerable debate on the size of the UPB 3 , 38 , 39 .
A body of at least ~1000 km in diameter was recently suggested to account for the pressure required to form diamond (above 2 GPa) in the depths of its mantle 7 . Here we show that these diamonds contain inclusions that can only form above ~20 GPa, which can only be attained in a large planetary body. If the diamonds formed at the core-mantle boundary, the UPB would be Mars-sized. The lower bound on its size corresponds to the diamonds forming at the center of the UPB, and a 20 GPa center is consistent with a Mercury-sized body. Although this is the first compelling evidence for such a large body that has since disappeared, the existence of such bodies in the early solar system has been predicted by planetary formation models 40 . Moon- to Mars-sized planetary embryos formed either by runaway 41 and oligarchic growth 42 of planetesimals or by pebble accretion 43 in the first million years of the solar system. Mars-sized bodies (such as the giant impactor that formed the Moon 44 , 45 ) were common 43 , and either accreted to form larger planets, collided with the Sun, or were ejected from the solar system 46 , 47 . This study provides convincing evidence that the ureilite parent body was one such large “lost” planet before it was destroyed by collisions 48 . Methods Focused ion beam sample preparation Samples for TEM investigations were prepared using the conventional in situ lift-out technique in a Zeiss NVision 40 dual-beam instrument. The polished surface of the MS-170 section from the Almahata Sitta meteorite was coated with ~15 nm of carbon to increase the conductivity during FIB milling. After the target diamond grains were identified (secondary electron detector at 5 kV), they were coated with ~2 μm of amorphous carbon (ion-beam-induced deposition) in order to protect the region of interest during ion milling. The diamond grain was milled with Ga + ions at 30 kV, starting with a 27 nA current and going down to 700 pA, until we obtained a ~1 μm thick slice. This slice was then transferred and attached to a copper grid with a carbon deposition. To make the slice electron transparent (to ~100 nm in thickness), it was thinned down with low beam currents (ranging from 700 pA down to 80 pA). Finally, the slice was polished with Ga + ions at 5 kV and 2 kV using 30 pA and 25 pA beam currents, respectively. Five thin sections were prepared for transmission electron microscopy (TEM) studies. Electron energy-loss spectroscopy EELS analysis was performed on a FEI Titan Themis TEM operated at 80 kV. The carbon K-edge was recorded by electron spectroscopic imaging (ESI) in scanning TEM (STEM) mode with dual-channel EELS for near-simultaneous low-loss and core-loss acquisition, with a dispersion of 0.1 eV/channel, an entrance aperture of 2.5 mm and a camera length of 115 mm, resulting in a convergence angle of 3.78 mrad and a collection semi-angle of 5.1 mrad satisfying the magic angle condition (MAC). The MAC allows the determination of the ratio of sp 2 / sp 3 bonding in carbon (R-ratio) independent of specimen orientation 49 , 50 . A spectrum-fitting method, which uses Gaussian or Lorentzian functions (or a combination of both) to fit the peaks accounting for the π* and σ* states, was applied to determine the R-ratio maps. The sp 2 / sp 3 ratio of highly oriented pyrolytic graphite (HOPG) was used as a reference to normalize the meteorite R-ratio maps. A channel-to-channel gain variation and dark current correction were applied to all EEL spectra.
This allows us to conclude that the carbon phases present are either pure cubic diamond or pure graphite. Reference spectra were then obtained for pure diamond ( S D ) and pure graphite ( S G ). Each pixel spectrum ( S Px ) from the EELS maps is linearly fitted with the two reference spectra as below: $$S_{{\rm px}} = k_1S_{\rm G} + k_2S_{\rm D}.$$ The graphite/(diamond + graphite) ratio is obtained as k 1 /( k 2 + k 1 ). This is illustrated as an RGB map (Fig. 1 ), where red, green, and blue correspond to the graphite ratio, zero, and the diamond ratio, respectively. Weak-beam imaging and electron diffraction The weak-beam dark-field imaging technique was used to observe dislocations and stacking faults in the diamond. This technique allows the observation of defects with sharper contrast compared to the background. We first tilted the specimen to satisfy a systematic-row two-beam diffraction condition (direct beam and g reflection excited). From this setting, the beam was tilted slightly to excite the 3 g reflection. The g reflection was selected by the objective aperture to acquire a weak-beam dark-field ( g ,3 g ) image, whose signal is very sensitive to deformation fields around dislocation cores and stacking faults. This imaging technique, as well as the electron diffraction analysis, was conducted on a FEI Tecnai Osiris machine at 200 kV. Nano-diffraction was performed using the smallest condenser aperture and largest spot size in “nano-beam” mode. Although the resulting beam was not completely parallel, the diffraction spots were sharp and small enough for accurate indexing. STEM imaging and EDX analysis STEM imaging and EDX analysis were performed on a FEI Tecnai Osiris microscope at 200 kV. This microscope is equipped with four window-less silicon drift (SDD) EDX detectors and the Esprit 1.9 acquisition software from Bruker. The large effective area of the four detectors significantly increases the photon count rate. However, it also suffers from shadowing effects that might affect the accuracy of the quantification. To avoid this, quantification was done on EDX maps acquired at a 20° sample tilt using only the two detectors facing the sample (the other two detectors were switched off). To determine the Fe/S ratio, a troilite reference in equilibrium with kamacite inside an inclusion was used. The identity of the troilite was confirmed by electron diffraction. The measured error was below 4%. STEM tomography Tilt series of high-angle annular dark-field (HAADF) STEM images were acquired from different regions of the section. The HAADF detector had a collection angle larger than 63.8 mrad or 100.1 mrad, corresponding to camera lengths of 91 mm and 58 mm, respectively, in order to reduce the contribution of diffraction contrast to the images. The electron beam convergence angle was set to 10 mrad in order to increase the depth of focus. Tilt series were acquired using a tomography sample holder (Fischione model 2040) on a FEI Titan Themis microscope operated at 300 kV. High-magnification series were obtained from −72 to 72 degrees with a step size of 2 degrees; these were used to observe the faceted shapes of the inclusions. Several other tilt series were then acquired at lower magnification from −54 to 54 degrees with 2-degree intervals. The purpose of these tilt series was to identify the position of inclusions inside the diamond matrix; the inclusions thus identified as uncut were taken for EDX quantification.
All reconstructions and visualizations were done using the Inspect3D and Chimera software packages, respectively. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
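The per-pixel decomposition described in the EELS methods, fitting each pixel spectrum as S_px = k1·S_G + k2·S_D and mapping the graphite ratio k1/(k1 + k2), amounts to linear least-squares unmixing. Below is a minimal sketch of that step, with toy Gaussian reference spectra standing in for the measured graphite and diamond references; it is an illustration, not the authors' processing code.

import numpy as np

def graphite_fraction(spectra, s_graphite, s_diamond):
    """Fit each pixel spectrum as k1*S_G + k2*S_D by least squares
    and return the graphite ratio k1 / (k1 + k2) per pixel.

    spectra: (n_pixels, n_channels) array, one EEL spectrum per pixel.
    """
    A = np.column_stack([s_graphite, s_diamond])            # (n_channels, 2)
    coeffs, *_ = np.linalg.lstsq(A, spectra.T, rcond=None)  # (2, n_pixels)
    k1, k2 = coeffs
    return k1 / (k1 + k2)

# Toy example: synthetic references around the carbon K-edge (eV).
channels = np.linspace(280, 320, 200)
s_g = np.exp(-(channels - 285.0) ** 2 / 4.0)  # pi*-like peak (graphite)
s_d = np.exp(-(channels - 292.0) ** 2 / 9.0)  # sigma*-like edge (diamond)
mixed = 0.3 * s_g + 0.7 * s_d                 # a 30% graphite pixel
print(graphite_fraction(mixed[None, :], s_g, s_d))  # ~[0.3]

In the RGB maps of Fig. 1, this per-pixel ratio is what drives the red (graphite) and blue (diamond) channels.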
Fragments of a meteorite that fell to Earth about a decade ago provide compelling evidence of a lost planet that once roamed our solar system, according to a study published Tuesday. Researchers from Switzerland, France and Germany examined diamonds found inside the Almahata Sitta meteorite and concluded they were most likely formed by a proto-planet at least 4.55 billion years ago. The diamonds in the meteorite, which crashed in Sudan's Nubian Desert in October 2008, have tiny crystals inside them that would have required great pressure to form, said one of the study's co-authors, Philippe Gillet. "We demonstrate that these large diamonds cannot be the result of a shock but rather of growth that has taken place within a planet," he told The Associated Press in a telephone interview from Switzerland. Gillet, a planetary scientist at the Federal Institute of Technology in Lausanne, said researchers calculated a pressure of 200,000 bar (2.9 million psi) would be needed to form such diamonds, suggesting the mystery planet was at least as big as Mercury, possibly even Mars. Scientists have long theorized that the early solar system once contained many more planets—some of which were likely little more than a mass of molten magma. One of these embryo planets—dubbed Theia—is believed to have slammed into a young Earth, ejecting a large amount of debris that later formed the moon. Inclusion trails imaged inside diamond fragments. a HAADF-STEM image from diamond segments with similar crystallographic orientation. Dashed yellow lines show the diamond–graphite boundaries. b High-magnification image corresponding to the green square in a. Diamond and inclusion trails are cut by a graphite band. The dashed orange line shows the direction of the inclusion trails. Credit: Nature Communications (2018). DOI: 10.1038/s41467-018-03808-6 "What we're claiming here," said Gillet, "is that we have in our hands a remnant of this first generation of planets that are missing today because they were destroyed or incorporated in a bigger planet." Addi Bischoff, a meteorite expert at the University of Muenster, Germany, said the methods used for the study were sound and the conclusion was plausible. But further evidence of sustained high pressure would be expected to be found in the minerals surrounding the diamonds, he said. Bischoff wasn't involved in the study, which was published in the journal Nature Communications.
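The pressure-to-size reasoning in the study and in Gillet's remarks can be sanity-checked with a uniform-density interior model, P(r) = (2/3)·π·G·ρ²·(R² − r²). This is only a back-of-envelope approximation (real interiors with dense iron cores and self-compression reach higher pressures), and the radii and densities below are generic Mercury- and Mars-like values rather than figures from the paper.

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pressure_gpa(radius_km, density, r_km=0.0):
    """Pressure at radius r inside a uniform-density, self-gravitating
    sphere: P(r) = (2/3) * pi * G * rho^2 * (R^2 - r^2), in GPa.
    A rough lower bound; real layered interiors reach higher values.
    """
    R, r = radius_km * 1e3, r_km * 1e3
    return (2.0 / 3.0) * np.pi * G * density**2 * (R**2 - r**2) / 1e9

# Centre of a Mercury-sized body (R ~ 2440 km, mean density ~ 5430 kg/m^3):
print(pressure_gpa(2440, 5430))        # ~25 GPa
# Core-mantle boundary of a Mars-sized body (R ~ 3390 km, core ~ 1830 km):
print(pressure_gpa(3390, 3930, 1830))  # ~18 GPa

Both estimates land near the 20 GPa (200,000 bar) threshold inferred from the inclusions, which is why that pressure points to a parent body somewhere between Mercury and Mars in size.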
10.1038/s41467-018-03808-6
Biology
Neuroscientists identify neural patterns birds use to learn their songs
Tatsuo S. Okubo et al. Growth and splitting of neural sequences in songbird vocal development, Nature (2015). DOI: 10.1038/nature15741 Journal information: Nature
http://dx.doi.org/10.1038/nature15741
https://phys.org/news/2015-11-neuroscientists-neural-patterns-birds-songs.html
Abstract Neural sequences are a fundamental feature of brain dynamics underlying diverse behaviours, but the mechanisms by which they develop during learning remain unknown. Songbirds learn vocalizations composed of syllables; in adult birds, each syllable is produced by a different sequence of action potential bursts in the premotor cortical area HVC. Here we carried out recordings of large populations of HVC neurons in singing juvenile birds throughout learning to examine the emergence of neural sequences. Early in vocal development, HVC neurons begin producing rhythmic bursts, temporally locked to a ‘prototype’ syllable. Different neurons are active at different latencies relative to syllable onset to form a continuous sequence. Through development, as new syllables emerge from the prototype syllable, initially highly overlapping burst sequences become increasingly distinct. We propose a mechanistic model in which multiple neural sequences can emerge from the growth and splitting of a common precursor sequence. Main Sequences of neural activity have been observed during various behaviours, including navigation 1 , 2 , 3 , 4 , short-term memory 5 , 6 , 7 , decision making 8 , 9 , and complex movements 10 , 11 , suggesting that neural sequences are a fundamental form of brain dynamics 12 , 13 . However, the circuit mechanisms underlying the generation of neural sequences and their development during learning are not well understood. The songbird is a good model system to address such questions because the song produced by adults is learned during development 14 , 15 , 16 , 17 , 18 . Furthermore, adult song is associated with neural sequences in nucleus HVC 19 , 20 , 21 , 22 , 23 , 24 , a premotor cortical area necessary for the production of stereotyped adult song 25 , 26 , 27 , 28 , 29 , 30 . Most projection neurons in HVC generate a brief burst of spikes at one specific time in the song motif and different neurons are active at different times in the song 19 , 20 , 21 , 22 , 23 , 24 , 30 ; thus, distinct syllable types are produced by largely non-overlapping neural sequences in HVC. Here we ask how these different neural sequences are constructed during vocal development. Zebra finches acquire their stereotyped song through a gradual learning process 14 , 31 . Young birds initially produce a highly variable ‘subsong’ 31 , akin to human babbling 15 . Birds then enter the protosyllable stage as they begin to incorporate syllables of a characteristic ~100 ms duration 32 , 33 , 34 , 35 . This is followed by the gradual emergence of multiple syllable types 32 , 33 , 36 , and a final ‘motif’ stage in which syllables are produced in a reliable sequence. While HVC activity is not required for subsong 27 , 34 , 35 , it is required for song components in all later stages, including protosyllables, emerging syllable types, and adult song 25 , 26 , 27 , 28 , 34 , 35 . Developmental progression of HVC activity To elucidate the mechanisms by which neural sequences in HVC develop, we recorded from populations of HVC projection neurons in juvenile and adult birds ( n = 1,149 neurons, 35 birds; Extended Data Fig. 1a ). At all stages of vocal development, HVC projection neurons generated brief bursts of spikes during singing ( Fig. 1a–c , Extended Data Fig. 1b, c ). In the subsong stage ( n = 12 birds; defined by exponential distribution of syllable durations, before the emergence of protosyllables) roughly half the neurons generated bursts not temporally locked to syllable onsets ( Extended Data Fig. 
1d ), while the other half produced bursts that tended to occur at a particular latency relative to subsong syllable onsets ( Fig. 1a and Extended Data Fig. 1e–i ; 19/39 neurons exhibited syllable locking). The fraction of neurons locked to syllable onsets exhibited a gradual and significant increase throughout vocal development ( Fig. 1f ; correlation with song stage: r = 0.22, P < 10 −10 ; see Methods) until, in adult birds, virtually every projection neuron generated bursts precisely locked to syllables, as previously described 19 , 20 , 21 , 22 , 23 , 24 . Figure 1: Singing-related firing patterns of HVC projection neurons in juvenile birds. a , Neuron recorded in the subsong stage, before the formation of protosyllables (RA-projecting HVC neuron, HVC RA ; 51 dph; bird 7). Top, song spectrogram with syllables indicated above. Bottom, extracellular voltage trace. b , Neuron recorded in the protosyllable stage (HVC RA ; 62 dph; bird 2). Protosyllables indicated (grey bars). c , Neuron recorded after motif formation (HVC RA ; 68 dph; bird 8). d , Neuron bursting exclusively at bout onset (X-projecting HVC neuron, HVC X ; 61 dph; bird 2). e , Neuron bursting exclusively at bout offset (HVC RA ; 65 dph; bird 2). f , Developmental change in the fraction of neurons locked to syllable onsets (grey) and fraction of neurons with rhythmic bursting (black) (mean ± s.e.m.; n = 39, 135, 565, 378 and 32 neurons, respectively). g , Mean period of the HVC rhythmicity as a function of song stage ( n = 3, 70, 356, 298 and 25 neurons, respectively). *** P < 0.001, post-hoc comparison with the adult stage. Spectrogram vertical axis 500–8,000 Hz. Scale bars for panels a – c , 0.5 mV, 200 ms; panels d – e , 1 mV, 500 ms. Insets in panels a – c show zooms of the bursts indicated by asterisks; scale bar, 5 ms. Song development is characterized by a gradual change in song rhythm 33 , 37 , 38 . The subsong stage, which has little evidence of rhythmic song structure, ends with the emergence of a rhythmically produced protosyllable (5–10 Hz) 32 , 33 , 34 , 35 . This is followed by an increase in the period between repetitions of the same sound, attributable to the addition of new song syllables 33 . HVC exhibited parallel changes in rhythmicity. In the subsong stage, most projection neurons did not burst rhythmically ( Fig. 1a, f ; 3/39 neurons were rhythmic). In the protosyllable stage, roughly half of the projection neurons generated rhythmic bursts (5–10 Hz) ( Fig. 1b, f ; 70/135 neurons were rhythmic; period 169 ± 6.4 ms, mean ± s.e.m.). Such bursts were typically locked to rhythmic protosyllables, but were also commonly observed during portions of the song with less rhythmic syllable onsets, particularly early in the protosyllable stage ( Extended Data Fig. 2a–d ). On average, both the fraction of rhythmic HVC neurons and the period of the HVC burst rhythm gradually increased during the emergence of new syllable types and the formation of the song motif ( Fig. 1f, g ; correlation between song stage and fraction of rhythmic neurons: r = 0.28, P < 10 −10 ; correlation between song stage and period of burst rhythm: r = 0.57, P < 10 −10 ). A substantial fraction of projection neurons (285 of 1,117 neurons) in juvenile birds generated bursts related to song bouts—defined as epochs of continuous singing bounded by periods of silence (see Methods).
Bout-related neurons generated brief bursts of spikes immediately before bout onset (‘bout-onset’ neurons; 137/285 neurons) or after bout offset (98/285 neurons) ( Fig. 1d, e and Extended Data Fig. 2e–l ; an additional 50/285 neurons were active both before and after bouts). Growth of a neural protosequence We next wondered how the activity of HVC projection neurons is coordinated across the neural population during protosyllables. Multiple recordings in the same bird revealed that different neurons were active at different times with respect to protosyllable onsets ( Fig. 2a, b and Extended Data Figs 1n and 9k ; n = 3 birds, 54 neurons), with latencies spanning the duration of the protosyllable and the intervening gap (>90% burst coverage; Extended Data Fig. 2t ). These findings suggest that protosyllables are generated by a rhythmic protosequence—a repeating motor program composed of a continuous sequence of bursts in HVC. Figure 2: Rhythmic sequences in HVC during the protosyllable stage. a , Three neurons recorded from bird 2 during protosyllable stage (top: HVC X ; 63 dph; bottom: simultaneous recording of two neurons; both HVC X ; 64 dph; scale bar, 0.5 mV). b , Raster plot of 28 HVC projection neurons aligned to protosyllable onsets (sorted by latency; 57–64 dph, bird 2). Antidromically identified HVC RA neurons indicated by circles at right. c , Distribution of burst latencies relative to syllable onset in subsong stage (top), protosyllable stage (middle), and multi-syllable/motif stages (bottom), across all birds ( n = 19, 104 and 814 neurons, respectively). Black triangles indicate median burst times. We next examined the developmental emergence of this rhythmic protosequence. In the subsong stage ( Fig. 2c ; n = 19 neurons, 12 birds), bursts had a significantly earlier distribution of latencies compared to the broader distribution of burst latencies in the protosyllable stage ( n = 104 neurons, 13 birds; P = 0.02; 63% versus 43% of bursts before syllable onset in the subsong stage and protosyllable stage, respectively). Even though the range of latencies was narrower in subsong birds, different neurons recorded in the same bird were locked to syllable onsets at different latencies ( Extended Data Fig. 1f–i ). This suggests the existence of transient sequential activity, initiated just before syllable onset, but decaying within a few tens of milliseconds. This sequential activity appears to grow during the protosyllable stage to form longer sequences that can persist for more than a hundred milliseconds, throughout the duration of the protosyllable ( Fig. 2b, c ). Sequence splitting during syllable formation We next wondered how distinct sequences in HVC, each corresponding to a distinct adult syllable type, emerge during vocal learning. Here we hypothesize that new syllable types can emerge by the gradual splitting of a single protosequence. In this view, we imagine that the neural sequences underlying newly emerging syllable types would initially be largely overlapping, with neurons shared across the emerging syllables. Splitting would be associated with an increasing number of neurons selective for a particular emerging syllable type, and a decreasing fraction of shared neurons. To test this hypothesis, we recorded from HVC projection neurons ( n = 769) in 6 juvenile birds while they acquired multiple syllable types.
As a first example, we describe changes in the HVC population activity in a bird ( n = 375 projection neurons; bird 1) that developed two acoustically distinct syllable types (labelled β and γ) over the course of several days ( Fig. 3a, b ; β and γ eventually form adult syllables B and C, respectively). During the protosyllable stage (56–59 days post-hatch, dph), the majority of projection neurons participated in a rhythmic protosequence ( Extended Data Fig. 1n ; n = 14/16 neurons; for example, Fig. 3c ). After the emergence of syllable types β and γ (62–72 dph), many neurons were selectively active only during β or during γ, but not both ( Fig. 3d, f ; of 105 neurons active during either β or γ, 41 were β-specific and 42 were γ-specific). The bursts of these syllable-specific neurons exhibited a wide range of latencies, with spiking activity of neurons in each group spanning the entire duration of each syllable ( Fig. 3g ). Notably, we also observed a substantial population of neurons that were significantly active during both β and γ ( n = 22 ‘shared’ neurons; Fig. 3e–g ). Simultaneous recordings revealed the co-occurrence, in different neurons, of shared and specific firing patterns ( Fig. 3f , Extended Data Fig. 3a, b ). Figure 3: Shared and specific sequences during the emergence of multiple syllable types. All data are from bird 1. a , Song examples during the emergence of syllables β (red) and γ (blue). Panels show, from top to bottom, subsong stage (46 dph), rhythmic repetition of protosyllable α (grey bars; 58 dph), rhythmic repetition of variants of the protosyllable (β and γ; 60 dph), and further acoustic differentiation of β and γ (red and blue bars; 62 dph). b , Scatter plot of syllable duration versus mean pitch goodness (each dot is one syllable rendition; n = 400 syllables per day; unclassified syllables grey). c , Neuron recorded during protosyllable stage (HVC X ; 56 dph). d , β-specific neuron (HVC X ; 64 dph). e , Shared neuron active during both β and γ (HVC RA ; 68 dph). f , Simultaneously recorded pair of HVC X neurons: shared neuron (top) and γ-specific neuron (bottom; 71 dph). g , Raster of 105 projection neurons early in syllable differentiation showing shared and specific sequences. HVC RA neurons indicated by circles at right. h , Same as g but for 100 neurons recorded after differentiation of β and γ into adult syllables B and C. Scale bars for panels c – f , 0.5 mV, all have the same time scale. Shared neurons exhibited a number of striking characteristics. These neurons burst rhythmically with the same inter-burst interval as neurons recorded in the protosyllable stage ( Fig. 3e, f ; Extended Data Fig. 3f–j ). Shared neurons were active, as a population, at a wide range of latencies within emerging syllables ( Fig. 3g ), and crucially, for a given shared neuron, the bursts during β occurred at a similar latency as the bursts during γ ( Fig. 3g , Extended Data Fig. 4a–d ). Thus, the population of shared neurons generated the same continuous burst sequence during both β and γ. This shared sequence occurred even at times when there was a significant acoustic difference between the shared syllables ( Extended Data Fig. 5 ). We also found that the fraction of shared neurons later in development (81–112 dph) was significantly lower than in the earlier recordings ( Fig. 3h ; 10 shared and 90 specific neurons; P = 0.03).
Thus, the refinement of β and γ into the adult syllables B and C coincides with a decrease in the fraction of shared neurons, producing a gradual splitting of these representations into increasingly non-overlapping ‘daughter’ neural sequences. The tendency of bird 1 to alternate between syllables β and γ means that syllable-specific neurons had an inter-burst interval, and thus a period, that was twice as long as that observed in the earlier protosyllable stage ( Fig. 3c–f , Extended Data Fig. 3f–j ). Therefore, the increase in the period of neural activity through skipping or alternating cycles of an underlying rhythm seems to be a basis for the increase in song period during vocal learning 33 . Although our key findings are described above for bird 1, a similar pattern of HVC coding by shared and specific neurons was seen in a total of 6 birds for which recordings were made during the emergence of multiple syllable types (birds 1–6; 185 shared neurons and 496 specific neurons for 8 syllable pairs analysed). Across three birds in which neurons were also recorded in later song stages, there was a significant decrease in the fraction of shared neurons during syllable development ( n = 5 syllable pairs; P = 3 × 10 −6 ; birds 1, 2 and 4). Neurons exhibiting an increased burst period by skipping cycles of an underlying rhythm were observed in 4 of the 6 birds (birds 1, 3, 4 and 6). Splitting in other learning strategies Behavioural studies have shown that new syllable types can emerge using several distinct developmental strategies 32 , 33 , 36 , 39 , 40 . The bird described above (bird 1) used the ‘serial repetition’ strategy 32 and ‘sound differentiation in situ ’ 33 to develop two new syllables by alternating increasingly different variants of the protosyllable. Alternatively, birds can acquire multiple syllables simultaneously to form an entire motif (‘motif strategy’) 32 , or form new syllables at bout edges (onset or offset) 39 , 40 . We wondered if the splitting of neural sequences underlies these other strategies as well. Neural recordings were obtained in three birds (birds 1, 2 and 5) that exhibited bout-onset syllable formation. We focus here on bird 2 in which projection neurons were recorded throughout song development (57–84 dph). Tracking of syllable structure ( Extended Data Fig. 6 ) revealed that syllables A and B of the adult song derived from a common, rhythmically repeated protosyllable (labelled α; Fig. 4a, b ), and that syllable B arose from the first repetition of α at bout onset ( Fig. 4c, d ). The bout-onset syllable emerged as a distinct syllable type (labelled β) by fusion of this first α with a brief vocal element ε at bout onset ( Fig. 4c, d and Extended Data Fig. 6a–e ). Figure 4: Shared and specific sequences during the emergence of a new syllable at bout onset. All data are from bird 2. a , Schematic of syllable formation. b , Scatter plot of mean pitch goodness of syllables α (red) and β (blue) through development ( n = 100 syllables per day; horizontal jitter added to improve data visibility). c , Bout-onset neuron active before element ε (HVC RA ; 64 dph). d , New syllable β formed by fusion of ε and α. Neuron shared between α and β (HVC RA ; 65 dph). e , Neuron shared between α and β (HVC X ; 70 dph). f , A-specific neuron (HVC RA ; 80 dph). g , B-specific neuron (HVC RA ; 73 dph). h , Population raster plot of 43 projection neurons recorded early in the emergence of syllable β showing shared and specific sequences. 
i , Raster plot of 32 neurons recorded after differentiation of β and α into adult syllables B and A. Scale bars for panels c – g , 0.5 mV, all have the same time scale. To examine the neural mechanisms underlying the emergence of the new syllable β at bout onsets, we analysed the firing patterns of 125 HVC projection neurons. Before the emergence of syllable β, the majority of recorded projection neurons participated in a rhythmic protosequence ( Fig. 2b ; n = 28/35 neurons; 57–64 dph). A different subset of neurons was active at bout onsets ( Fig. 4c ; 4 of 35 neurons). After the reliable emergence of β at bout onsets, roughly half of projection neurons generated bursts during both syllables α and β (65–72 dph; Fig. 4d, e ; n = 22 ‘shared’ neurons; 21 ‘specific’ neurons). These shared neurons produced nearly identical sequences during these two syllables ( Fig. 4h , Extended Data Fig. 4c ). Later in song development (73–84 dph), we observed a smaller fraction of shared neurons ( n = 4 ‘shared’ neurons; P = 5 × 10 −4 ), and a correspondingly larger fraction of syllable-specific neurons ( Fig. 4f, g, i ; n = 28 ‘specific’ neurons), consistent with a gradual splitting of the protosequence into increasingly non-overlapping ‘daughter’ sequences. Evidence for sequence splitting during bout-onset differentiation was also observed in birds 1 and 5 ( Extended Data Fig. 7 ). Note that the bout-onset differentiation in bird 1 occurred after the earlier emergence of the syllables β and γ ( Fig. 3 ), suggesting that new syllables may emerge in a hierarchical process—that is, by the splitting of sequences that are themselves the product of an earlier splitting process ( Extended Data Fig. 7 ). We examined whether neural sequence splitting also underlies the ‘motif strategy’ of song learning in two birds (birds 3 and 4; Extended Data Figs 8 and 9 ). In both birds, neural recordings showed the existence of rhythmically bursting neurons in the protosyllable stage ( Extended Data Figs 8e and 9e, f ). After the emergence of multiple syllable types, every syllable in the emerging motif had at least one neuron that was shared with another syllable at similar latencies ( Extended Data Figs 8f–j and 9g–o ), consistent with the view that all of these syllables arose from the simultaneous splitting of a common protosequence. Mechanistic model and discussion We propose a mechanistic model of learning in the HVC network to describe how sequences emerge during song development. This model is based on the idea that sequential bursting results from the propagation of activity through a continuous synaptically connected chain of neurons within HVC 21 , 41 , 42 , 43 , 44 , 45 , 46 , 47 . It also captures non-uniformities such as increased burst density at syllable onsets, as formulated in a perspective of HVC function emphasizing vocal gestures 22 . Modelling studies have shown that a combination of two synaptic plasticity rules—spike-timing dependent plasticity (STDP) and heterosynaptic competition—can transform a randomly connected network into a feedforward synaptically connected chain that generates sparse sequential activity 43 , 44 . We hypothesize that the same mechanisms can drive the formation of a rhythmic protosyllable chain, and subsequently split this chain into multiple daughter chains for different syllable types. To test this hypothesis, we constructed a simple network of binary units representing HVC projection neurons 44 .
The model neurons are initially connected with random excitatory weights, representing the subsong stage. We hypothesize that a subset of HVC neurons receives an external input at syllable onsets and serves as a seed from which chains grow during later learning stages 43 , 45 . Before learning, activation of these seed neurons produced a transiently propagating sequence of network activity that decayed rapidly (within tens of milliseconds; Fig. 5a ). Figure 5: A neural model of sequence formation and splitting in HVC. a – d , Top, network diagrams of participating neurons (darker lines indicate stronger connections; magenta boxes indicate seed neurons). Bottom, raster plot of neurons showing shared and specific sequences. Neurons sorted by relative latency. Magenta arrows indicate groups of seed neurons. a , Subsong stage: activation of seed neurons produces a rapidly decaying burst of sequential activity. b , Protosyllable stage: rhythmic activation of seed neurons induces formation of a protosyllable chain. c , Alternating activation of red and blue seed neurons and synaptic competition drives the network to split into two chains (specific neurons, red and blue; shared neurons, black). d , Network after chain splitting. e , Distribution of model burst latencies during subsong, protosyllable stage and chain splitting stage (early and late combined). In the next stage, the network is trained to produce a single protosyllable by activating seed neurons rhythmically (100 ms period). The connections are modified according to the learning rules described above 43 , 44 . As a result, connections were strengthened along the population of neurons sequentially activated after syllable onsets, resulting in the growth of a feedforward synaptically connected chain that supported stable propagation of activity ( Fig. 5b ). We found that this single chain could be induced to split into two daughter chains by dividing the seed neurons into two groups that were activated on alternate cycles of the rhythm ( Fig. 5c, d and Supplementary Video 1 ). Local inhibition 48 and synaptic competition were also increased (see Methods). During the splitting process, we observed neurons specific to each of the emerging syllable types, as well as shared neurons that were active at the same latencies in both syllable types ( Fig. 5c ). Just as observed in our data, over the course of development the distribution of burst latencies in the model continued to broaden ( Fig. 5e ), and the fraction of shared neurons decreased ( Fig. 5c, d ). The average period of rhythmic bursting in model neurons increased during chain splitting as neurons became ‘specific’ for one emerging syllable type and began to participate only on alternate cycles of the protosyllable rhythm ( Fig. 5d and Extended Data Fig. 10g, h ). Our model can reproduce other strategies by which birds learn new syllable types. We implemented bout-onset differentiation in the model by also including a population of seed neurons activated at bout onsets (see Figs 1d and 4c , and Extended Data Fig. 10a ). This caused the protosyllable chain to split in such a way that one daughter chain was reliably activated only at bout onsets, while the other daughter chain was active only on subsequent syllables ( Extended Data Fig. 10a–d and Supplementary Video 2 ). Our model was also able to simulate the simultaneous emergence of a three-syllable motif (‘motif strategy’) by dividing the seed neurons into three subpopulations ( Extended Data Fig.
10e–h ). Our data and modelling support the possibility of syllable formation by mechanisms other than sequence splitting. For example, in several birds, a short vocal element emerged at bout onsets that did not seem to differentiate acoustically from the protosyllable (and thus was not bout-onset differentiation; for example, ‘E’ in bird 1, Extended Data Fig. 7a ; or ‘C’ in bird 2, Extended Data Fig. 6a, b ). We found that, by using different learning parameters, our model allows bout-onset seed neurons to induce the formation of a new syllable chain at bout onset, rather than inducing bout-onset differentiation ( Extended Data Fig. 10i–k ). In summary, our model of learning in a simple sequence-generating network captures transformations that underlie the formation of new syllable types via a diverse set of learning strategies. Possible role of sequence splitting The process of splitting a prototype neural sequence allows learned components of a prototype motor program to be reused in each of the daughter motor programs. For example, one of the earliest aspects of vocal learning is the coordination between singing and breathing 35 , specifically, the alternation between vocalized expiration and non-vocalized inspiration typical of adult song 49 . The protosequence in HVC would allow the bird to learn the appropriate coordination of respiratory and vocal musculature. Duplication of the protosequence through splitting would result in two ‘functional’ daughter sequences, each already capable of proper vocal/respiratory coordination, and each suitable as a substrate for rapid learning of a new syllable type. This proposed mechanism resembles a process thought to underlie the evolution of novel gene functions: gene duplication followed by divergence through independent mutations 50 . Similarly, for the acquisition of complex behaviours, the duplication of neural sequences by splitting, followed by independent differentiation through learning, may provide a mechanism for constructing complex motor programs. Methods Animals We used juvenile male zebra finches ( Taeniopygia guttata ) 44–112 days post-hatch (dph) singing undirected song ( n = 32 birds). Animals were not divided into experimental groups; thus, randomization and blinding were not necessary. No statistical methods were used to predetermine sample size. Birds were obtained from the Massachusetts Institute of Technology zebra finch breeding facility (Cambridge, Massachusetts). The care and experimental manipulation of the animals were carried out in accordance with guidelines of the National Institutes of Health and were reviewed and approved by the Massachusetts Institute of Technology Committee on Animal Care. All the juvenile birds were raised by their parents in individual breeding cages until 38 ± 5.2 dph (mean ± s.d.) when they were removed and were singly housed in custom-made sound isolation chambers (maintained on a 12:12 h day-night schedule). For a subset of the birds (birds 1, 2 and 4), additional tutoring was carried out after removal from the breeding cages to facilitate song imitation. This was done by playback of the tutor song through a speaker (20 bouts per day). Additional tutoring was done for 12 days for bird 1, 7 days for bird 2, and 18 days for bird 4. Bird identification key: bird 1, to3965; bird 2, to3779; bird 3, to3017; bird 4, to5640; bird 5, to3396; bird 6, to2309; bird 7, to3412; bird 8, to3567; bird 9, to2462; bird 10, to2331; bird 11, to2427; bird 12, to3352. 
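As a concrete illustration of the plasticity rules invoked in the model described under ‘Mechanistic model and discussion’ (Hebbian, STDP-like potentiation plus heterosynaptic competition acting on a binary network with rhythmically driven seed neurons), the following Python toy applies those rules in a drastically simplified form. It is not the published implementation: it omits local inhibition, probabilistic participation and the staged training protocol, and every constant is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # binary units
W = rng.uniform(0.0, 0.05, (N, N))        # random initial excitatory weights
np.fill_diagonal(W, 0.0)
seed_a, seed_b = np.arange(0, 5), np.arange(5, 10)   # two seed groups
theta = 0.5                               # firing threshold
w_max, w_in_max = 0.3, 1.0                # single-synapse cap, summed-input cap
eta = 0.02                                # learning rate

x_prev = np.zeros(N)
for t in range(4000):                     # one step ~= 10 ms (illustrative)
    drive = W @ x_prev
    if t % 10 == 0:                       # ~100 ms protosyllable rhythm
        # alternate the two seed groups on successive cycles (splitting stage)
        drive[seed_a if (t // 10) % 2 == 0 else seed_b] += 1.0
    x = (drive > theta).astype(float)
    # STDP-like potentiation: pre active at t-1, post active at t
    W += eta * np.outer(x, x_prev)
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, w_max)
    # heterosynaptic competition: cap total synaptic input to each unit
    row = W.sum(axis=1, keepdims=True)
    W *= np.minimum(1.0, w_in_max / np.maximum(row, 1e-12))
    x_prev = x
```

Under these rules, repeated seed activation strengthens feedforward connections along whichever neurons happen to fire after each cycle onset, while the input cap forces synapses onto the same unit to compete, which is the ingredient the authors identify as driving chain formation and splitting.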
To compare the activity of HVC projection neurons in juvenile birds with that of adult birds, we also analysed neurons recorded in adults (>120 dph, n = 3 birds), which included a reanalysis of previously published HVC recordings performed in adult male zebra finches singing directed song 20 . Song recordings Songs were recorded with Sound Analysis Pro 51 or custom-written MATLAB software (A. Andalman), which was configured to ensure triggering of recordings on all quiet vocalizations of juvenile birds 27 . The vertical axis range for all spectrograms is 500–8,000 Hz. Classification of song stages We classified each day of juvenile singing into one of four song stages: subsong stage, protosyllable stage, multi-syllable stage, and motif stage ( Extended Data Fig. 1a ). Subsong stage (48 ± 4 dph, median ± inter-quartile range, IQR) is defined as having a syllable duration distribution well-fit by an exponential distribution 34 , 35 , with an upper limit for the Lilliefors goodness-of-fit statistic of 6. Following the subsong stage, birds enter the protosyllable stage (58 ± 10 dph, median ± IQR) characterized by the presence of syllables with consistent timing reflected in a peak in the distribution of syllable durations 32 , 33 , 34 , 35 . The onset of the protosyllable stage was defined here as the first day in which the syllable duration distribution deviated from an exponential distribution (Lilliefors goodness-of-fit statistic greater than 6). Following the protosyllable stage, birds transition to the multi-syllable stage (62 ± 12 dph, median ± IQR) in which multiple distinct syllable types are visible in the song spectrogram and as multiple clusters in a scatter plot of syllable features 52 (for example, Fig. 3a, b ; 62 dph). The motif stage (73 ± 21 dph, median ± IQR) was defined by the production of a sequence of syllables in a relatively fixed order 31 . Finally, songs recorded in birds older than 120 dph were assigned as adult stage. A slightly older cutoff than the typical definition of adulthood in zebra finches (~90 dph) 14 was used, because some of our birds in the 90–120 dph range continued to undergo some small developmental changes, as has been reported 31 . Syllable segmentation and bout extraction Syllable segmentation of the juvenile song was done based on the song power in a spectral band between 1 and 4 kHz, as described previously 27 , 34 , 35 . In a few cases, cutoff frequencies of the band-pass filters were adjusted to avoid the inclusion of high-frequency inspiratory sounds 35 , 53 . Introductory notes were removed manually to avoid including HVC neurons that are rhythmically active during these elements 54 . Song bouts were defined as continuous sequences of syllables separated by gaps no longer than 300 ms 35 . Bout onset was defined as the onset of the first syllable in the bout, and bout offset was defined as the offset of the last syllable in the bout. Syllable segmentation based on the song rhythmicity (‘phase segmentation’) For bird 3 (‘motif strategy’), it was difficult to segment syllables consistently using previous methods based on setting a threshold on the sound amplitude 27 , 34 , 35 . To overcome this limitation, we segmented syllables based on the phase of the rhythmicity in the song (‘phase segmentation’). The song rhythm, quantified as the spectrum of the sound amplitude during singing 38 , exhibited a peak around 9 Hz ( Extended Data Fig. 8c ).
To estimate the instantaneous phase of this rhythm, we first band-pass filtered the sound amplitude ( Extended Data Fig. 8c, d ; second-order IIR resonator filter with peak at 9 Hz and −3 dB half-bandwidth of 3 Hz; MATLAB command iirpeak). The band-pass filtered signal was then processed using the Hilbert transform (MATLAB command hilbert) to compute the instantaneous amplitude and phase ( Extended Data Fig. 8d ). Next, we set a threshold on this instantaneous amplitude to find the rhythmic part of the song. Finally, within this rhythmic part, song was segmented by detecting threshold crossings of the instantaneous phase ( Extended Data Fig. 8d , bottom). Phase segments that contained no sounds or calls were manually removed. Similarly, phase segmentation (band-pass filter with peak at 10 Hz and half-bandwidth of 3 Hz) was used to segment the song during the protosyllable stage for bird 4 ( Extended Data Fig. 9a, e, f ). Note that this method is best suited for segmenting songs that have strong rhythmic modulation of song amplitude, but in which syllable boundaries are not strongly rhythmic. This appeared to be typical of birds employing the ‘motif strategy’ 32 . Syllable classification and labelling Protosyllables were defined by their characteristic durations as has been described previously 34 , 35 . In short, to identify the protosyllables, we first subtracted the best-fit exponential distribution (using 200–400 ms) from the syllable duration distribution, and fitted a Gaussian distribution to this residual. Protosyllables were defined as syllables having durations within two standard deviations from the mean of this Gaussian distribution. We labelled protosyllables using the Greek letter ‘α’ in all our birds for consistency. To label the emerging syllables in the juvenile song, we used the Greek letters β, γ, δ, and ε. In contrast, to label the syllables in the adult motif, we used the capital letters of the Latin alphabet A, B, C, etc. For birds in which the song learning trajectory was tracked developmentally, we labelled the syllables such that the correspondence between the juvenile syllables and adult syllables is straightforward: for example, α becomes A, β becomes B, γ becomes C, δ becomes D, and ε becomes E. Note that this labelling scheme leads to a slightly unconventional labelling of adult song in the sense that a motif can have letters in a reverse order (for example, CBA in Fig. 4f, g ; Extended Data Fig. 6a ), or a motif might not have a syllable A (for example, EDCB in Extended Data Fig. 7a ). Syllable labelling was done manually by visual inspection of the song spectrogram; this was done blind with respect to the neural activity. The existence of multiple distinct syllable types was confirmed by calculating the syllable duration and acoustic features commonly used to analyse birdsong syllables 51 , 55 , and visualizing the clusters of syllables in a two-dimensional space 52 ( Fig. 3b , Extended Data Figs 8b and 9d ). In some cases, syllable order was used as an additional indicator of syllable identity (for example, Extended Data Fig. 7a , 70 dph; Extended Data Fig. 8a , 51 dph; Extended Data Fig. 9a , 59 dph). In bird 1, syllables β and γ were labelled manually by visual inspection of the song spectrogram ( Fig. 3a ).
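A rough Python analogue of the phase segmentation above (the original analysis used MATLAB's iirpeak and hilbert) might look as follows; the sampling rate, the amplitude threshold, the use of zero-phase filtering and the choice of the phase wrap as the segmentation point are our assumptions.

```python
import numpy as np
from scipy.signal import iirpeak, filtfilt, hilbert

def phase_segment(amp, fs, f0=9.0, half_bw=3.0, amp_thresh=0.1):
    """Segment song by the phase of its ~9 Hz amplitude rhythm.

    amp: sound-amplitude envelope sampled at fs (Hz)
    f0, half_bw: resonator peak frequency and -3 dB half-bandwidth (Hz)
    Returns sample indices of phase-defined boundaries in the rhythmic part.
    """
    # second-order IIR resonator; Q = f0 / (full -3 dB bandwidth)
    b, a = iirpeak(f0, Q=f0 / (2.0 * half_bw), fs=fs)
    filt = filtfilt(b, a, amp)          # zero-phase band-pass (an assumption)
    analytic = hilbert(filt)
    inst_amp = np.abs(analytic)         # instantaneous amplitude
    inst_phase = np.angle(analytic)     # instantaneous phase, wraps at +/- pi
    rhythmic = inst_amp > amp_thresh    # keep the rhythmic part of the song
    # boundaries: downward wrap of the phase (one crossing per rhythm cycle)
    wraps = np.where(np.diff(inst_phase) < -np.pi)[0] + 1
    return [i for i in wraps if rhythmic[i]]
```

Segments containing no sound would still be discarded by hand afterwards, as described in the text.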
Since characterizing shared neurons and specific neurons depends on the reliable labelling of syllables, we took a conservative approach and only labelled syllables that were clearly identifiable and did not label the syllables that were ambiguous (fraction of syllables labelled as β or γ during 62–66 dph: 70 ± 5.5%, mean ± s.d.). We then estimated the error rate of our labelling procedure by plotting the labelled syllables ( n = 200 syllables per type on each day) in a two-dimensional space of syllable duration and mean pitch goodness ( Fig. 3b ), and obtained a decision boundary using linear discriminant analysis. We used the mismatch between manual labelling and feature-based labelling to estimate the error rate for syllables β and γ. The error rate during the first five days of syllable differentiation (62–66 dph), when the labelling was most difficult, was only 1.1% on average (range: 0.25–3.0%). For the second round of differentiation in bird 1, syllable order was used to assist in the labelling of syllables in early stages when syllables ‘B’ and ‘D’ were not easily distinguishable based on acoustic differences. Because these syllables underwent bout-onset differentiation, the first β after bout onset was labelled ‘D’; later renditions of β in the bout were labelled ‘B’ ( Extended Data Fig. 7a ). In bird 2, several emerging syllables could be easily distinguished based on syllable durations ( Extended Data Fig. 6d ). Specifically, syllables whose durations were 110–160 ms and 180–250 ms were defined as α and β, respectively. Syllables that were 10–75 ms in duration were labelled γ if they were followed by a β, and labelled ε otherwise. Chronic neural recordings Single-unit recordings of HVC projection neurons during singing were carried out using a motorized microdrive described previously 56 , 57 . Single units were confirmed by the existence of a refractory period in the inter-spike interval (ISI) distribution ( Extended Data Fig. 1b ). Neurons that were active only during distance calls and not during singing 20 were excluded from the analysis. In addition, neurons recorded for less than 5 s of singing were excluded since the short recording duration did not allow us to reliably quantify the activity pattern of these neurons. Antidromic identification of HVC projection neurons was carried out with a bipolar stimulating electrode implanted in RA and Area X (single pulse of 200 μs every 1 s; current amplitude: 50–500 μA) 19 , 20 , 57 , 58 , 59 . A subset of antidromically identified projection neurons was further validated with collision testing 19 , 20 , 57 , 58 , 59 . A different subset of single units was identified as putative projection neurons based on sparse bursting, but could not be antidromically identified because they did not respond to antidromic stimulation or were lost before antidromic identification could be carried out (211 of 1,149 neurons). These neurons were included in the data set as unidentified HVC projection neurons (HVC p ). Analysis of neural activity Spikes were sorted offline using custom MATLAB software (D. Aronov). Definition of bursts HVC projection neurons exhibited bursts of action potentials during singing ( Fig. 1a–c ). The bursting nature of these neurons was evident in the inter-spike interval (ISI) distribution during singing, which exhibited two peaks with an inter-peak minimum near 30 ms ( Extended Data Fig. 1b ). We defined a ‘burst’ as a continuous group of spikes separated by intervals of 30 ms or less.
Thus, by definition, bursts are separated from other spikes by intervals greater than 30 ms. Note that a single spike separated by more than 30 ms from both the preceding and the following spike was also counted as a burst. Burst time was defined as the centre of mass of all the spikes within the burst. Burst width was defined as the interval between the first and the last spike in a burst ( Extended Data Fig. 1c , top). The firing rate during a burst was defined as the reciprocal of the mean inter-spike interval in the burst ( Extended Data Fig. 1c , bottom). For the calculation of burst width and firing rate during bursts, bursts composed of a single spike were excluded. Syllable-related neural activity To analyse the temporal relation between neural activity and song syllables, we aligned the spike times to syllable onsets and constructed a rate histogram (1 ms bin, smoothed over 20 bins; range: ± 0.5 s from syllable onsets). The peak in this rate histogram was found between 50 ms before syllable onset and 200 ms after syllable onset. To test the significance of this peak, surrogate histograms were created by adding different random time shifts to the spike times on each trial 60 . Random time shifts were drawn from a uniform distribution over ± 0.5 s. The peak of this surrogate histogram was recorded, and this shuffling procedure was repeated 1,000 times; P values were obtained by analysing the frequency with which the peaks of surrogate data were larger than that of the real data, and P < 0.05 was considered significant. To visualize the population activity associated with protosyllables, we constructed a population raster plot by choosing 20 protosyllable renditions for which each neuron was most active. Different neurons were plotted in different colours ( Fig. 2b , Extended Data Figs 1n and 9k ). For all the other population raster plots associated with identified syllables, 20 random renditions were chosen for display. For all population raster plots, syllable duration from each rendition was linearly time-warped to the mean duration of the syllable. Spike times were warped by the same factor. Bout-related neural activity A subset of HVC projection neurons exhibited bout-related activity: bursting before bout onsets and/or after bout offsets ( Fig. 1d, e and Extended Data Fig. 2e–l ). To quantify the pre-bout activity, we generated histograms aligned to bout onsets ( Extended Data Fig. 2f, g ) and found the peak in the histogram in a 300 ms window before bout onset. We considered a neuron to be exhibiting ‘pre-bout activity’ if the size of this peak was significant ( P < 0.05) compared to peaks obtained from the shuffled surrogate histograms (identical to the procedure described earlier in the section Syllable-related neural activity). To eliminate the possibility of including syllable-related activity as bout-related activity, we did not consider a neuron to be exhibiting pre-bout activity if the neuron showed a peak in the bout-onset aligned histogram and a peak at a similar latency (less than 25 ms apart) in the syllable-onset aligned histogram. We considered a neuron to be exhibiting ‘post-bout activity’ if there was a significant peak in the bout-offset aligned histogram ( Extended Data Fig. 2j, k ) in a 300 ms window after bout-offset.
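The burst definition and the time-jitter significance test described above could be sketched in Python as below. Bin width, smoothing window, search window and jitter range follow the text; everything else, including the function names, is illustrative.

```python
import numpy as np

def detect_bursts(spike_times, max_isi=0.030):
    """Group spikes into bursts: runs of spikes separated by <= 30 ms.
    Returns a list of (burst_time, width, rate); burst time is the centre of
    mass of the spikes, and width/rate are NaN for single-spike bursts."""
    s = np.sort(np.asarray(spike_times, dtype=float))
    if s.size == 0:
        return []
    groups = np.split(s, np.where(np.diff(s) > max_isi)[0] + 1)
    out = []
    for g in groups:
        if g.size > 1:
            out.append((g.mean(), g[-1] - g[0], 1.0 / np.mean(np.diff(g))))
        else:
            out.append((g.mean(), np.nan, np.nan))
    return out

def peak_is_significant(aligned_spikes, n_shuffle=1000, jitter=0.5, seed=0):
    """Jitter test for a syllable-onset-aligned rate-histogram peak.
    aligned_spikes: list of arrays of spike times (s) relative to onset."""
    rng = np.random.default_rng(seed)
    edges = np.arange(-0.5, 0.5 + 1e-9, 0.001)        # 1 ms bins
    kern = np.ones(20) / 20.0                         # 20-bin boxcar smoothing

    def peak(trials):
        h = np.histogram(np.concatenate(trials), bins=edges)[0].astype(float)
        h = np.convolve(h, kern, mode='same')
        centres = 0.5 * (edges[:-1] + edges[1:])
        win = (centres >= -0.05) & (centres <= 0.2)   # -50 ms to +200 ms
        return h[win].max()

    observed = peak(aligned_spikes)
    exceed = sum(
        peak([t + rng.uniform(-jitter, jitter) for t in aligned_spikes]) >= observed
        for _ in range(n_shuffle))
    return exceed / n_shuffle < 0.05
```

Note that each trial receives a single random shift, matching the per-trial shuffling procedure in the text.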
Quantification of the rhythmic neural activity To quantify the rhythmic neural activity of HVC projection neurons, we used four different methods: inter-burst interval, spike-train autocorrelation, spectrum of the spike train, and cepstrum of the spike train. Only spikes that were produced during singing (that is, between the onset of the first syllable and the offset of the last syllable in the bout) were used for the calculation of these measures. (1) Inter-burst interval. Intervals between burst times were calculated and the peak between 80–1,000 ms was found. (2) Spike-train autocorrelation. To quantify the second-order statistics of the firing pattern of HVC neurons, spike-train autocorrelation, expressed as a conditional firing rate 61 , was calculated, and the peak between 80–1,000 ms was found. The width of the centre peak indicates the width of bursts, and multiple side lobes with regular intervals indicate rhythmic bursting. (3) Spectrum of the spike train. Rhythmicity of the single-unit activity was also quantified in the frequency domain using multi-taper spectral analysis of spike trains treated as point processes 62 . We used the Chronux software to calculate the spectrum for the spike trains 63 , 64 . First, bouts of singing were segmented into non-overlapping analysis windows of 1.5 s long, and then the spectrum for each window was calculated using multi-taper spectral analysis with time-bandwidth product NW = 3/2 and the number of tapers K = 2. To obtain the mean spectrum for a given neuron, spectra calculated from all the analysis windows were averaged. Finally, we found the peak in the mean spectrum within the range 2–15 Hz. (4) Cepstrum of the spike train. HVC projection neurons typically exhibited brief rhythmic bursts with precise inter-burst intervals ( Fig. 1b, c ). Thus, the spectrum of the spike train tended to have peaks at multiples of the fundamental frequency. To represent these burst trains that have regular intervals in a more compact way, we calculated the cepstrum (a technique commonly used in speech processing to extract the period of glottal pulses) of the spike train, defined as the inverse Fourier transform of the log spectrum 65 , and found the peak in the cepstrum between 80–1,000 ms. To assess the significance of the peaks in these four measures, we compared the distribution of peak amplitude obtained from the real data with that of the surrogate data obtained by shuffling the bursts times. For this shuffling procedure, we first identified all the bursts during a bout of singing as described above. We then randomly placed bursts sequentially in an interval that has the same duration as the song bout; when spikes from two bursts were closer than 30 ms, we repeated the random placement until they were spaced by more than 30 ms. Note that this randomization procedure only shuffles the burst times and preserves both the number of bursts and the ISIs within bursts. Then, all four metrics listed above were calculated by applying the same method to these surrogate spike trains. This shuffling was repeated (1,000 times for the IBI and autocorrelation, 100 times for the spectrum and cepstrum) and the P values of the peak were calculated by analysing the frequency at which the peaks from the surrogate spike trains were larger than the peak obtained from real data. A neuron was considered to exhibit ‘rhythmic’ bursting if it had significant peaks in at least two of the four metrics. 
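Two of the four rhythmicity metrics above, the spike-train autocorrelation peak and the cepstrum peak, could be sketched as follows; for brevity a plain periodogram stands in for the multi-taper (Chronux) spectral estimate, and the binning choices are assumptions.

```python
import numpy as np

def rhythm_periods(spike_times, duration, bin_s=0.001, lag_range=(0.080, 1.0)):
    """Return (autocorrelation-peak lag, cepstrum-peak quefrency) in seconds.
    spike_times: spike times (s) within a singing epoch of length `duration`."""
    n = int(np.ceil(duration / bin_s))
    train = np.zeros(n)
    idx = (np.asarray(spike_times) / bin_s).astype(int)
    train[idx[idx < n]] = 1.0
    train -= train.mean()
    # autocorrelation at positive lags (O(n^2); written for clarity, not speed)
    ac = np.correlate(train, train, mode='full')[n - 1:]
    lags = np.arange(n) * bin_s
    sel = (lags >= lag_range[0]) & (lags <= lag_range[1])
    period_ac = lags[sel][np.argmax(ac[sel])]
    # cepstrum: inverse FFT of the log power spectrum
    spec = np.abs(np.fft.rfft(train)) ** 2 + 1e-12    # epsilon avoids log(0)
    ceps = np.fft.irfft(np.log(spec))
    quef = np.arange(ceps.size) * bin_s
    selc = (quef >= lag_range[0]) & (quef <= lag_range[1])
    period_cep = quef[selc][np.argmax(ceps[selc])]
    return period_ac, period_cep
```

As in the text, a neuron would only be called rhythmic when peaks from several such metrics each survive the burst-time shuffling test.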
The period of the rhythm was defined as the location of the largest peak of spike-train autocorrelation between 80–1,000 ms. Quantification of the probabilistic neural activity during the protosyllable stage ( Extended Data Fig. 2p ) Although many HVC projection neurons recorded in the juvenile bird exhibited rhythmic bursts, these bursts did not occur reliably on every cycle of the rhythm; instead, neurons participated probabilistically ( Fig. 2a ). To quantify the degree of participation, we first extracted the protosyllables based on syllable duration (see earlier section Syllable classification and labelling) and examined the fraction of protosyllables in which at least one spike occurred (time-window from 30 ms before protosyllable onset to 10 ms after protosyllable offset). The fraction of protosyllables in which the neuron was active was obtained for all the HVC projection neurons recorded during the protosyllable stage that showed significant rhythmic bursting ( Extended Data Fig. 2p ). Analysis of simultaneously recorded pairs of neurons ( Extended Data Fig. 2q, r ) To test whether probabilistic bursting of neurons in the protosyllable stage is coordinated across many neurons, we analysed the correlation between pairs of simultaneously recorded neurons ( Fig. 2a , bottom). This analysis was restricted to pairs of neurons that were rhythmically bursting ( n = 11 pairs, 3 birds). Bursting activity of each neuron was converted to a binary string corresponding to its participation in each protosyllable (for the definition of protosyllables, see earlier section Syllable classification and labelling). The activity of a neuron was assigned a ‘1’ for a protosyllable if the neuron exhibited activity in a time-window from 30 ms before protosyllable onset to 10 ms after protosyllable offset, and ‘0’ if it did not. Only activity during protosyllables was analysed to avoid including the highly variable subsong syllables, which are likely generated by circuits outside HVC 27 , 34 . For simultaneously recorded pairs of neurons, this procedure resulted in two binary strings corresponding to the protosyllable-related activity of each neuron. We then calculated the coefficient of determination r 2 by taking the square of the Pearson’s correlation coefficient r between the two binary strings. The distribution of the coefficient of determination is shown in Extended Data Fig. 2q (median r 2 = 0.072, 11 pairs). We also carried out a mutual information analysis to quantify whether the activity of one neuron was predictive of the set of protosyllables for which the other neuron was active. Using the same binary representation described above, we calculated the joint probability distribution describing the four possible states of activity (neither neuron spikes, neuron A spikes, neuron B spikes, both neurons spike). The mutual information was computed from this joint distribution ( Extended Data Fig. 2r , median mutual information = 0.056 bits, 11 pairs). Both the correlation and mutual information were extremely low, suggesting that different projection neurons participated in relatively independent sets of protosyllables. These findings suggest that individual projection neurons participate probabilistically and largely independently in an ongoing rhythmic protosequence within HVC. Analysis of coverage by HVC projection neuron bursts ( Extended Data Fig.
2s, t ) We wondered whether projection neuron bursts effectively span the entire duration of juvenile song syllables, or whether bursts are highly localized to specific times, leaving other times in the syllable unrepresented 22 . It is clear from the syllable-aligned raster plots that some syllables were completely covered by bursts (for example, Fig. 3h , syllable ‘C’), while other syllables showed some gaps in the burst coverage (for example, Fig. 4i , syllable ‘A’). To further quantify this aspect of the HVC representation during singing, we analysed the fraction of time within the syllables of juvenile birds that was ‘covered’ by the recorded projection neuron bursts (‘covered fraction’). This analysis was restricted to syllables with more than 10 associated bursts. We first determined the region of the song syllable covered by each HVC projection neuron burst. We generated a histogram of syllable onset- or offset-aligned spike times recorded from a single neuron over every recorded rendition of the song syllable. Candidate burst events were initially identified by smoothing the histogram (9 ms sliding square window, 1 ms steps) and setting a threshold to define a window in which to analyse burst spikes (2 Hz for protosyllable stage birds; 10 Hz threshold for older juveniles). To eliminate low-probability spike events, we only considered bursts for which spiking activity (at least one spike) occurred in the candidate burst window on at least 25% of the renditions for that syllable. Bursts were included only if they occurred between 30 ms before syllable onset and 10 ms after syllable offset. For candidate bursts that met these criteria, all spikes occurring in the burst window were considered as contributing to that burst. Based on earlier measurements of postsynaptic currents and potentials of HVC and RA neurons 66 , each HVC spike in the burst window was conservatively assumed to exert a postsynaptic effect lasting no more than 5 ms. Thus, each spike in the data set was replaced with a 5 ms postsynaptic square pulse (beginning at the spike time). We considered a region of the syllable to be ‘covered’ by this burst if at least three of these postsynaptic pulses overlapped at that time within the burst, across renditions of the syllable. This procedure yielded a small ‘patch’ of time covered by the burst. The patches associated with each different neuron were combined with a logical ‘OR’ operation to determine the total coverage time of the syllable (again in a window from 30 ms before syllable onset to 10 ms after syllable offset). The covered time was divided by the duration of the syllable window to determine the covered fraction. Only syllables that had more than 10 neurons bursting within the syllable window were analysed. This criterion excluded syllables from bird 3 (shown in Extended Data Fig. 8 ), from which relatively few neurons were recorded. While most syllables had nearly complete burst coverage (>90%), one syllable had coverage of only 73% ( Extended Data Fig. 2t ), which could potentially be due to the relatively smaller number of neurons recorded in this bird. Thus, we asked whether the measured coverage is consistent with sparse sampling of the recorded bursts from a large number of uniformly placed bursts. To simulate this, we calculated the covered fraction for 1,000 surrogate data sets in which the ‘covered patches’ for each burst were randomly shuffled within the syllable.
A random offset was added to the time of each patch, and a circular shift was used, allowing the patches to wrap around the edges of the syllable window. The distribution of covered fractions was determined over all shuffled surrogate data sets, and the 2.5–97.5 percentiles (95% confidence interval) of this distribution were determined (shown as vertical grey bars in Extended Data Fig. 2t ). For all syllables, the observed covered fraction was consistent with that expected for random sampling from a uniform underlying distribution of burst times. Shared and specific neurons To examine whether a given HVC projection neuron was active during multiple syllable types (‘shared’ neuron) or was active only during a specific syllable type (‘specific’ neuron), we first constructed a syllable-onset aligned histogram (1 ms bin, smoothed over 20 bins) for each syllable type. Spike times were linearly time warped 67 to the mean duration of that syllable to reduce the trial-to-trial variability in the spike timing associated with the variation in the syllable duration. Next, we found the peak in the firing rate histogram in the interval between 30 ms before syllable onset and 10 ms after syllable offset. We visually inspected the syllable-aligned histograms, and adjusted the interval if necessary to avoid the same burst being detected twice (that is, being associated with an offset of one syllable and an onset of the next syllable). The significance of this peak was determined by comparing it with the peak size obtained from the shuffled histogram using the same method described earlier (in the section Syllable-related neural activity). We defined ‘shared’ and ‘specific’ neurons in the context of a particular syllable differentiation process (for example, β and γ from bird 1 in Fig. 3 ; α and β from bird 2 in Fig. 4 ; B and D from bird 1 in Extended Data Fig. 7 ). ‘Specific’ neurons were defined as neurons that had a significant peak in the syllable-aligned histogram for only one syllable type, whereas ‘shared’ neurons were defined as neurons that had significant peaks for both syllable types. We took a conservative approach and only considered a neuron to be shared if the peak was significant for both syllable types. However, some neurons classified as specific had weak activity for the other syllable that did not reach significance (for example, Extended Data Fig. 6f ). In other words, we believe this method likely underestimated the fraction of neurons with shared activity. Our method likely underestimated the incidence of shared neurons for another reason as well. Specifically, we defined shared and specific neurons in the context of a particular pair of syllables undergoing differentiation. For example, in a bird that exhibited hierarchical differentiation (bird 1; Extended Data Fig. 7 ), we saw examples of neurons that were B-specific when considering B-C differentiation but shared when considering B-D differentiation. Thus, when considering all the syllables in the motif, our definition of shared and specific neurons based on syllable pairs will underestimate the fraction of shared neurons and overestimate the fraction of specific neurons. Quantification of the similarity of latencies in shared neurons ( Extended Data Fig. 4a–d and Extended Data Fig. 8i, j ) To test whether shared neurons were active at similar latencies for multiple syllable types, we first calculated the latency of the peak in the syllable onset- or offset-aligned histograms.
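Returning briefly to the burst-coverage computation above, its core step (5 ms postsynaptic pulses, at least three overlapping renditions per burst, then a logical OR across neurons) might be sketched as follows; the grid resolution and all names are assumptions.

```python
import numpy as np

def burst_patch(rendition_spikes, win, dt=0.001, pulse=0.005, min_overlap=3):
    """Boolean mask of times within `win` 'covered' by one neuron's burst.

    rendition_spikes: list of arrays of syllable-aligned spike times (s),
        one array per rendition, containing only this burst's spikes
    win: (start, end), e.g. (-0.030, syllable_duration + 0.010)
    """
    t = np.arange(win[0], win[1], dt)
    count = np.zeros(t.size)
    for spikes in rendition_spikes:
        for s in np.asarray(spikes):
            count += (t >= s) & (t < s + pulse)   # 5 ms postsynaptic pulse
    return count >= min_overlap                   # overlap across renditions

def covered_fraction(all_patches):
    """OR the boolean patches of all bursts/neurons, then take the mean."""
    covered = np.any(np.vstack(all_patches), axis=0)
    return covered.mean()
```

Circularly shifting each patch within the window and recomputing covered_fraction many times would give the shuffled confidence interval described above.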
We then plotted the latency of the peak for one syllable against that of another syllable ( Extended Data Fig. 4a–d ). When a shared neuron was active for three or more syllables, the two syllables associated with the two highest firing rates were chosen. To quantify whether shared neurons were active at similar latencies for two syllable types, we calculated the Pearson’s correlation coefficient r between the two latencies across shared neurons, and the P value under the null hypothesis that r = 0. For the bird whose song was segmented based on the phase of the rhythm (bird 3, Extended Data Fig. 8 ), we asked whether bursts of shared neurons during different syllables occurred at similar phases of the rhythm. To quantify the phase of the neural activity, we first detected the burst times during singing, and for each burst, we assigned an instantaneous phase extracted from the song using the Hilbert transform (see the section on phase segmentation above). Then, the mean phase of all the bursts produced during a particular syllable type was calculated ( φ i , where i = 1, 2, …, 5 indicates syllables). Finally, the two syllable types were chosen for which the neuron participated most reliably, and the difference between the mean phases for these two syllables (|Δ φ | = |φ m − φ n |, where m and n are syllable indices) was obtained ( Extended Data Fig. 8i ). We tested the significance of this value by comparing the value of |Δ φ | against that obtained from the shuffled data in which the pairing of phases was randomized across all shared neurons ( Extended Data Fig. 8j ; 1,000 shuffles). P values were obtained by analysing the frequency with which |Δ φ | of surrogate data was smaller than that of the real data, and P < 0.05 was considered significant. Quantification of the activity level difference in shared neurons ( Extended Data Fig. 4i, j ) To quantify the difference in the activity level for multiple syllable types in the shared neurons, we calculated the ‘bias’, defined as $$\mathrm{bias} = \frac{\left| r_1 - r_2 \right|}{r_1 + r_2},$$ where r i is the peak firing rate in the syllable-aligned histogram for syllable i . A bias of 0 indicates equal activity levels for the two syllable types, whereas a bias of 1 indicates exclusive activity during only one of the syllable types ( Extended Data Fig. 4j ). Analysis of acoustic features associated with bursts of shared neurons ( Extended Data Fig. 5 ) We wondered if the bursts of shared neurons were associated with different acoustic signals in the shared syllables at the time of the bursts. (An alternative possibility is that shared neurons burst only at times within the emerging syllable types when the acoustic signals are identical.) An example of a neuron analysed here is shown in Extended Data Fig. 5a (from the same data shown in Fig. 3e ). This neuron bursts just after the onset of both syllables β and γ. We analysed the acoustic differences in a 0–50 ms analysis window after the burst time, but were most interested in acoustic differences in a narrower premotor window (10–40 ms), as this corresponds to the premotor latency for which one expects HVC neurons to exert an effect on vocal output 29 , 58 , 68 . For each neuron analysed, all syllables in which the neuron generated a burst were identified. The analysis was carried out for every syllable rendition on which the neuron burst, and was restricted to only those syllables. Syllables had previously been labelled by type (that is, β and γ).
We first directly visualized the spectral differences between the two syllable types using a sparse contour representation 69 , 70 , which is suitable for constructing an ‘average’ spectrogram. The analysis was carried out on the sound signal extracted from a 50 ms window after each burst. In many cases, this spectral representation revealed consistent differences between the different syllable types in this analysis window ( Extended Data Fig. 5b, c ). One complication is that some of the shared neurons burst before syllable onsets or immediately before syllable offsets such that the 10–40 ms window after the bursts was obscured by silent gaps (9 of 24 HVC RA neurons and 59 of 120 HVC X neurons were obscured). These neurons were excluded from the analysis of acoustic difference. We further quantified differences in the acoustic signals by extracting time-varying acoustic and spectral features in a window 0–50 ms after burst time (see subsection Definition of bursts). We used 8 acoustic features previously established for the analysis of birdsong (Wiener entropy, spectral centre of gravity, spectral width, pitch, pitch goodness, sound amplitude, amplitude modulation, frequency modulation) 51 , 55 . The 8-dimensional vector of features was calculated in 1 ms steps over the 50 ms analysis window ( Extended Data Fig. 5d, e ). Because each syllable was labelled, we could determine whether the feature trajectories were significantly different for syllables labelled β and those labelled γ, and make this determination at every time step in the analysis window ( Extended Data Fig. 5d, e ; s.e.m. indicated by shaded region around mean trajectory). Rather than quantify the difference in these trajectories one feature at a time, we used Fisher’s discriminant analysis 71 to project the 8-dimensional acoustic feature vector onto a single dimension that gives maximum separability between the two syllable types. The projected direction is determined independently at each time point, and the feature vectors of all syllable renditions are projected, at each time point, to yield a distribution of projected samples. For most neurons, the different syllable types produce visibly different distributions of projected samples ( Extended Data Fig. 5f ), indicating distinct acoustic structure. The separability of the distributions (in one dimension) of projected samples for different syllable types was quantified using the d-prime metric ( d ′), corresponding to the distance between the means of the distributions normalized by the pooled variance 70 : d ′ = | μ 1 − μ 2 |/√[( σ 1 ² + σ 2 ² )/2], where μ i and σ i ² are the mean and variance of the projected samples for syllable type i . Because the features evolve in time, this analysis is carried out independently at each 1 ms step in the 50 ms analysis window, and d ′ was plotted as a function of time ( Extended Data Fig. 5g ). Statistical significance of the d ′ trajectory was assessed by randomizing the syllable labels and rerunning the d ′ analysis on the shuffled data sets ( N = 1,000 shuffles). For each randomization, the peak value of d ′ in the 10–40 ms premotor window was recorded; the significance threshold was set as the 95th percentile of the distribution of these peak values. A shared neuron was determined to have a significant acoustic difference between the shared syllables only if the d ′ trajectory remained above this significance threshold for the entire premotor window of 10–40 ms after the burst. Note that, in the shuffled data, none of the 1,000 surrogate runs generated a d ′ trajectory that met this stringent criterion. Statistics Results are expressed as the mean ± s.d. or s.e.m. as indicated.
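The time-resolved discriminant analysis amounts to computing a Fisher projection and d′ at each 1-ms step. The sketch below is our illustration (not the published analysis code); the small regularization term and the array layout are our assumptions.

```python
import numpy as np

def dprime_trajectory(feats_a, feats_b):
    """feats_a: (n_a, T, 8) feature trajectories for syllable type A;
    feats_b: (n_b, T, 8) for type B. At each time step, project onto the
    Fisher discriminant direction and compute d' between the projections."""
    T, D = feats_a.shape[1], feats_a.shape[2]
    d = np.empty(T)
    for t in range(T):
        a, b = feats_a[:, t, :], feats_b[:, t, :]
        Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)   # within-class scatter
        w = np.linalg.solve(Sw + 1e-9 * np.eye(D), a.mean(0) - b.mean(0))
        pa, pb = a @ w, b @ w
        pooled = np.sqrt((pa.var(ddof=1) + pb.var(ddof=1)) / 2.0)
        d[t] = np.abs(pa.mean() - pb.mean()) / pooled
    return d

def shuffle_threshold(feats_a, feats_b, lo=10, hi=40, n_shuffle=1000, seed=0):
    """95th percentile of peak d' in the 10-40 ms window under label shuffling."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([feats_a, feats_b])
    n_a = len(feats_a)
    peaks = []
    for _ in range(n_shuffle):
        idx = rng.permutation(len(pooled))
        d = dprime_trajectory(pooled[idx[:n_a]], pooled[idx[n_a:]])
        peaks.append(d[lo:hi + 1].max())
    return np.percentile(peaks, 95)
```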
For χ 2 tests, if the contingency table included a cell that had an expected frequency of less than 5, Fisher’s exact test was used 72 . All tests were two-sided, and P < 0.05 was considered significant. Bonferroni correction was used to account for multiple comparisons. Figure 1f . The statistical significance of developmental changes in the fraction of HVC neurons that were syllable-aligned was assessed in two different ways: (1) Each stage was compared with the adult stage using the χ 2 test followed by a post-hoc pairwise test. (2) To quantify the developmental trend in the fraction of syllable-locked neurons, we calculated Pearson’s correlation coefficient r between the binary value for each neuron (0, unlocked; 1, locked) and song stage (subsong: 1, protosyllable: 2, multi-syllables: 3, motif: 4, adult: 5). The P value was calculated under the null hypothesis that r = 0. The significance of the developmental trend for rhythmic bursting was calculated similarly. Similar results were obtained for the correlation between these metrics and the age at which each neuron was recorded, rather than song stage. Figure 1g . The statistical significance of developmental changes in the period of the HVC rhythm was also assessed in two different ways: (1) Each song stage was compared with the adult stage using the Kruskal–Wallis test followed by a post-hoc pairwise test. (2) To quantify the developmental trend in the period of the HVC rhythm, we calculated Pearson’s correlation coefficient r between burst period and song stage. Similar results were obtained for the correlation between burst period and the age at which each neuron was recorded. Figure 2c . The Wilcoxon rank-sum test was used to test whether the median of the syllable-onset aligned latency distribution differed between the subsong and protosyllable stages. Figures 3g, h and 4h, i . To test whether the fraction of shared neurons differed between early and late stages of syllable differentiation, we used the χ 2 test on a 2 × 2 contingency table (shared/specific, early/late). To test whether the fraction of shared neurons differed between early and late stages of syllable differentiation across all birds ( n = 5 syllable pairs in 3 birds), we used the Cochran–Mantel–Haenszel test for repeated tests of independence 73 . Extended Data Fig. 1a . To quantify the relation between song stage and age, we calculated Spearman’s rank correlation coefficient ρ and the P value under the null hypothesis that ρ = 0. Extended Data Fig. 1c . We computed the statistical significance of developmental changes in burst width (top) and firing rate during bursts (bottom) by using the Kruskal–Wallis test followed by a post-hoc pairwise test to compare each stage with the adult stage. Extended Data Fig. 2m–o . To test whether the fraction of syllable-locked neurons ( Extended Data Fig. 2m ), the fraction of rhythmic neurons ( Extended Data Fig. 2n ) and the period of the HVC rhythm ( Extended Data Fig. 2o ) differed significantly between HVC RA and HVC X , we used the χ 2 test for all pairwise comparisons, with Bonferroni correction for multiple comparisons. Extended Data Fig. 4a–d . To quantify the relation between the latencies of bursts associated with shared neurons, we calculated Pearson’s correlation coefficient r together with the P value under the null hypothesis that r = 0. Extended Data Fig. 5m, n . To test whether the mean d′ metric differed between HVC RA and HVC X , we used the Wilcoxon rank-sum test.
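For the contingency-table analyses, the decision rule and the across-birds test could be implemented as below. This is our sketch with invented counts purely for illustration; the authors' actual statistical software is not stated, and scipy/statsmodels are simply one way to run these tests.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

def shared_vs_stage_test(table):
    """2 x 2 table: rows = shared/specific, columns = early/late.
    Use the chi-squared test unless an expected count falls below 5,
    in which case fall back to Fisher's exact test."""
    table = np.asarray(table)
    stat, p, dof, expected = chi2_contingency(table, correction=False)
    if (expected < 5).any():
        _, p = fisher_exact(table)
    return p

# Cochran-Mantel-Haenszel across syllable pairs/birds (counts illustrative)
tables = [np.array([[12, 4], [18, 31]]),
          np.array([[9, 3], [14, 25]]),
          np.array([[7, 2], [11, 19]])]
cmh = StratifiedTable(tables).test_null_odds()
print(shared_vs_stage_test(tables[0]), cmh.pvalue)
```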
Only neurons with d ′ trajectories that were significant (continuously from 10–40 ms) were included in this comparison. Neural model of chain formation and splitting Code used to simulate the model is available as Supplementary Information . To illustrate a potential mechanism of chain splitting, we chose to implement the model as simply as possible. We modelled neurons as binary units and simulated their activity in discrete time steps 44 ; at each time step (10 ms), the i th neuron either bursts ( x_i = 1) or is silent ( x_i = 0). Network architecture A network of 100 binary neurons is recurrently connected in an all-to-all manner, with W_ij representing the synaptic strength from presynaptic neuron j to postsynaptic neuron i . Self-excitation is prevented by setting W_ii = 0 for all i at all times 44 . During learning, the strength of each synapse is constrained to be within the interval [0, w_max ], while the total incoming and outgoing weights of each neuron are both constrained by the ‘soft bound’ W_max = m w_max , where m represents a target number of saturated synapses per neuron 44 (see section Synaptic plasticity rules for details). Note that w_max represents a hard maximum weight of each individual synapse, while W_max represents a soft maximum total synaptic input or output of any one neuron. Synaptic weights are initialized from a uniform random distribution such that each neuron receives, on average, its maximum allowable total input, W_max . Network dynamics The activity of each neuron in the network was determined in two steps: calculating the net feedforward input that comes from the previous time step, then determining whether that is enough to overcome the recurrent inhibition in the current time step. First, the net feedforward input to the i th neuron at time step t , A_i(t), was calculated by summing the excitation, feedforward inhibition, neural adaptation and external inputs: A_i(t) = [Σ_j W_ij x_j(t − 1) − β Σ_j x_j(t − 1) − α y_i(t) + B_i(t) − θ_i]_+ , where [z]_+ indicates rectification (equal to z if z > 0 and 0 otherwise). Σ_j W_ij x_j(t − 1) is the excitatory input from network activity on the previous time step. β Σ_j x_j(t − 1) is a global feedforward inhibitory input 44 , where β sets the strength of this feedforward inhibition. α y_i(t) is an adaptation term 44 , where α is the strength of adaptation and y_i is a low-pass filtered record of recent activity in x_i with time constant τ_adapt = 40 ms; that is, y_i(t) = y_i(t − 1) + [x_i(t − 1) − y_i(t − 1)] Δt/τ_adapt with time step Δt = 10 ms. B_i(t) is the external input to neuron i at time t . For seed neurons, this term consists of training inputs (see section on Seed neurons). For non-seed neurons, it consists of random inputs with probability p_in = 0.01 in each time step and size W_max/10. Finally, θ_i is a threshold term used to reduce the excitability of seed neurons, making them less responsive to recurrent input than are other neurons in the network. For seed neurons, θ_i = 10 and for non-seed neurons, θ_i = 0. Including this term improves robustness of the training procedure by eliminating occasional situations in which seed neuron activity may be dominated by recurrent rather than external inputs. In these cases, external inputs may fail to exert proper control of network activity. Second, we determined whether the i th neuron will burst or not at time step t by examining whether the net feedforward input, A_i(t), exceeds the recurrent inhibition, A_I_rec(t). We implemented recurrent inhibition by estimating the total input to the network at time t , A_I_rec(t) = γ Σ_j A_j(t), and feeding it back to all the neurons. Parameter γ sets the strength of the recurrent inhibition.
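Putting the reconstructed dynamics together, one 10-ms update of the network could look like the following sketch. This is our own minimal illustration, not the authors' published simulation code; it also includes the Heaviside thresholding step described next, and the adaptation-trace update is one plausible discretization of the low-pass filter.

```python
import numpy as np

def network_step(x_prev, y, W, B, theta, beta, alpha, gamma, tau_adapt=4.0):
    """One 10-ms step of the binary network. x_prev: activity at t-1 (0/1);
    y: adaptation traces; W: weights (W[i, j] is from j to i); B: external
    input; theta: per-neuron thresholds (nonzero for seed neurons).
    tau_adapt is in units of time steps (40 ms / 10 ms = 4)."""
    y = y + (x_prev - y) / tau_adapt                  # low-pass activity trace
    drive = W @ x_prev - beta * x_prev.sum() - alpha * y + B - theta
    a_ff = np.maximum(drive, 0.0)                     # rectified net input
    i_rec = gamma * a_ff.sum()                        # fast recurrent inhibition
    x = (a_ff > i_rec).astype(float)                  # Heaviside output
    return x, y
```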
We assume that this recurrent inhibition operates on a fast time scale 48 (that is, faster than the duration of a burst). Thus, the final output of the i th neuron at time t becomes x_i(t) = Θ[A_i(t) − A_I_rec(t)], where Θ[z] is the Heaviside step function (equal to 1 if z > 0 and 0 otherwise). To induce splitting, γ was gradually stepped up from its initial value γ_0 to γ_split following a sigmoid with time constant τ_γ and inflection point t_0 : γ(n) = γ_0 + (γ_split − γ_0)/{1 + exp[−(n − t_0)/τ_γ]}, where n is the iteration number. Seed neurons A subset of neurons was designated as seed neurons, which received external training inputs used to shape network activity during learning 43 , 45 . The external training inputs activate seed neurons at syllable onsets, reflecting the observed onset-related bursts of HVC neurons during the subsong stage ( Fig. 1a ). The pattern of these inputs was adjusted in different stages of learning, and each strategy of syllable learning was implemented by a different pattern of seed neuron training inputs. Alternating differentiation ( Fig. 5a–e ). Ten neurons were designated as seed neurons and received strong external input ( W_max ) to drive network activity. In the subsong stage, seed neurons were driven (by external inputs) synchronously and randomly with probability 0.1 in each time step, corresponding to the random occurrence of syllable onsets in subsong 27 , 34 . This was done only to visualize network activity; no learning was implemented at the subsong stage. During the protosyllable stage, seed neurons were driven synchronously and rhythmically with a period T = 100 ms. The protosyllable stage consisted of 500 iterations of 10 pulses each. To initiate chain splitting, the seed neurons were divided into two groups and each group was driven on alternate cycles. The splitting stage consisted of 2,000 iterations of 5 pulses in each group of seed neurons (1 s total per iteration, as in the protosyllable stage). Motif strategy ( Extended Data Fig. 10e–h ). This was implemented in a similar manner as alternating differentiation, except that 9 seed neurons were used, and for the splitting stage, seed neurons were divided into 3 groups of 3 neurons, each driven on every third cycle. Bout-onset differentiation ( Extended Data Fig. 10a–d ). Seed neurons were divided into two groups: 5 bout-onset seed neurons and 5 protosyllable seed neurons. At all learning stages, external inputs were organized into bouts consisting of four separate input pulses, and bout-onset seed neurons were driven at the beginning of each bout. Then, 30 ms later, protosyllable seed neurons were driven three times with an interval of T = 100 ms. In the protosyllable stage, inputs to all seed neurons were of strength W_max . In the splitting stage, the input to protosyllable seed neurons was decreased to W_max/10. This allowed neurons in the bout-onset chain to suppress, through fast recurrent inhibition, the activity of protosyllable seed neurons during bout-onset syllables. Each iteration of the simulation was 5 s long, consisting of 10 bouts, described directly above, with random inter-bout intervals. The protosyllable stage consisted of 100 iterations, and the splitting stage consisted of 500 iterations. Bout-onset syllable formation ( Extended Data Fig. 10i–k ). Input to seed neurons was set high (2.5 × W_max ) and maintained at this high level throughout development. This prevented protosyllable seed neurons from being inhibited by neurons in the bout-onset chain.
Furthermore, strong external input to the protosyllable seed neurons terminated activity in the bout-onset chain through fast recurrent inhibition, thus preventing further growth of the bout-onset chain, as occurs in bout-onset differentiation. As in bout-onset differentiation, each iteration of the simulation was 5 s long, consisting of 10 bouts with random inter-bout intervals. The protosyllable stage consisted of 100 iterations, and the splitting stage consisted of 500 iterations. Synaptic plasticity rules As in previous models 43 , 44 , we hypothesized two plasticity rules in our model: Hebbian spike-timing-dependent plasticity (STDP) to drive sequence formation 74 , 75 , and heterosynaptic long-term depression (hLTD) to introduce competition between synapses of a given neuron 43 , 44 . STDP is governed by an antisymmetric plasticity rule with a short temporal window (one burst duration): ΔW_ij^STDP(t) = η[x_i(t) x_j(t − 1) − x_i(t − 1) x_j(t)], where the constant η sets the learning rate. hLTD limits the total strength of the weights of neuron i ; the summed-weight limit rule for incoming weights is given by ΔW_ij^in(t) = −[Σ_k W_ik(t) − W_max]_+ , and for outgoing weights from neuron j by ΔW_ij^out(t) = −[Σ_k W_kj(t) − W_max]_+ . At each time step, the total change in synapse weight is given by the combination of STDP and hLTD, ΔW_ij(t) = ΔW_ij^STDP(t) + ε[ΔW_ij^in(t) + ΔW_ij^out(t)], where ε sets the relative strength of hLTD (a code sketch combining these update rules with the network dynamics is given below, after the model description). Model parameters: subsong ( Fig. 5a ) In our implementation of the subsong stage, there was no learning. Subsong model parameters were: β = 0.115, α = 30, η = 0, ε = 0, γ = 0.01. Model parameters: alternating differentiation ( Fig. 5b–d ) After subsong, learning progressed in two stages: the protosyllable stage and the splitting stage. Parameters that remained constant over development were: β = 0.115, α = 30, η = 0.025, ε = 0.2. To induce chain splitting, w_max , the maximum allowed strength of any synapse, was increased from 1 to 2, m was decreased from 10 to 5, and γ was increased from 0.01 to 0.18 following a sigmoid with time constant τ_γ = 200 iterations and inflection point t_0 = 500 iterations into the splitting stage. No change in parameters occurred before the chain-splitting stage. Model parameters: bout-onset differentiation ( Extended Data Fig. 10a–d ) Parameters that remained constant over development were: β = 0.13, α = 30, η = 0.05, ε = 0.14. To induce chain splitting, w_max was increased from 1 to 2, m was decreased from 5 to 2.5, and γ was increased from 0.01 to 0.04 following a sigmoid with time constant τ_γ = 200 iterations and inflection point t_0 = 250 iterations into the splitting stage. Model parameters: motif strategy ( Extended Data Fig. 10e–h ) Parameters that remained constant over development were: β = 0.115, α = 30, η = 0.025, ε = 0.2. To induce chain splitting, w_max was increased from 1 to 2, m was decreased from 9 to 3, and γ was increased from 0.01 to 0.18 following a sigmoid with time constant τ_γ = 200 iterations and inflection point t_0 = 500 iterations into the splitting stage. Model parameters: formation of a new syllable at bout onset ( Extended Data Fig. 10i–k ) Parameters that remained constant over development were: β = 0.13, α = 30, η = 0.05, ε = 0.15. To induce chain splitting, w_max was increased from 1 to 2, m was decreased from 5 to 2.5, and γ was increased from 0.01 to 0.05 following a sigmoid with time constant τ_γ = 200 iterations and inflection point t_0 = 250 iterations into the splitting stage. Shared and specific neurons Neurons were classified as participating in a syllable type if the syllable onset-aligned histogram exhibited a peak that passed a threshold criterion.
The criteria were chosen to include neurons where the histogram peak exceeded 90% of surrogate histogram peaks. Surrogate histograms were generated by placing one burst at a random latency in each syllable. (For example, in the protosyllable stage, the above criterion was found to be equivalent to having 5 bursts at the same latency in a bout of 10 protosyllables.) During the splitting phase, neurons were classified as shared if they participated in both syllable types, and specific if they participated in only one syllable type. Visualizing network activity We visualized network activity in two ways: network diagrams, and raster plots of population activity (for example, Fig. 5a–d top and bottom panels, respectively). In both cases, we only included neurons that participated in at least one of the syllable types (see earlier section Shared and specific neurons for participation criteria). Network diagrams . Neurons are sorted along the x axis based on their relative latencies. Neurons are sorted along the y axis based on the relative strength of their synaptic input from specific neurons (or seed neurons) of each type (red or blue). Lines between neurons correspond to feedforward synaptic weights, and darker lines indicate stronger synaptic weights. For clarity of plotting, only the strongest six outgoing and strongest nine incoming weights are plotted for each neuron. Population raster plots . Neurons are sorted from top to bottom according to their latency. Groups of seed neurons are indicated by magenta arrows. Shared neurons are plotted at the top and specific neurons are plotted below. As for network diagrams, neurons that did not reliably participate in at least one syllable type were excluded. Further details for Fig. 5a–d . Panels show network diagrams and raster plots at four different stages. Figure 5a shows subsong stage (before learning), Fig. 5b shows end of protosyllable stage (iteration 500), Fig. 5c shows early chain splitting stage (iteration 992), Fig. 5d shows late chain-splitting stage (iteration 2,500). Further details for Extended Data Fig. 10a–d . Extended Data Fig. 10a shows early protosyllable stage (iteration 5), Extended Data Fig. 10b shows late protosyllable stage (iteration 100), Extended Data Fig. 10c shows early chain splitting stage (iteration 130), Extended Data Fig. 10d shows late chain splitting stage (iteration 600). Code availability Code used to simulate the model is available as Supplementary Information .
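As referenced in the plasticity-rules section above, a minimal sketch of the combined weight update follows. This is our illustration of the reconstructed STDP and hLTD rules; the Supplementary Information code is the authoritative implementation and may differ in detail (for example, in whether hLTD is gated by activity).

```python
import numpy as np

def plasticity_step(W, x, x_prev, eta, eps, w_max, W_max):
    """STDP plus heterosynaptic LTD. W[i, j]: weight from neuron j to i."""
    stdp = eta * (np.outer(x, x_prev) - np.outer(x_prev, x))   # antisymmetric rule
    in_excess = np.maximum(W.sum(axis=1) - W_max, 0.0)    # soft bound, incoming
    out_excess = np.maximum(W.sum(axis=0) - W_max, 0.0)   # soft bound, outgoing
    W = W + stdp - eps * (in_excess[:, None] + out_excess[None, :])
    np.fill_diagonal(W, 0.0)                              # no self-excitation
    return np.clip(W, 0.0, w_max)                         # hard per-synapse bounds
```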
Male zebra finches, small songbirds native to central Australia, learn their songs by copying what they hear from their fathers. These songs, often used as mating calls, develop early in life as juvenile birds experiment with mimicking the sounds they hear. MIT neuroscientists have now uncovered the brain activity that supports this learning process. Sequences of neural activity that encode the birds' first song syllable are duplicated and altered slightly, allowing the birds to produce several variations on the original syllable. Eventually these syllables are strung together into the bird's signature song, which remains constant for life. "The advantage here is that in order to learn new syllables, you don't have to learn them from scratch. You can reuse what you've learned and modify it slightly. We think it's an efficient way to learn various types of syllables," says Tatsuo Okubo, a former MIT graduate student and lead author of the study, which appears in the Nov. 30 online edition of Nature. Okubo and his colleagues believe that this type of neural sequence duplication may also underlie other types of motor learning. For example, the sequence used to swing a tennis racket might be repurposed for a similar motion such as playing Ping-Pong. "This seems like a way that sequences might be learned and reused for anything that involves timing," says Emily Mackevicius, an MIT graduate student who is also an author of the paper. The paper's senior author is Michale Fee, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research. Bursting into song Previous studies from Fee's lab have found that a part of the brain's cortex known as the HVC is critical for song production. Typically, each song lasts for about one second and consists of multiple syllables. Fee's lab has found that in adult birds, individual HVC neurons show a very brief burst of activity—about 10 milliseconds or less—at one moment during the song. Different sets of neurons are active at different times, and collectively the song is represented by this sequence of bursts. In the new Nature study, the researchers wanted to figure out how those neural patterns develop in newly hatched zebra finches. To do that, they recorded electrical activity in HVC neurons for up to three months after the birds hatched. When zebra finches begin to sing, about 30 days after hatching, they produce only nonsense syllables known as subsong, similar to the babble of human babies. At first, the duration of these syllables is highly variable, but after a week or so they turn into more consistent sounds called protosyllables, which last about 100 milliseconds. Each bird learns one protosyllable that forms a scaffold for subsequent syllables. The researchers found that within the HVC, neurons fire in a sequence of short bursts corresponding to the first protosyllable that each bird learns. Most of the neurons in the HVC participate in this original sequence, but as time goes by, some of these neurons are extracted from the original sequence and produce a new, very similar sequence. This chain of neural sequences can be repurposed to produce different syllables. "From that short sequence it splits into new sequences for the next new syllables," Mackevicius says. "It starts with that short chain that has a lot of redundancy in it, and splits off some neurons for syllable A and some neurons for syllable B." 
This splitting of neural sequences happens repeatedly until the birds can produce between three and seven different syllables, the researchers found. This entire process takes about two months, at which point each bird has settled on its final song. "This is a very natural way for motor patterns to evolve, by repeating something and then molding it, but until now nobody had any good data to understand how the brain actually does that," says Ofer Tchernichovski, a professor of psychology at Hunter College who was not involved in the research. "What's cool about this paper is they managed to follow how brain centers govern these transitions from simple repetitive patterns to more complex patterns." Evolution by duplication The researchers note that this process is similar to what is believed to drive the production of new genes and traits during evolution. "If you duplicate a gene, then you could have separate mutations in both copies of the gene and they could eventually do different functions," Okubo says. "It's similar with motor programs. You can duplicate the sequence and then independently modify the two daughter motor programs so that they can now each do slightly different things." Mackevicius is now studying how input from sound-processing parts of the brain to the HVC contributes to the formation of these neural sequences.
10.1038/nature15741
Biology
Sound-like bubbles whizzing around in DNA are essential to life
Mario González-Jiménez et al. Observation of coherent delocalized phonon-like modes in DNA under physiological conditions, Nature Communications (2016). DOI: 10.1038/ncomms11799 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms11799
https://phys.org/news/2016-06-sound-like-whizzing-dna-essential-life.html
Abstract Underdamped terahertz-frequency delocalized phonon-like modes have long been suggested to play a role in the biological function of DNA. Such phonon modes involve the collective motion of many atoms and are a prerequisite to understanding the molecular nature of macroscopic conformational changes and related biochemical phenomena. Initial predictions were based on simple theoretical models of DNA. However, such models do not take into account strong interactions with the surrounding water, which is likely to cause phonon modes to be heavily damped and localized. Here we apply state-of-the-art femtosecond optical Kerr effect spectroscopy, which is currently the only technique capable of measuring low-frequency (GHz to THz) vibrational spectra in solution. We are able to demonstrate that phonon modes involving the hydrogen bond network between the strands exist in DNA at physiologically relevant conditions. In addition, the dynamics of the solvating water molecules is slowed down by about a factor of 20 compared with the bulk. Introduction The processes important to the biological function of DNA (replication, transcription, denaturation and molecular intercalation) have in common that they start with the breaking of the hydrogen bonds between the bases of the nucleic acid. Driven by the torsional stress of the molecule 1 , the destabilization of the weak bonds leads to the splitting of a section of the double helix of DNA into single strands, forming a gap in the nucleic acid known as a transcriptional bubble 2 . The DNA molecule is not a static object, but vibrates and fluctuates on timescales from femtoseconds to nanoseconds. The origin of the rupture of the hydrogen bonds in DNA is thought to be low-frequency vibrational modes propagating along its length in the form of phonon-like modes that expand and contract the space between the bases 3 , 4 , 5 . Because adenine–thymine and guanine–cytosine base pairs differ in their hydrogen bonding and stacking, the physical properties of DNA that influence these waves, such as helical structure 6 or elasticity 7 , depend on the local DNA sequence 8 . It has been suggested that DNA has regions where specific sequences favour the resonance between low-frequency vibrational modes 9 , 10 , promoting the temporary splitting of a significant number of bases, even at physiological temperatures 11 , 12 , 13 . During the short period of time that the bubbles exist 14 , the bases are exposed to the surrounding solvent, which has two effects. On one hand, bubbles expose the nucleic acid to reactions of the bases with mutagens in the environment 15 , while molecular intercalators can take advantage of the gap and may insert themselves between the strands of DNA 11 . On the other hand, bubbles allow helicases access to DNA to stabilize the bubble, followed by splitting the strands to start the transcription and replication process 16 . For this reason, it is believed that DNA directs its own transcription. Despite the importance of the low-frequency vibrational modes of DNA, evidence for their existence is only indirect 17 . The relatively weak absorption of far-infrared radiation by DNA compared with the extremely strong absorption by water has limited investigation by infrared spectroscopy 18 . However, a recent report suggested a non-thermal response of cellular gene expression to terahertz radiation 19 .
The presence of low-frequency vibrational modes in DNA has been shown through Raman 20 , 21 , Brillouin 22 , inelastic X-ray 23 and inelastic neutron-scattering 24 measurements. However, all of these studies have been performed in unnatural solid DNA preparations (humidified films, fibres and so on) that modify the dynamics and are likely to introduce low-frequency lattice-phonon modes associated with crystal packing. Furthermore, inelastic neutron-scattering experiments have shown great sensitivity of the speed of sound to relative humidity. No spectroscopic experiments have been carried out under relevant physiological conditions in aqueous solution except over a very narrow frequency range (10–35 cm −1 ) 25 , 26 , 27 . It seems highly likely that exposure of the nucleic acid to bulk liquid water will lead to strong damping of phonon-like modes, resulting in localization. Therefore, the importance of bubbles and the associated low-frequency phonon-like vibrational modes under physiological conditions remains unproven and unlikely. In recent work, we have shown that the technique of ultrafast optical Kerr effect (OKE) spectroscopy could be used to determine the presence of delocalized phonon-like modes in a protein and prove their relevance to biological function 28 . This technique measures the low-frequency depolarized Raman spectrum in the time domain and obtains a spectrum through numerical Fourier transformation 29 , 30 , 31 . Spontaneous scattering techniques such as Raman spectroscopy and inelastic neutron scattering when applied to liquids and solutions become unreliable at low frequencies (<1 THz) due to a very strong Rayleigh peak from elastic scattering 32 . The spectral resolution of inelastic X-ray and neutron scattering is poor ( ∼ 15 cm −1 or 0.5 THz), prohibiting spectral characterization especially below ∼ 1 THz (ref. 23 ). OKE spectroscopy has proven to be far superior at low frequencies, as it does not suffer from a large Rayleigh peak and its high signal to noise allows a detailed analysis of the spectra 29 , 30 . Furthermore, the OKE signal from liquid water is relatively weak, allowing it to be subtracted from the total signal to reveal the spectrum of the solvated biomolecule. Thus, OKE is the only technique that can reliably determine the terahertz and sub-terahertz spectra of biomolecules in physiologically relevant aqueous solution. Here we use OKE spectroscopy to investigate the low-frequency vibrational modes of DNA to determine the presence and possible role of phonon-like modes in nucleic acids in aqueous solution. We present the OKE spectra of two DNA oligomers in phosphate buffer solution, at various degrees of denaturation as a function of increasing temperature. Two phonon-like modes associated with the hydrogen bonds of the double strand of DNA are identified. To confirm the assignments given to the observed bands, the OKE spectra of two different oligomers designed to form a double helix only when they are dissolved together are measured. Since nucleotides in solution stack even at low concentrations while not forming hydrogen bonds between them 33 , the OKE spectra of cytosine are measured to investigate the influence of stacking on the nucleic acid spectra. Results OKE spectra of 20mers Experiments were carried out on two relatively short palindromic DNA sequences containing mostly cytosine and guanine: d(GGCGGCCCGCGCGGGCCGCC) 2 (CG 20mer), and mostly adenine and thymine: d(TTATTAAATATATTTAATAA) 2 (AT 20mer). 
OKE spectra for 10 mM solutions of the CG and AT oligomers in phosphate buffer (pH=7) were measured at 10 K temperature intervals from 298 to 358 K. The OKE spectrum of water was subtracted at each temperature ( Supplementary Note 1 ; Supplementary Figs 1–4 ; Supplementary Tables 1–4 ) to obtain the spectra of the solvated oligomers. At the concentration employed (0.15 M), the OKE spectrum of phosphate buffer is indistinguishable from that of water. Figure 1 shows the experimental temperature-dependent OKE spectra of the AT 20mer and fits to theoretical expressions. Below ∼ 50 GHz, the contribution of the molecular orientational diffusion of the oligomer is expected to dominate. Using the Stokes–Einstein–Debye equation 29 , a relaxation frequency between 0.2 GHz at 298 K and 1.6 GHz at 358 K was estimated for this process ( Supplementary Note 2 ; Supplementary Fig. 5 ). Because this is at significantly lower frequency than accessible in these experiments, only the high-frequency wing of the band can be seen in the spectra. Despite this, the orientational relaxation band could be fitted using a Debye function (D in Fig. 1 ; Methods). The relaxation time constants obtained through curve fitting are broadly consistent with the values calculated using the Stokes–Einstein–Debye equation. Figure 1: Experimental optical Kerr effect spectra and the component fit functions used to fit the data for the 20-bp AT oligomer. The temperatures run from 298 (blue) to 358 K (red) in steps of 10 K. Each spectrum has been fitted using the combination of a Debye function (D), a Cole–Cole function (CC) and four Brownian oscillators (B1–B4). The features in the figure that change in an unexpected manner are the bands B2 and B4 (with emphasized colours). The intensity of B2 increases with temperature, while the intensity of B4 reduces. The inset shows the intensity of the bands B2 (blue) and B4 (red). They are compared with the scaled results of a circular dichroism experiment (ellipticity at 248 nm, green). The remainder of the low-frequency portion of the spectra is caused by a very broad band that can be fitted with a Cole–Cole function (Methods) with its maximum moving from 14.2 GHz at 298 K to 27.8 GHz at 358 K. This band extends to unusually high frequency and is responsible for the increase in intensity in the region between 4 and 9 THz. The most interesting features of the OKE spectra of the AT 20mer are the changes with temperature that appear in the high-frequency part of the spectra (>200 GHz). This portion of the spectrum can be fitted by four Brownian oscillators (Methods). There is only a very small change with temperature in the intensity of the first (B1 in Fig. 1 , with the Brownian oscillator frequency ω 0 /2 π =1.01 THz and damping rate γ /2 π =1.39 THz) and third (B3, ω 0 /2 π =2.19 THz and γ /2 π =0.70 THz) bands. However, there are significant changes in the intensities of the second (B2) and fourth (B4) bands, which is unexpected. At room temperature (298 K), only B4 occurs in the spectrum but as the temperature increases, the intensity of B4 decreases, while B2 appears and increases in intensity. The inset of Fig. 1 shows the sigmoidal dependence of the intensity of these bands on temperature.
A circular dichroism (CD) experiment to determine the proportion of denatured oligomer at each temperature was carried out ( Supplementary Note 3 ; Supplementary Figs 6 and 7 ), showing the same sigmoidal dependence on temperature, from double stranded at room temperature to single stranded at 358 K. This strongly suggests that B2 and B4 are associated with the single- and double-stranded conformation of the oligomer, respectively. The melting temperatures for the AT 20mer derived from a sigmoidal fit to the intensity of the B2 and B4 bands (324±2 and 329±1 K, respectively) and from the CD experiment (330.1±0.6 K) are in good agreement, as expected. The remaining fit parameters of these bands are ω 0 /2 π =1.38 THz and γ /2 π =0.75 THz for B2, and ω 0 /2 π =2.83 THz and γ /2 π =0.50 THz for B4. Thus both vibrational modes have a damping rate smaller than their oscillator frequencies and are therefore underdamped. Furthermore, these bands (in particular the high-frequency B4 band) cannot be fitted by a Gaussian function ( Supplementary Note 4 ; Supplementary Fig. 8 ). The temperature-dependent OKE spectra of the CG 20mer ( Supplementary Note 5 ; Supplementary Fig. 9 ) show similar behaviour and can be fitted in a similar manner. However, the characteristic B4 band only starts changing at relatively high temperature, consistent with the results of CD, which show that the melting temperature of the CG 20mer is higher than the accessible temperature range in the OKE experiment. Investigation of 13- and 17-base oligomers The data in Fig. 1 strongly suggest that the B2 band corresponds to single-stranded DNA and the B4 band to double-stranded DNA. To prove this, a room-temperature experiment was carried out using two oligomers of 13 and 17 bases with sequences d(CGAAAAATGTGAT) and d(CTAGATCACATTTTTCG) that minimize the possibility of association between strands when they are dissolved separately. However, when they are dissolved together, the strands match to form a 30-base double-stranded helix. OKE spectra for 10 mM solutions in phosphate buffer of each oligomer separately and of both oligomers together were measured at 298 K ( Fig. 2 ). The spectra of the oligomers separately and together can be fitted in a very similar manner. However, solutions of the 13mer and 17mer separately show unique bands at 1.08 and 1.20 THz, respectively, while a fourth band at higher frequency is absent. In contrast, in the spectrum of the solution with both oligomers together, only a unique underdamped band at 2.83 THz appears (the B4 band, but not the B2 band), confirming our hypothesis. Figure 2: OKE spectra and fits for the solutions of the complementary oligomers of 13 and 17 bases dissolved together and separately. Dissolved together (top) and separately (bottom, where the spectrum from the 13mer is shown in red and that of the 17mer in blue). The significant difference between top and bottom is the bands filled with colour. When the nucleic acids are single stranded, there is a band at 1.08 (13mer) and 1.20 THz (17mer). However, when they form a double strand, only a characteristic band at 2.83 THz appears. Base stacking in mono-nucleotide solutions It is now clear that the B2 and B4 bands in Fig. 1 are closely associated with the single- and double-stranded forms of DNA. The B4 band, which disappears on melting, is likely associated with hydrogen bonding between the strands.
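The melting temperatures quoted above were obtained from sigmoidal fits of band intensity versus temperature. A minimal sketch of such a two-state fit follows (our illustration; the intensity values are invented purely to show the procedure).

```python
import numpy as np
from scipy.optimize import curve_fit

def melt_sigmoid(T, I_low, I_high, Tm, dT):
    """Band intensity versus temperature for a two-state melting transition."""
    return I_low + (I_high - I_low) / (1 + np.exp(-(T - Tm) / dT))

# illustrative data: temperatures (K) and B2-band amplitudes from spectral fits
T = np.array([298, 308, 318, 328, 338, 348, 358], dtype=float)
I = np.array([0.02, 0.05, 0.18, 0.55, 0.85, 0.95, 1.00])
popt, pcov = curve_fit(melt_sigmoid, T, I, p0=[0.0, 1.0, 325.0, 5.0])
Tm, Tm_err = popt[2], np.sqrt(np.diag(pcov))[2]   # melting temperature +/- s.e.
```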
To rule out an origin in base stacking, concentration-dependent OKE experiments were carried out at room temperature on aqueous solutions of cytidine monophosphate with concentrations between 0.1 and 1.6 M. It is known that mono-nucleotides in solution tend to stack without forming intermolecular hydrogen bonds between them 33 . The OKE spectra for these solutions ( Fig. 3 ) show a characteristic pattern with a low-frequency band (below 100 GHz) due to the orientational diffusion and a higher-frequency band between 400 GHz and 5 THz. The latter can be fitted using two Brownian oscillators, yielding the parameters ω 0 /2 π =1.48 THz and γ /2 π =1.08 THz for the first oscillator, and ω 0 /2 π =2.58 THz and γ /2 π =1.06 THz for the second. These vibrational modes do not coincide with the frequency of the modes observed in the oligomers and are less underdamped. The relation between the intensity of the oscillators and the nucleotide concentration is also plotted in Fig. 3 . The curves show the same saturation behaviour that can be seen in other studies of stacking of nucleotides 33 , 34 , which comes from the dependence on concentration of the average number of aggregated nucleotides. Figure 3: Influence of concentration on the OKE spectra of cytidine monophosphate. The terahertz band of the spectra has been fitted using two Brownian oscillators at 1.48 and 2.58 THz (top). The effect of the concentration on the intensity of these bands (blue and red, respectively) has been plotted (bottom). Discussion Thus, we have shown that the bands labelled B2 and B4 in Fig. 1 are associated with single- and double-stranded DNA, respectively, and that the B4 band reflects the dynamics of hydrogen bonding between the two strands. The B4 hydrogen-bonding band can be fitted with a Brownian oscillator function, but not by a Gaussian ( Supplementary Note 4 ). Although spontaneous Raman scattering and related techniques such as OKE cannot determine whether a band is homogeneously or inhomogeneously broadened 35 , 36 , 37 , the good fit to a Brownian oscillator function does strongly suggest a homogeneously broadened band or, at most, a very narrow distribution of phonon frequencies. Therefore, this band is not a broad inhomogeneous distribution of multiple vibrational modes, but rather a single delocalized normal mode of the DNA 20mer. As the frequency of this mode is high ( ω 0 /2 π =2.83 THz, corresponding to a period of 350 fs) and the damping rate low ( γ /2 π =0.50 THz, corresponding to a 2-ps decay), it is underdamped, undergoing approximately five oscillations before vanishing. According to calculations using a fully atomistic model of B-DNA 4 as well as inelastic X-ray scattering experiments on humidified films of oriented DNA 23 , the acoustic phonon branch for delocalized phonons in DNA peaks at ∼ 12 meV (3.2 THz) at the Brillouin zone edge. OKE spectroscopy has the same selection properties as Raman scattering and is therefore normally only sensitive to optical phonon modes near the zone centre, where the phonon wave vector vanishes and the wavelength tends to infinity. The observed frequency of 2.83 THz is therefore consistent with an optical phonon mode with a wavelength extending throughout the entire 20mer (Brillouin zone centre). This explains the observed reduction in intensity of the B4 band with temperature, since the heating-induced denaturation disrupts the hydrogen bonds of the base pairs and thereby causes the strands to separate, eliminating the phonon mode.
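The statement that the B4 mode is underdamped, surviving roughly five oscillations, follows directly from the fitted parameters, using the decay-time convention of the text (decay time = 1/(γ/2π)):

```python
# consistency check of the quoted B4 numbers
nu0 = 2.83e12              # oscillator frequency, omega_0 / 2*pi (Hz)
gam = 0.50e12              # damping rate,        gamma   / 2*pi (Hz)
period = 1.0 / nu0         # ~353 fs oscillation period
decay = 1.0 / gam          # ~2 ps decay time, as quoted in the text
print(period, decay, decay / period)   # decay/period ~ 5.7 oscillations
```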
The phonon-like mode in double-stranded DNA observed here using OKE spectroscopy has a half-period of 177 fs, which sets the timescale for the breaking of hydrogen bonds between the strands. This result is nonetheless consistent with nuclear magnetic resonance experiments 2 , 38 , fluorescence spectroscopy 39 , 40 and simulations 8 that show much longer timescales. These experiments measure a global conformational change in the double-stranded DNA or the probability of imino-proton exchange in an established bubble 12 . The phonon-like mode as observed here using OKE sets the approach rate in the barrier crossing that leads to bubble formation, and is therefore expected to be a much faster process. Thus, the assumption that interaction of DNA with the surrounding water would dampen and localize the phonon mode 41 is found to be incorrect. This observation is consistent with the previous observation of similar delocalized underdamped phonon-like modes in the protein lysozyme 28 . The OKE spectrum of the AT 20mer ( Fig. 1 ) contains a broad band peaking between 14.2 and 27.8 GHz that has been fitted to a Cole–Cole function. It is tempting to assign this CC band to some overdamped motion of the DNA insensitive to the hybridization state. However, the data analysis shows that the CC band is responsible for intensity between 4 and 9 THz, that is, above the frequency of the highest-frequency mode that can clearly be assigned to the 20mer (the B4 hydrogen-bond phonon mode). This would be unphysical (relaxational processes involving a particular molecule must, by their very nature, be slower than underdamped processes of the same molecule) unless the CC band is associated with liquid water, whose librational frequency is ∼ 25 THz. In fact, OKE and Raman scattering studies on lysozyme 28 , 32 observed a similar band at ∼ 20–40 GHz, which could be shown using neutron scattering to originate in the diffusive translational motion of water molecules in the solvation shell of the protein 32 . Thus, the CC band can similarly be interpreted as the diffusive translational dynamics of water molecules in the solvation shell of the DNA oligomer. In bulk water at 298 K, this band peaks at 245 GHz 28 , showing a slowdown of the dynamics by a factor of 17, consistent with ultrafast solvation experiments 42 . However, this appears inconsistent with the predictions made by the jump model of water diffusion 43 , 44 , which only predicts a slowdown by a factor of 2 near a flat non-interacting surface. The surface of DNA is much more complex, though, with convex and concave areas. This would give rise to a greater slowdown in addition to the extreme broadening seen here 45 . At 348 K, when the DNA is almost denatured, the slowdown of the dynamics is slightly larger (it changes from 17 at 298 K to 21.5 at 348 K). This is most likely caused by hydrogen bonding with the exposed bases. Thus, the results presented here demonstrate that the inter-strand hydrogen-bond modes are coherent delocalized phonon modes even at physiological conditions. As the hydrogen bonds need to be broken for a transcription bubble to form, this result suggests that at least the initial steps of bubble formation are coherent. This is consistent with the recently observed dynamics in enzymes and enzyme–inhibitor complexes, which were similarly shown to be delocalized and coherent (although damped much more strongly than seen here in DNA).
The release of water molecules from the solvation shell of DNA is thought to provide an entropic driving force for DNA–protein and anti-cancer drug binding 43 . The unexpectedly large slowdown of water dynamics by a factor of ∼ 20 observed here will have a large effect on the dynamics and energetics of DNA binding. Methods Sample preparation DNA oligomers (salt-free and reverse-phase cartridge purified) were purchased from Eurofins and cytidine monophosphate from Merck. All the samples were prepared by directly dissolving in phosphate buffer (pH=7, 0.15 M) and filtering using 0.2 μm hydrophilic polytetrafluoroethylene (PTFE) filters (Millipore) to remove dust. Before the measurements, the solutions of DNA oligomers were annealed by equilibrating them in a water bath at 363 K, followed by slow cooling down to the desired temperature. CD spectroscopy was used to determine the temperature-dependent denaturation of the DNA ( Supplementary Note 3 ; Supplementary Figs 6 and 7 ). OKE experimental details The OKE data were recorded in a standard time-domain step-scan pump-probe configuration and Fourier transformed to obtain the frequency-domain reduced depolarized Raman spectrum, as described previously 28 , 29 , 30 , 31 . A laser oscillator (Coherent Micra) provided ∼ 10-nJ pulses (0.8 W average power) with a nominal wavelength of 800 nm at a repetition rate of 82 MHz, providing a 20-fs pulse width in the sample. For the longer-timescale relaxation measurements, a second set of data was taken in a similar configuration using a higher pulse energy of typically 1 μJ provided by a regeneratively amplified laser (Coherent Legend Elite USX) at a repetition rate of 1 kHz with a pulse duration stretched to ∼ 1 ps. Stretching the pulse enables a higher energy to be used without nonlinear effects and reduces the upper bandwidth limit, allowing large step-size scanning without introducing undersampling artefacts. Pump-probe OKE experiments were carried out with delay times from a few femtoseconds to a maximum of 1–4 ns, resulting in a spectral resolution (after Fourier transformation) of better than 1 GHz (<0.033 cm −1 ) 28 , 29 . The sample was contained in a rectangular quartz cuvette (Starna, thickness: 1 mm) held in a brass block that was temperature controlled with a precision of ±0.5 K. OKE data analysis The OKE spectra consist of several broad overlapping bands that are analysed through curve fitting to a number of analytical functions ( Supplementary Note 4 ). At the lowest frequencies, one finds processes associated with diffusive orientational relaxation of the molecules. The diffusive orientational relaxation of DNA in solution has been fitted here with a Debye function, S D ( ω ) = Im[ A D /(1 − iωτ )], where A D is the amplitude of the band, ω is the angular frequency and τ is the relaxation time. The band at slightly higher frequency, which can be assigned to the diffusive relaxation of water in the solvation shell of the DNA, cannot be fitted to a Debye function due to its much greater width. This band was modelled using a Cole–Cole function, S CC ( ω ) = Im[ A CC /(1 + (− iωτ ) α )], where A CC is the amplitude of the band and α is a parameter that accounts for the broadness of the observed band. In the terahertz range, one finds bands from modes that are not diffusive, but critically damped or underdamped. These originate in librations, vibrations and phonon-like modes. These have been fitted using the Brownian oscillator model 28 , S B ( ω ) = Im[ A B ω 0 ²/( ω 0 ² − ω ² − iωγ )], where ω 0 is the undamped oscillator angular frequency and γ is the damping rate.
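Written out as code, the three fit functions are the imaginary parts of simple complex response functions. The sketch below is one plausible realization consistent with the descriptions above (our reconstruction, not the authors' fitting software); the full model sums one Debye term, one Cole–Cole term and four Brownian oscillators.

```python
import numpy as np

def debye(w, A_D, tau):
    """Debye relaxation: Im[A_D / (1 - i*w*tau)]."""
    return np.imag(A_D / (1.0 - 1j * w * tau))

def cole_cole(w, A_CC, tau, alpha):
    """Cole-Cole band: Im[A_CC / (1 + (-i*w*tau)**alpha)]; alpha sets the width."""
    return np.imag(A_CC / (1.0 + (-1j * w * tau) ** alpha))

def brownian(w, A_B, w0, gamma):
    """Brownian oscillator: Im[A_B * w0**2 / (w0**2 - w**2 - i*w*gamma)]."""
    return np.imag(A_B * w0**2 / (w0**2 - w**2 - 1j * w * gamma))

def oke_model(nu, *p):
    """Total spectrum on a frequency axis nu (Hz); p packs the parameters of
    one Debye term, one Cole-Cole term and four Brownian oscillators."""
    w = 2.0 * np.pi * np.asarray(nu)
    total = debye(w, p[0], p[1]) + cole_cole(w, p[2], p[3], p[4])
    for k in range(4):                          # four oscillators, B1-B4
        A, w0, g = p[5 + 3 * k: 8 + 3 * k]
        total += brownian(w, A, w0, g)
    return total

# fit with, e.g., scipy.optimize.curve_fit(oke_model, nu, spectrum, p0=p_init)
```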
Data availability The CD data and the OKE data that support the findings of this study are available in Enlighten: Research Data Repository (University of Glasgow) with the identifier . Additional information How to cite this article: González-Jiménez, M. et al . Observation of coherent delocalized phonon-like modes in DNA under physiological conditions. Nat. Commun. 7:11799 doi: 10.1038/ncomms11799 (2016).
Scientists have shown the weird world of quantum mechanics operating in the molecule of life, DNA. The research, which was carried out by academics from the University of Glasgow and is published today in Nature Communications, describes how double-stranded DNA splits using delocalized sound waves that are the hallmark of quantum effects. DNA contains the code to life and holds a blueprint for each and every living thing on earth. Dedicated enzymes responsible for making new proteins read the code by splitting the double strand in order to access the information. One of the big outstanding questions of biology has been how these enzymes find the initial hole or "bubble" in the double strand to start reading the code. Dr Mario González Jiménez, a researcher on the project, explains: "It is believed that DNA has regions where a specific sequence of bases modifies the stiffness of the double helix favouring the formation of bubbles. This causes a break of the weak bonds between the strands showing the transcription and replication enzymes where to start their task." Dr Gopakumar Ramakrishnan adds: "It had been proposed by theoreticians that such DNA bubbles might behave like sound waves, bouncing around in DNA like echoes in a cathedral. However, the current paradigm in biology is that such sound-like dynamics are irrelevant to biological function, as interaction of a biomolecule with the surrounding water will almost certainly destroy any of these effects." Researchers in the Ultrafast Chemical Physics group at the University of Glasgow carried out experiments with a laser that produces femtosecond laser pulses about a trillion times shorter than a camera flash. This allowed them to detect sound-like bubbles in DNA. They could show that these bubbles whiz around like bullets in a shooting gallery even in an environment very similar to that which can be found in a living cell. Dr Thomas Harwood, who recently graduated from the University of Strathclyde, points out: "The sound waves in DNA are not your ordinary sound waves. They have a frequency of a few terahertz or a billion times higher than a human or a dog can hear!" Prof Klaas Wynne, leader of the research team and Chair in Chemical Physics at the University of Glasgow, explains: "The terahertz sound-like bubbles we have seen alter our fundamental understanding of biochemical reactions. There were earlier suggestions for a role of delocalized quantum phenomena in light harvesting, magnetoreception, and olfaction. The new results now imply a much more general role for sound-like delocalized phenomena in biomolecular processes."
10.1038/ncomms11799
Medicine
Memory concerns? Blood test may put mind at ease or pave way to promising treatments
Diagnostic value of plasma phosphorylated tau181 in Alzheimer's disease and frontotemporal lobar degeneration, Nature Medicine (2020). DOI: 10.1038/s41591-020-0762-2 , nature.com/articles/s41591-020-0762-2 Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-020-0762-2
https://medicalxpress.com/news/2020-03-memory-blood-mind-ease-pave.html
Abstract With the potential development of new disease-modifying Alzheimer’s disease (AD) therapies, simple, widely available screening tests are needed to identify which individuals, who are experiencing symptoms of cognitive or behavioral decline, should be further evaluated for initiation of treatment. A blood-based test for AD would be a less invasive and less expensive screening tool than the currently approved cerebrospinal fluid or amyloid β positron emission tomography (PET) diagnostic tests. We examined whether plasma tau phosphorylated at residue 181 (pTau181) could differentiate between clinically diagnosed or autopsy-confirmed AD and frontotemporal lobar degeneration. Plasma pTau181 concentrations were increased by 3.5-fold in AD compared to controls and differentiated AD from both clinically diagnosed (receiver operating characteristic area under the curve of 0.894) and autopsy-confirmed frontotemporal lobar degeneration (area under the curve of 0.878). Plasma pTau181 identified individuals who were amyloid β-PET-positive regardless of clinical diagnosis and correlated with cortical tau protein deposition measured by 18 F-flortaucipir PET. Plasma pTau181 may be useful to screen for tau pathology associated with AD. Main With the potential development of new disease-modifying treatments for AD 1 , screening tests that can be widely and inexpensively deployed to identify those who might benefit from treatment are urgently needed. Particularly important will be differentiating AD from other related dementias, such as frontotemporal lobar degeneration (FTLD), which can sometimes be misdiagnosed as AD in younger individuals or patients with mild or questionable symptoms, called mild cognitive impairment (MCI). Currently, two technologies are approved for differential diagnosis of AD from other dementias, expert interpretation (visual read) of measurements of brain β-amyloid (Aβ) deposition with Aβ positron emission tomography (Aβ-PET) 2 or Aβ and tau measurements in cerebrospinal fluid (CSF) 3 , 4 . These biomarkers are not widely used because of the invasiveness of lumbar punctures required for obtaining CSF and the high costs of PET imaging, often not reimbursed by third-party payers 2 . Moreover, PET scans are associated with exposure to radiation and access to PET imaging is often restricted to specialized centers. A blood-based test for AD would be a less invasive and less expensive screening tool to identify individuals who are experiencing symptoms of cognitive or behavioral decline and might benefit from more comprehensive CSF or PET testing for diagnostic purposes or before initiation of disease-modifying AD therapy. Examining the performance of a screening diagnostic test for AD in patients with FTLD is important because FTLD is similarly prevalent to AD in individuals who are less than 65 years old at onset and can be difficult to differentiate from AD because of similar clinical features, such as language and executive function impairments 5 . Moreover, at autopsy, insoluble tau deposition is present in both neuropathologically diagnosed AD (AD path ) and a subset of FTLD syndromes (FTLD-tau), including approximately half of behavioral variant frontotemporal dementia (bvFTD), most nonfluent variant primary progressive aphasia (nfvPPA) and almost all patients with progressive supranuclear palsy (PSP) 6 . 
Whereas in AD path tau pathology is associated with elevated concentrations of CSF tau species, including (total) tau and phosphorylated tau at residue 181 (pTau181) 7 , 8 , in FTLD CSF tau and pTau181 can be either elevated or decreased 9 . Insoluble tau deposition can be visualized in the brains of living individuals with AD using flortaucipir (FTP)-PET, a tracer that binds with high affinity to mixed 3 and 4 microtubule binding domain repeat (3R/4R) tau that is found in AD path neurofibrillary tangles 10 and can distinguish clinical AD (AD clin ) from other diseases 11 . However, FTP has low affinity for the predominantly 3R or 4R tau deposits found in most FTLD, limiting its usefulness 9 . In contrast, levels of neurofilament light chain (NfL), a marker of axonal damage measurable in CSF, plasma and serum 12 , 13 , 14 , are increased in FTLD and correlate with survival 15 , clinical severity and brain volume 16 , 17 , 18 , 19 . CSF and serum NfL concentrations are also elevated in AD clin , but less so than in FTLD 13 , 17 , 20 , 21 . As in FTLD, serum NfL is predictive of cortical thinning and rate of disease progression in AD clin 22 , 23 . Recent studies have shown that the Aβ42/Aβ40 ratio measured in plasma can differentiate between healthy controls and patients with AD using immunoprecipitation mass spectrometry (IP–MS), but this technology is not accessible to most clinical laboratories 24 , 25 , 26 . New ultrasensitive single molecule array (Simoa) antibody-based approaches measuring Aβ in blood are easier to implement but do not yet have sufficient diagnostic precision to be useful clinically 26 . Elevated levels of total tau measured with Simoa technology in plasma are associated with cognitive decline 27 , although there is substantial overlap between concentrations measured in normal aging and AD, limiting the diagnostic usefulness of such assays 28 , 29 , 30 . Recently, a new plasma pTau181 assay was found to differentiate AD clin from healthy controls 31 . We tested the differential diagnostic ability of plasma pTau181 measurements to differentiate MCI and AD clin relative to a variety of clinical FTLD phenotypes. A subset of diagnoses was verified using neuropathological examination at autopsy or by the presence of autosomal dominant mutations that lead to specific types of FTLD pathology, including mutations in the tau gene ( MAPT ) that lead to FTLD-type pure 4R tau or to AD-like mixed 3R/4R tau deposition in the brain. We also compared plasma pTau181 to current clinical standards for dementia differential diagnosis, Aβ-PET and CSF pTau181, as well as to the research biomarkers plasma NfL, plasma Aβ42 and Aβ40, FTP-PET and brain atrophy measured with magnetic resonance imaging (MRI), to better evaluate the biological basis for elevated plasma pTau181. Results Participant characteristics Baseline demographics, clinical assessments, imaging measures and fluid biomarker levels are shown in Table 1 . The control group (HC) and the MCI group were younger than the PSP and nfvPPA groups. Plasma pTau181 and NfL concentrations were similar in men and women. Plasma NfL concentrations correlated with age ( ρ = 0.19, P = 0.006) and with time between blood draw and death in autopsy cases ( ρ = −0.27, P = 0.009); pTau181 concentrations were not correlated with either value.
Plasma pTau181 concentrations were associated with the clinical dementia rating scale sum of boxes score (CDRsb) ( β = 0.184, P = 0.004, Supplementary Table 1 ), as were NfL concentrations ( β = 0.456, P < 0.0001, Supplementary Table 2 ). FTP-PET binding was highest in AD clin cases compared to MCI, corticobasal syndrome (CBS), PSP, bvFTD and nfvPPA. Pittsburgh Compound B (PiB) Aβ-PET binding was highest in AD clin . Overall, 27% of controls were Aβ-PET positive (visual read). CSF pTau181 was higher in AD clin compared to every other diagnosis, except for MCI and semantic variant primary progressive aphasia (svPPA). Table 1 Participant characteristics, primary cohort Full size table Plasma pTau181 and NfL comparisons by clinical diagnostic group Plasma pTau181 concentrations were elevated in AD clin compared to all other groups (Fig. 1a and Table 1 ). Plasma NfL concentrations were elevated in CBS, PSP and bvFTD compared to AD clin and MCI as well as controls (Fig. 1b ). NfL concentrations were also elevated in nfvPPA and svPPA as compared to controls and MCI. NfL was increased in AD compared to HC. The ratio of pTau181/NfL was decreased in all FTLD diagnoses compared to controls, AD clin and patients with MCI (Extended Data Fig. 1 ). The individuals with AD-associated logopenic variant primary progressive aphasia (lvPPA) had increased pTau181 levels compared to those with FTLD-associated nfvPPA or svPPA and to controls (Fig. 1c ). An age-adjusted plasma pTau181 cutoff of 8.7 pg ml −1 differentiated AD clin from clinical FTLD with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.894 ( P < 0.0001, Fig. 1d and Table 2 ). The plasma Aβ42/Aβ40 ratio did not differ between the clinical diagnostic groups (Extended Data Fig. 2a ), but was able to differentiate between Aβ-PET-positive and negative cases (AUC of 0.768, P < 0.0001, Extended Data Fig. 2b and Table 2 ) and FTP-PET-positive and negative cases (AUC of 0.782, P < 0.0001, Extended Data Fig. 2c and Table 2 ). Fig. 1: Plasma pTau181 and plasma NfL per clinical diagnosis. a , pTau181 levels were elevated in AD clin compared to non-AD clinical diagnoses ( n = 362). HC, healthy control. b , Plasma NfL was lower in HCs and patients with MCI and AD compared to CBS, PSP and bvFTD, and NfL levels in HC and MCI were lower than in patients with nfvPPA and svPPA ( n = 213). c , Plasma pTau181 levels are elevated in lvPPA, which is typically caused by AD, as compared to levels in nfvPPA and svPPA, which are typically caused by FTLD, and in HC ( n = 136). d , Plasma pTau181 concentrations were increased in AD clin cases compared to FTLD clinical diagnoses and could differentiate between these groups ( n = 246). The notch displays the 95% confidence interval (CI) around the median. The shape reflects amyloid-PET status. *** P < 0.0001, ** P < 0.01, * P < 0.05. Full size image Table 2 Diagnostic accuracy of plasma pTau181, NfL, Aβ42/Aβ40 ratio and CSF pTau181 Full size table Plasma pTau181 and NfL in pathology-confirmed cases and FTLD mutation carriers Neuropathological diagnosis was available in 82 cases. Owing to potential effects of disease severity, analyses were adjusted for age and CDRsb at the time of blood draw. Median plasma pTau181 concentrations were higher in AD path ( n = 15, 7.5 ± 8 pg ml −1 ) compared to FTLD-tau ( n = 52, 2.3 ± 3 pg ml −1 , P < 0.0001) and FTLD-TAR DNA-binding protein (FTLD-TDP) ( n = 15, 2.1 ± 2 pg ml −1 , P < 0.0001, Fig. 2a ).
Plasma pTau181 differentiated AD path from the combined FTLD-TDP and FTLD-tau group (AUC of 0.878, P < 0.0001, Fig. 2b ), from FTLD-TDP alone (AUC of 0.947, P < 0.0001) and from FTLD-tau alone (AUC of 0.858, P < 0.0001, Table 2 ). Plasma NfL was a poor discriminator of AD path from neuropathologically diagnosed FTLD (Table 2 ). Plasma pTau181 was associated with autopsy-defined Braak stage ( β = 0.569, P < 0.0001) and was higher in Braak stage 5–6 ( n = 16, 4.9 ± 4 pg ml −1 ) compared to Braak stage 0 ( n = 10, 2.1 ± 2 pg ml −1 , P = 0.003), Braak stage 1–2 ( n = 42, 2.2 ± 2 pg ml −1 , P < 0.0001) and Braak stage 3–4 ( n = 13, 2.3 ± 3 pg ml −1 , P = 0.009, Fig. 2c ). NfL did not differ by Braak stage (Extended Data Fig. 3 ). Fig. 2: Plasma pTau181 in pathology-confirmed cases and MAPT mutation carriers. a , Levels of pTau181 are elevated in AD path ( n = 15, 7.5 ± 8 pg ml −1 ), compared to FTLD-tau ( n = 53, 3.4 ± 3 pg ml −1 , P < 0.0001) and FTLD-TDP ( n = 15, 2.1 ± 2 pg ml −1 , P < 0.0001). b , Plasma pTau181 levels differentiated between AD path and pathology-confirmed FTLD (FTLD-tau and FTLD-TDP combined). c , Plasma pTau181 was increased in Braak stage 5–6 compared to Braak stage 0, stage 1–2 and stage 3–4. d , Concentrations of pTau181 were increased in MAPT mutation carriers with mixed 3R/4R tau pathology ( n = 17, 4.4 ± 4 pg ml −1 ), compared to those with 4R pathology ( n = 44, 2.2 ± 2 pg ml −1 , P = 0.024) and HCs ( n = 44, 2.0 ± 2 pg ml −1 , P = 0.011). Biomarker concentrations are shown as median ± interquartile range, *** P < 0.0001, * P < 0.05. Full size image Seventy-six individuals were FTLD-causing mutation carriers (61 MAPT , 5 GRN and 10 C9orf72 ). There was no difference in pTau181 concentrations between the mutation carriers (grouped by mutated gene) or between the mutation carrier groups and normal controls (Extended Data Fig. 4 ). Plasma pTau181 levels were increased in MAPT mutation carriers with AD-like mixed 3R/4R tau pathology ( n = 17, 4.4 ± 4 pg ml −1 , Fig. 2d ), compared to those with pure 4R tau pathology 32 ( n = 44, 2.2 ± 2 pg ml −1 , P = 0.024) and controls ( n = 44, 2.0 ± 2 pg ml −1 , P = 0.011). Plasma pTau181 differentiated AD path from neuropathologically diagnosed FTLD and mutation carriers combined (AUC of 0.854, P < 0.0001, Table 2 ). Association between plasma pTau181 and other fluid biomarkers Plasma pTau181 and plasma NfL concentrations were associated in combined AD clin /MCI cases ( β = 0.66, P < 0.0001, Fig. 3a ), but not in the whole patient sample. CSF pTau181 was associated with plasma pTau181 in the whole sample ( β = 0.51, P < 0.0001; n = 74, Extended Data Fig. 5 ) and both within the AD/MCI group ( β = 0.41, P = 0.042; n = 25) and the FTLD group ( β = 0.49, P < 0.0001; n = 29), but not in controls. CSF pTau181 concentrations were higher in AD clin (45.8 ± 31 pg ml −1 ) compared to FTLD (22.1 ± 8 pg ml −1 , P < 0.0001) and differentiated the two clinical diagnoses (AUC of 0.931, P < 0.0001, Tables 1 and 2 ). Fig. 3: Association of pTau181 and NfL, PiB-PET SUVR, FTP-PET SUVR and amyloid and FTP-PET status. a , Plasma pTau181 and plasma NfL measures were not correlated. Plasma pTau181 was increased in amyloid-positive cases and plasma NfL in FTLD cases. The dashed lines represent the uncorrected cutoff value for amyloid positivity (3.6 pg ml −1 ) and the median concentration of NfL (27.2 pg ml −1 , n = 213). The color coding shows Aβ-PET status and the shape coding shows the diagnostic group.
b , The association between plasma pTau181 and PiB-PET SUVRs ( β = 0.75, P < 0.0001). Color coding is per Aβ-PET status by visual read and shape coding is per clinical diagnosis ( n = 124). c , The association between plasma pTau181 and FTP-PET SUVRs ( β = 0.73, P < 0.0001). Color coding is per Aβ-PET status by visual read and shape coding is per clinical diagnosis ( n = 97). d , Plasma pTau181 concentrations were increased in Aβ-PET-positive cases and could differentiate between Aβ-PET-positive and negative cases ( n = 185; Aβ status was determined based on visual read). e , Plasma pTau181 concentrations were increased in FTP-PET-positive cases and could differentiate between FTP-PET-positive and negative cases (based on binarized cortical SUVR values using a 1.22 threshold; n = 97). The notch displays the CI around the median. *** P < 0.0001. Full size image Plasma pTau181 and NfL associations with tau (FTP)-PET and Aβ-PET There were strong linear relationships between plasma pTau181 concentrations and PiB standardized uptake value ratio (SUVR) ( β = 0.75, P < 0.0001, Fig. 3b ) as well as global cortical FTP SUVR ( β = 0.73, P < 0.0001, Fig. 3c ). Plasma NfL concentration was not related to either PET measure. An age-corrected plasma pTau181 cutoff value for Aβ-PET positivity of 8.0 pg ml −1 discriminated between all individuals who were Aβ-PET-positive and negative with 0.889 sensitivity, 0.853 specificity and an AUC of 0.914 ( P < 0.0001, Fig. 3d and Table 2 ). Plasma pTau181 also differentiated between Aβ-PET-positive and negative cases within the healthy control and MCI groups individually. In controls, the AUC was 0.859 ( P < 0.0001, 11 Aβ-PET-positive and 29 Aβ-PET-negative). Within the MCI group, the AUC was 0.944 ( P < 0.0001, 18 Aβ-PET-positive and 21 Aβ-PET-negative; Table 2 and Extended Data Fig. 6 ). When a cortical FTP-SUVR diagnostic threshold 33 of 1.22 was applied to designate all cases as FTP-PET-positive or negative, plasma pTau181 was also a good discriminator of FTP-PET status (AUC of 0.919, P < 0.0001, Fig. 3e ). In the MCI cases alone, the AUC for FTP-PET status was 0.977 ( P < 0.0001, 11 FTP-PET-positive and 20 FTP-PET-negative; Table 2 ). Similar relationships between plasma pTau181 and FTP-PET values were obtained with the independent cohort from an Eli Lilly AD clin /MCI clinical research study ( n = 42; Supplementary Results and Supplementary Table 3 ). Plasma NfL did not differentiate between Aβ-PET-positive and negative cases (AUC of 0.559, P = 0.276) or between FTP-PET-positive and negative cases (AUC of 0.606, P = 0.159, Table 2 ). Levels of pTau181 were associated with FTP-PET-estimated Braak stage 9 , 34 , 35 ( β = 0.610, P < 0.0001) and were higher in FTP-PET Braak stage 5–6 ( n = 54, 9.2 ± 4 pg ml −1 ) and Braak stage 3–4 ( n = 8, 6.4 ± 3 pg ml −1 ) compared to Braak stage 0 ( n = 26, 2.4 ± 2 pg ml −1 , both P < 0.0001). NfL did not differ by FTP-estimated Braak stage (Extended Data Fig. 7 ). Voxelwise analyses of FTP-PET and gray matter volume in relation to plasma pTau181 and NfL Concentrations of pTau181 were strongly associated with FTP-PET SUVR values (Spearman’s ρ values exceeding 0.70 in peak regions) in the frontal, temporoparietal and posterior cingulate cortices and precuneus regions (Fig. 4a ). Associations remained significant in the patients with only AD clin /MCI, although with slightly lower ρ values. There were insufficient data to perform the analyses in the FTLD group separately ( n = 18).
There was no association between NfL concentrations and FTP-PET uptake in the whole group. In the patients with only AD clin /MCI there were weak correlations in the right hemisphere that did not survive multiple-comparisons corrections, predominantly in the frontal and insular cortex and in the right temporal horn (reaching ρ ~0.6 in the insula; Fig. 4a ). Fig. 4: Voxelwise correlations of plasma pTau181 and plasma NfL with FTP-PET and gray matter atrophy. a , Regions of correlation between plasma pTau181 concentration and FTP-PET uptake were strongest in AD-specific brain regions: frontal and temporoparietal cortex, posterior cingulate and precuneus regions ( ρ ~0.75). There was no correlation of FTP-PET with plasma NfL in the whole cohort. In the AD clin /MCI group, correlations existed in the frontal and insular cortex ( ρ ~0.6). b , Negative correlations between plasma pTau181 and gray matter volume were highest in the bilateral temporal lobe and remained in the AD clin /MCI group, but no correlation was found in the FTLD group. The correlation between plasma NfL and gray matter volume was highest in the right putamen and insular region ( ρ ~−0.5). The association remained in the FTLD group but was not found in the AD clin /MCI group. All correlations were thresholded on the basis of an uncorrected P < 0.001 at the voxel level and family-wise error-corrected P < 0.05 at the cluster level. Full size image High plasma pTau181 concentrations correlated with lower gray matter volume in the bilateral medial temporal lobe, the posterior cingulate cortex and precuneus ( ρ = −0.35, P < 0.001, Fig. 4b ). This association was driven by the patients with AD clin /MCI, who showed the highest correlation coefficients in these regions ( ρ = −0.55, P < 0.001). There was no association between plasma pTau181 and gray matter volume in patients with FTLD. In the combined group there were strong negative correlations between NfL and gray matter volume in the right putamen and insula (ρ ~−0.5, P < 0.001) and to a lesser extent with gray matter volume in the medial prefrontal cortices ( ρ ~−0.45, P < 0.001). In the FTLD group, the association was maximal in the right putamen and insula ( ρ ~−0.4, P < 0.001), with lower correlations present in the frontal and lateral temporal regions and right precuneus (Fig. 4b ). Plasma pTau181 and NfL associations with clinical disease severity and cognitive function Levels of pTau181 showed strong associations with baseline CDRsb scores ( β = 0.486, P < 0.0001), functional activities questionnaire (FAQ) ( β = 0.541, P < 0.0001) and modified Rey figure recall ( β = −0.585, P < 0.0001) only in the AD clin /MCI group and not in the control or FTLD groups. In contrast, NfL showed associations with CDRsb and neuropsychological performance in both the AD clin /MCI and FTLD groups ( β = 0.472, P < 0.0001 for CDRsb in AD clin /MCI; β = 0.244, P < 0.010 in FTLD; Supplementary Tables 1 and 2 ). In longitudinal analyses, a higher baseline pTau181 was associated with faster rates of decline in patients with AD clin /MCI in CDRsb, mini-mental state exam (MMSE), Rey recall, Boston naming test (BNT) and FAQ (Supplementary Table 4 ), whereas higher baseline NfL predicted faster decline over time in patients with FTLD in MMSE, phonemic fluency and the trail-making test (Supplementary Table 5 ). 
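The longitudinal analyses above used linear mixed-effects models with random intercepts per participant and a baseline biomarker-by-time interaction (see Methods). Below is a minimal sketch of such a model on synthetic data; the variable names, covariate set and effect sizes are assumptions for illustration, not study data or the authors' code.

```python
# Linear mixed-effects model: random intercept per participant, with the
# ln(pTau181)-by-time interaction capturing biomarker-related decline.
# All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, visits = 60, [0.0, 1.2, 2.4]     # ~1.2 years between visits
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), len(visits)),
    "years": np.tile(visits, n_subj),
    "ln_ptau": np.repeat(rng.normal(1.5, 0.5, n_subj), len(visits)),
    "age": np.repeat(rng.uniform(55, 80, n_subj), len(visits)),
})
# Hypothetical outcome: higher baseline pTau181 -> faster CDRsb worsening.
df["cdr_sb"] = (0.5 + 0.2 * df["years"]
                + 0.4 * df["ln_ptau"] * df["years"]
                + rng.normal(0, 0.5, len(df)))

model = smf.mixedlm("cdr_sb ~ years * ln_ptau + age", df, groups=df["subject"])
result = model.fit()
print(result.summary())  # the years:ln_ptau row is the biomarker-by-time effect
```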
Discussion The main findings of this study are that plasma pTau181 concentrations differentiated patients with clinically diagnosed AD from those with FTLD and elderly controls, and that plasma pTau181 concentrations were strongly associated with currently approved AD-biomarker measurements, including Aβ-PET and CSF pTau181, regardless of clinical diagnosis. Plasma pTau181 also differentiated autopsy-diagnosed AD from FTLD, with slightly lower accuracy than for clinically diagnosed or PET-defined cases. Plasma pTau181 accurately identified healthy elderly controls and individuals with MCI who had a positive Aβ-PET scan, suggesting underlying AD path changes, and also differentiated between individuals with and without elevated cortical tau deposition, measured by FTP-PET. Elevated pTau181 concentrations correlated with higher FTP-PET uptake and more severe gray matter atrophy in AD-related brain regions. Plasma pTau181 tracked the severity of cortical AD tau pathology, as indexed by Braak stage measured at autopsy 11 , 36 . Plasma pTau181 also predicted the rate of decline on clinical measures of disease severity and neuropsychological status over 2 years of follow-up in AD clin /MCI. These findings were specifically related to plasma pTau181, as plasma concentrations of NfL, a nonspecific biomarker of neurodegeneration, were not related to AD diagnosis, Aβ or FTP-PET signal. As expected, NfL concentrations were associated with measures of disease severity, cognitive function and gray matter atrophy most strongly in patients with FTLD 37 . Together, these data suggest that plasma pTau181 may be a useful screening tool for identifying the AD pathobiological process in individuals at risk of cognitive decline or with cognitive impairment. Aβ-PET has established clinical utility for differential diagnosis of AD clin from other dementias, is associated with more severe clinical and cognitive decline 38 and has been validated as a measure of AD neuropathology 39 , 40 . Plasma pTau181 accurately differentiated between AD and FTLD, similarly to the previously reported diagnostic accuracy of Aβ-PET 41 . This suggests that the diagnostic value of plasma pTau181 could be comparable to Aβ-PET in patients who are symptomatic with MCI or dementia. We found that increased plasma pTau181 concentrations were associated with Aβ-PET positivity even in cognitively healthy controls; however, plasma pTau181 is unlikely to be a direct measure of Aβ pathology. Others have found that there is often tau accumulation in healthy elderly controls who are Aβ-PET positive, suggesting that amyloid positivity is a hallmark of Alzheimer pathology and may reflect not only amyloid, but also presymptomatic tau accumulation 42 . As plasma pTau181 was related to regional tau deposition measured by Braak stage at autopsy or estimated by FTP-PET uptake during life, this might explain the ability of pTau181 to differentiate between Aβ-PET-positive and negative controls. A limitation of our study was that few healthy controls had FTP-PET data, so we could not directly test the relationship of pTau181 to FTP-PET status in these individuals. Whereas CSF total tau has little diagnostic value in differentiating FTLD from AD 43 , CSF pTau181 is able to differentiate clinically diagnosed AD from FTLD with a sensitivity and specificity of approximately 70–80% 44 , 45 , which is similar to the accuracy found in this study using plasma pTau181.
Using autopsy data, we determined a specific association of elevated plasma pTau181 with underlying mixed 3R/4R tau pathology that is characteristic of, but not specific to, AD. We found elevated pTau181 concentrations in AD path , which is characterized by neurofibrillary tangles consisting of mixed 3R/4R tau, and low pTau181 concentrations in sporadic FTLD-tau, which is associated with insoluble deposits of either 3R (such as Pick’s disease) or 4R tau (such as corticobasal degeneration or PSP) pathology. To test the hypothesis that plasma pTau181 concentrations specifically reflect mixed 3R/4R tau pathology, we measured samples from individuals with rare MAPT mutations (R406W and V337M) 32 , 46 , 47 . These mutations lead to FTLD pathology with accumulation of neurofibrillary tangles consisting of 3R/4R tau, similar to those seen in AD path , and often cause a clinical syndrome resembling AD clin , but notably without Aβ pathology. Individuals with MAPT mutations that lead to 3R/4R tau pathology had elevated plasma pTau181 concentrations compared to individuals with other MAPT mutations that lead to pure 4R tau pathology (such as P301L) and healthy controls. Although this may be of interest mechanistically, it is unlikely to affect the utility of plasma pTau181 as an AD screening diagnostic test, because MAPT R406W and V337M mutations are exceedingly rare and overall plasma pTau181 levels were lower in these individuals than in patients with AD. Together, these results suggest that both CSF and plasma pTau181 reflect 3R/4R tau accumulation in the brain that is usually associated with AD pathology. Plasma pTau181 concentrations were correlated with regional FTP-PET uptake, which is thought to reflect AD neurofibrillary tangle deposition 11 , 48 , 49 . In support of this hypothesis, we found an association between plasma pTau181 and Braak stage estimated by FTP-PET, as well as with neuropathological Braak stage. The association of pTau181 with FTP-PET was stronger than with neuropathological Braak staging. Even though plasma pTau181 could differentiate late-stage tau pathology (Braak 5–6) from other stages, it could not differentiate early and moderate stages (Braak 1–2 and 3–4) from the group without pathology (Braak 0). This could indicate a limitation in the sensitivity of plasma pTau181 for AD pathology, but could also reflect differences in sample size, the more comprehensive anatomical coverage with PET and additional variability introduced by the delay from blood draw to autopsy in the pathological Braak stage analysis, which was not present in the FTP-PET Braak stage analysis 50 . The increased pTau181 concentrations in AD clin and their strong association with patterns of brain atrophy in AD suggest that plasma pTau181 is also associated with AD-related neuronal loss. More detailed comparisons of neuronal cell loss measured by neuropathology and plasma pTau181 concentration will be necessary to test this hypothesis. Plasma Aβ measured on an automated platform has recently been demonstrated to be a promising and cost-effective alternative to Aβ-PET for identifying brain amyloidosis in individuals with, or at risk for, AD 51 . We found that the fold change in mean plasma pTau181 concentration between individuals who were Aβ-PET positive and negative in our study exceeded the fold change found by others using the plasma Aβ42/Aβ40 ratio, and the overlap between groups seemed much smaller 24 , 25 , 26 , 51 .
Although we did not have access to the same automated Aβ measurement platform or to IP–MS, we measured plasma Aβ42/Aβ40 by Simoa and found a much larger fold difference between groups for pTau181 than for Aβ42/Aβ40. The Aβ42/Aβ40 ratio was also less accurate than pTau181 in differentiating between individuals who were Aβ-PET positive and negative. Future comparisons with more accurate plasma amyloid tests will be necessary to determine the relative value of plasma amyloid compared to pTau181 measurements. This study has a number of important limitations. There were several outlier high plasma pTau181 values in individuals from clinical diagnostic groups not expected to have elevated pTau181: two controls and one each with CBS, PSP, bvFTD, nfvPPA and svPPA. These findings may reflect previously undetected brain 3R/4R tau deposition. In support of this interpretation, one of those controls was Aβ-PET positive, the individual with CBS had unknown amyloid status and could have had AD pathology 52 , the individual with PSP had autopsy data showing AD co-pathology and the individual with bvFTD was a carrier of a MAPT mutation associated with tau pathology. We also had fewer individuals with AD path than individuals with autopsy-confirmed FTLD-tau, which might have influenced the results. Verification of the diagnostic performance of plasma pTau181 in a larger number of autopsy-confirmed cases will be important. We had little FTP-PET data in healthy controls and individuals with FTLD; therefore, we were not able to examine voxelwise associations with pTau181 in these individuals. FTP-PET imaging in presymptomatic individuals with high pTau181 levels who are Aβ-PET positive or Aβ-PET negative would help to determine whether pTau181 associates primarily with FTP-PET or Aβ-PET status. The sample sizes were balanced across clinical diagnoses, but more participants overall were in the FTLD spectrum. A larger number of controls and patients with MCI and AD would have offset this imbalance, although the accuracy of pTau181 in these groups has been demonstrated in a previous study 31 . Finally, neither plasma pTau181 nor NfL was able to differentiate between individuals with autopsy-confirmed FTLD-tau and FTLD-TDP. More work will be necessary to identify effective biomarkers for this context of use. This study provides strong evidence that plasma pTau181 concentration could be a useful screening blood test to identify underlying mixed 3R/4R tau pathology consistent with AD in individuals who have symptoms of cognitive or behavioral decline, in clinical settings where diagnostic status may be uncertain. Since Aβ-PET scans are expensive and require specialized imaging centers, plasma pTau181 may be a more readily accessible tool to identify individuals who should undergo more detailed diagnostic testing with this approved technology. Alternatively, given the strong relationship between plasma pTau181 and FTP-PET uptake, plasma pTau181 could be useful as a screening tool in clinical trials employing FTP-PET to measure treatment effects of new AD therapies. Methods Participants This retrospective study included 404 participants from three independent cohorts (Table 1 and Supplementary Table 3 ): a primary cohort of 362 individuals (301 from the University of California San Francisco (UCSF) Memory and Aging Center and 61 from the Advancing Research and Treatment for Frontotemporal Lobar Degeneration (ARTFL) consortium) and a secondary cohort of baseline data from 42 participants in an Eli Lilly-sponsored research study (ClinicalTrials.gov: NCT02624778 ).
Participants were only included in the study when their plasma pTau181 measurement was successful. Aβ-PET was available in 226 participants; 138 had FTP-PET (79 AD clin /MCI and 18 FTLD in the primary cohort, and 41 AD clin /MCI in the secondary cohort); 220 participants had MRI (71 AD clin /MCI, 110 FTLD and 39 HC); and 74 individuals had previous CSF pTau181 concentrations available (20 HC, 25 AD clin /MCI and 29 FTLD, with an average time between plasma and CSF sampling of 1.3 ± 2 years). The primary cohort consisted of 362 individuals: 70 HCs; 103 individuals on the AD spectrum (56 AD clin per National Institute on Aging and Alzheimer’s Association (NIA-AA) criteria 53 , including 14 lvPPA, and 47 MCI 54 ); and 190 patients meeting clinical criteria for a syndrome in the FTLD spectrum (39 CBS 52 , 48 PSP 55 , 50 bvFTD 56 , 27 nfvPPA and 26 svPPA 57 ). These included 76 carriers of FTLD-causing mutations: 61 MAPT, 5 GRN and 10 C9orf72 . The MAPT mutation carrier group included 17 individuals with mutations that produce 3R/4R tau (10 V337M and 7 R406W) and 44 with mutations that produce 4R tau (22 P301L, 11 N279K, 4 IVS9-10G>T, 3 IVS10+16C>T, 1 S305S, 1 S305I and 2 S305N) 32 . All individuals with AD and 38 of the 47 individuals with MCI had either Aβ-PET, MRI, autopsy or genetic biomarker verification. Overall, 82 cases had an autopsy-confirmed diagnosis: 15 AD path , 52 FTLD-tau and 15 FTLD-TDP. The average time between blood draw and death in these cases was 2.7 ± 2 years. Healthy controls were elderly individuals with normal neurological examinations, neuropsychological testing and CDR 58 scores. Longitudinal measures of disease severity, neuropsychological testing and executive function were available at baseline and at two follow-up visits (baseline, n = 221; time point two, n = 115; time point three, n = 40) with an average of 1.2 ± 0.1 years between measurements. Participants provided written informed consent at the time of recruitment. The study was approved by the institutional review board of each research center from which the individual was recruited. Clinical evaluation Disease severity was assessed using the CDRsb 58 and MMSE 59 . Neuropsychological measures included a trail-making test 60 , a color trails test 61 , phonemic fluency 62 , the BNT 63 , Modified Rey Figure copy and recall 60 and the Geriatric Depression Scale (GDS) 64 . Disability was assessed using the FAQ 65 and the Schwab and England Activities of Daily Living (SEADL) scale 66 . Statistical analysis A two-sided P < 0.05 was considered statistically significant; P values were corrected for multiple comparisons using the false discovery rate when appropriate 67 . Biomarker concentrations were not normally distributed, so natural log-transformed data or nonparametric statistics were used. Differences in biomarker values and in clinical and neuroimaging variables were assessed with one-way analysis of variance or Kruskal–Wallis tests, with Bonferroni multiple-comparisons correction. Associations between pTau181 and NfL concentrations, FTP-PET cortical SUVR values, PiB-PET cortical SUVR values and clinical measures were assessed using linear regression models, corrected for false discovery rate 67 . ROC analyses determined the ability of plasma pTau181 and NfL to differentiate between diagnostic groups. Youden cutoff values were used for sensitivity and specificity 68 . All analyses were corrected for age, CDRsb and time between blood draw and death as appropriate. There were no differences in plasma biomarker levels between sexes.
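The ROC analyses and Youden cutoffs described above can be sketched as follows. The scores and labels below are synthetic stand-ins, and the age adjustment applied in the paper is omitted for brevity; this is not the authors' code.

```python
# ROC curve, AUC and Youden cutoff (maximizing sensitivity + specificity - 1)
# on synthetic "plasma pTau181" scores for FTLD-like (0) vs AD-like (1) cases.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(2.5, 1.0, 190), rng.normal(8.5, 3.0, 56)])
labels = np.concatenate([np.zeros(190), np.ones(56)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                 # Youden's J statistic
print(f"AUC = {auc:.3f}")
print(f"Youden cutoff = {thresholds[best]:.1f} pg/ml, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```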
Linear mixed-effects models evaluated the relationship of baseline ln(pTau181) with changes in clinical variables. Models allowed random intercepts at the individual level, were adjusted for age, sex, time differences from specimen collection date to clinical/neuropsychological testing and disease duration, and included a biomarker-by-time interaction. Statistical analyses were performed using SPSS (v.25; SPSS, IBM), Stata (Stata 14.0, StataCorp) and R (v.3.5.1). Fluid biomarker methods Plasma pTau181 measurements Blood samples were obtained by venipuncture in EDTA tubes for plasma, following the ADNI protocol 69 . Within 60 min, the samples were centrifuged at 3,000 r.p.m. at room temperature, aliquoted and stored at −80 °C. Plasma pTau181 levels were measured in duplicate by electrochemiluminescence using a proprietary pTau181 assay (Lilly Research Laboratory) as previously described 31 . Briefly, samples were diluted 1:2 and 50 μl of diluted sample was used for the assay. The assay was performed on a streptavidin small spot plate using the Meso Scale Discovery platform. Biotinylated AT270 (an anti-pTau181 tau antibody, mouse IgG1) was used as the capture antibody and SULFO-TAG-Ru-LRL (an anti-tau monoclonal antibody developed by Lilly Research Laboratory) as the detector. The assay was calibrated using a recombinant tau (4R2N) protein that was phosphorylated in vitro using a reaction with glycogen synthase kinase-3 and characterized by MS. Overall, 41 of the included samples were measured below the lower limit of quantification (LLOQ) of 1.4 pg ml −1 , none of which were in the AD phenotype group. One sample from an Aβ-PET negative normal control had a pTau181 concentration of 49.1 pg ml −1 , almost 12 times the average pTau181 value. This individual was excluded from all analyses. The average percentage coefficient of variation (%CV) across samples was 7.3%; the %CV was 5.6% for the low quality control and 4.6% for the high quality control. Plasma NfL measurements Plasma NfL concentrations were measured at three sites (Novartis Institutes for Biomedical Research, Quanterix Corp and UCSF) using a commercially available NfL kit on the Simoa HD-1 platform. Samples were diluted 4× (dilution automated by the HD-1 analyzer) and measured in duplicate. The average interassay variation was 4.9% and all samples were measured well above the kit LLOQ of 0.174 pg ml −1 . One sample had an NfL concentration of 713 pg ml −1 , almost 20 times the average NfL value. This value was excluded from all analyses. In a previous study, an overlapping set of samples from 186 participants was analyzed separately at Novartis and at Quanterix, showing that plasma NfL concentrations were highly correlated ( ρ = 0.98, P < 0.001). The samples analyzed at the two sites also had comparable means and s.d. (21.8 ± 35 pg ml −1 at Quanterix and 20.2 ± 34 pg ml −1 at Novartis). Plasma Aβ42 and Aβ40 measurements Plasma Aβ42 and Aβ40 were measured at UCSF using the Neurology 3-plex A kit from Quanterix, which measures Aβ42, Aβ40 and tau. Samples were diluted 4× (dilution automated by the HD-1 analyzer) and measured in duplicate. The average interassay variation was 6.4% for Aβ42 and 2.9% for Aβ40, and all samples were measured well above the kit LLOQ of 0.142 pg ml −1 for Aβ42 and 0.675 pg ml −1 for Aβ40. CSF pTau181 measurements CSF pTau181 was measured in duplicate with the INNO-BIA AlzBio3 (Fujirebio) platform by a centralized laboratory.
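The precision figures quoted above (average %CV across samples measured in duplicate) follow from a simple computation, sketched here on illustrative values rather than assay data.

```python
# Percentage coefficient of variation (%CV) for duplicate measurements:
# sample standard deviation divided by the mean, per sample, then averaged.
import numpy as np

duplicates = np.array([
    [2.1, 2.3],
    [8.4, 7.9],
    [3.6, 3.5],
    [12.0, 13.1],
])  # pg/ml; one row per sample, two replicate wells (illustrative values)

cv_percent = duplicates.std(axis=1, ddof=1) / duplicates.mean(axis=1) * 100
print("per-sample %CV:", np.round(cv_percent, 1))
print(f"average %CV: {cv_percent.mean():.1f}%")
```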
The researchers who performed the fluid biomarker analyses were blinded to the clinical information and reference standard results of the participants during sample measurement. Imaging methods MRI acquisition Structural MRIs were available for 221 participants and were acquired at UCSF on a 3T Siemens Tim Trio or a 3T Siemens Prisma Fit scanner at an average of 20 d (± 58) from the plasma sample. T1-weighted magnetization-prepared rapid gradient echo sequences were used, with similar acquisition parameters on the two scanners (sagittal slice orientation; slice thickness of 1.0 mm; 160 slices per slab; in-plane resolution of 1.0 × 1.0 mm; matrix of 240 × 256; repetition time of 2,300 ms; inversion time of 900 ms; and flip angle of 9°), although echo time differed slightly (Trio: 2.98 ms; Prisma: 2.9 ms). MRI preprocessing Before preprocessing, all scans were visually inspected for quality control. Images with excessive motion or image artifact were excluded. T1-weighted images underwent bias field correction using an N3 algorithm, and segmentation was performed using the statistical parametric mapping (SPM12, Wellcome Trust Center for Neuroimaging) unified segmentation procedure 70 . The total intracranial volume 71 was derived from SPM12 to be used in statistical analyses. A group template was generated from the segmented gray- and white-matter tissues and CSF by nonlinear template generation using the large deformation diffeomorphic metric mapping framework 72 . Native-space gray matter maps were normalized, modulated and smoothed in group template space with a 10-mm full-width-at-half-maximum Gaussian kernel. Every step of the transformation from the native space to the group template was carefully inspected. FTP-PET acquisition FTP-PET was acquired on a Siemens Biograph PET/CT scanner at the Lawrence Berkeley National Laboratory (LBNL) for 75 participants (65 AD/MCI and 10 FTLD) at an average of 70 d (± 122) from the plasma sample. FTP was synthesized and radiolabeled at LBNL’s Biomedical Isotope Facility. We analyzed PET data that were acquired 80–100 min after the injection of ~10 mCi of FTP (four 5-min frames). A low-dose computed tomography scan was performed for attenuation correction before PET acquisition, and data were reconstructed using an ordered subset expectation maximization algorithm with weighted attenuation and scatter correction, then smoothed with a 4-mm Gaussian kernel (image resolution: 6.5 × 6.5 × 7.25 mm based on Hoffman phantom). FTP-PET preprocessing PET frames were realigned, averaged and co-registered onto their corresponding T1-MRI. SUVR images were created using the inferior cerebellar gray matter as a reference region (the region was defined on the T1-MRI, which was segmented using FreeSurfer 5.3 and SPM12) 33 . Native-space FTP-SUVR images were warped to template space using the deformation parameters derived from the MRI procedure. Warped SUVR images were masked to limit contamination from nonrelevant areas (such as off-target binding from meninges, eyes or skull) and smoothed with a 4-mm isotropic Gaussian kernel to be used for voxelwise analyses 48 . FTP-PET analyses Using FreeSurfer segmentation, the average cortical SUVR value was extracted for each patient in native space to obtain a measure of global tau burden 48 .
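The SUVR image computation described above amounts to dividing the mean PET image by the mean uptake in the reference region. A minimal sketch using nibabel is shown below; the file names are hypothetical and this is not the authors' pipeline.

```python
# SUVR image: voxelwise PET uptake normalized by mean uptake in a reference
# region (here, an inferior cerebellar gray matter mask). Paths are hypothetical.
import nibabel as nib

pet_img = nib.load("ftp_mean_80_100min.nii.gz")        # realigned, averaged frames
ref_img = nib.load("inferior_cerebellum_mask.nii.gz")  # binary reference mask

pet = pet_img.get_fdata()
ref_mask = ref_img.get_fdata() > 0.5

ref_mean = pet[ref_mask].mean()
suvr = pet / ref_mean
nib.save(nib.Nifti1Image(suvr, pet_img.affine, pet_img.header), "ftp_suvr.nii.gz")
print(f"reference-region mean uptake: {ref_mean:.2f}")
```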
Patients were categorized as tau-positive or tau-negative on the basis of a previously published cortical FTP-SUVR threshold of 1.22 (see Table 3 from Maass et al. 33 ). Complementary analyses were conducted using inferior temporal lobe SUVR values to classify patients (using a 1.30 threshold, see Table 3 from Maass et al. 33 ), but the results were unchanged. Patients were assigned to a Braak stage (0, I–II, III–IV or V–VI) using the approach developed by Maass et al. 33 . For each patient, we extracted the average SUVR from three bilateral composite regions of interest (ROIs) in native space based on FreeSurfer 5.3’s aparc + aseg segmentation file, as follows:
Braak I–II ROI: entorhinal cortex and hippocampus.
Braak III–IV ROI: parahippocampal, fusiform, lingual, amygdala, middle temporal, caudal anterior cingulate, rostral anterior cingulate, posterior cingulate, isthmus cingulate, insula, inferior temporal and temporal pole.
Braak V–VI ROI: superior frontal, lateral orbitofrontal, medial orbitofrontal, frontal pole, caudal middle frontal, rostral middle frontal, pars opercularis, pars orbitalis, pars triangularis, lateral occipital, supramarginal, inferior parietal, superior temporal, superior parietal, precuneus, banks of the superior temporal sulcus, transverse temporal, pericalcarine, postcentral, cuneus, precentral and paracentral.
The Braak stage classification scheme (including thresholds) was determined by Maass et al. 33 and works as follows (transcribed in the code sketch at the end of this section):
Step 1. If the average SUVR in the Braak V–VI ROI is > 1.25, the participant is assigned to Braak stage V–VI; if not:
Step 2. If the average SUVR in the Braak III–IV ROI is > 1.28, the participant is assigned to Braak stage III–IV; if not:
Step 3. If the average SUVR in the Braak I–II ROI is > 1.35, the participant is assigned to Braak stage I–II; if not, the participant is assigned to Braak stage 0.
FTP-PET imaging in secondary cohort (Eli Lilly) The tau PET acquisitions were performed from 75 to 105 min (6 × 5-min frames) after injection of approximately 240 MBq of FTP. Frames were aligned and averaged with an acquisition time-offset correction. An average 75–105-min image was spatially registered to the corresponding individual’s MRI space and then to the MRI template in Montreal Neurological Institute stereotaxic space. The reference signal was derived parametrically in a white-matter-based region to isolate nonspecific signal, using the parametric estimate of reference signal intensity method 73 . The weighted SUVR used was derived by multiblock barycentric discriminant analysis, which has been shown to maximize the separation of diagnostic groups and amyloid status 74 . Aβ-PET Aβ status was available for 166 participants (41 HC, 77 AD/MCI and 48 FTLD) and was derived from PET acquired with 11 C-PiB (injected dose, ~15 mCi; n = 124 participants) or 18 F-florbetapir (injected dose, ~10 mCi; n = 42) at an average of 273 d (± 433) from the plasma sample. Aβ-PET data were acquired at LBNL on a Siemens ECAT EXACT HR PET scanner ( n = 32) or a Siemens Biograph PET-CT scanner ( n = 104) or at UCSF China Basin on a GE Discovery STE/VCT PET-CT scanner ( n = 32). We created distribution volume ratio images (for PiB, when patients underwent a 90-min acquisition) or 50–70-min SUVR images (for florbetapir, or for PiB when patients only underwent a 20-min PET acquisition) as previously described 3 , 75 , using tracer-specific reference regions: cerebellar gray matter for PiB and whole cerebellum for florbetapir.
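For concreteness, the three-step FTP-PET Braak staging rule described above transcribes directly into code. This sketch implements the published thresholds; the example SUVR values are hypothetical.

```python
# Three-step FTP-PET Braak staging rule described above (thresholds 1.25,
# 1.28 and 1.35 from Maass et al.). Inputs are mean SUVRs from the three
# composite ROIs; the example values are hypothetical.
def ftp_braak_stage(suvr_i_ii: float, suvr_iii_iv: float, suvr_v_vi: float) -> str:
    """Assign an FTP-PET-estimated Braak stage from composite ROI SUVRs."""
    if suvr_v_vi > 1.25:      # Step 1: late-stage (widespread cortical) ROI
        return "V-VI"
    if suvr_iii_iv > 1.28:    # Step 2: intermediate ROI
        return "III-IV"
    if suvr_i_ii > 1.35:      # Step 3: entorhinal/hippocampal ROI
        return "I-II"
    return "0"

print(ftp_braak_stage(suvr_i_ii=1.40, suvr_iii_iv=1.20, suvr_v_vi=1.10))  # -> "I-II"
```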
Aβ-PET positivity was determined on the basis of visual read, as previously validated against neuropathological standards 39 , 40 . Voxelwise analyses and result rendering Voxelwise analyses were run in SPM12 to test the association between plasma markers and gray matter volume or FTP SUVR in the primary cohort (UCSF + ARTFL). Separate models were used for each pair of variables (pTau181–volume, NfL–volume, pTau181–FTP and NfL–FTP), and models were run on (1) all participants with available data; (2) patients with a clinical diagnosis of MCI/AD only; and (3) patients with a clinical diagnosis of FTLD only. The specific sample size for each analysis is indicated in the Results. Age was entered as a covariate in all models, and total intracranial volume was entered in MRI models to control for inter-individual variability in head size. Resulting T-maps were thresholded (based on uncorrected P < 0.001 at the voxel level with family-wise error-corrected P < 0.05 at the cluster level) and converted to R-maps using the CAT12 toolbox. Maps were rendered on a three-dimensional brain surface using BrainNet Viewer 76 , with default interpolation and perceptually uniform color scales (magma for MRI and viridis for tau PET). An overview of the methods is provided in the Nature Research Reporting Summary linked to this article. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All requests for raw and analyzed data and materials will be promptly reviewed by the corresponding author and the University of California, San Francisco to verify whether the request is subject to any intellectual property or confidentiality obligations. Some participant data not included in the paper were generated as part of clinical trials and may be subject to patient confidentiality limitations. Data and materials from participants with FTLD enrolled in ARTFL are accessible via forms that can be found on the ARTFL website. Other data and materials that can be shared will be released via a material transfer agreement. Code availability All requests for code used for data analyses and data visualization will be promptly reviewed by the corresponding author and UCSF to verify whether the request is subject to any intellectual property, confidentiality or other licensing obligations. If there are no limitations, the corresponding author will communicate with the requester to share the code.
A blood test that may eventually be done in a doctor's office can swiftly reveal if a patient with memory issues has Alzheimer's disease or mild cognitive impairment, and can also distinguish both conditions from frontotemporal dementia. If approved, the blood test could lead to a jump in the number of Alzheimer's patients enrolling in clinical trials and be used to monitor response to those investigational treatments. In a study led by UC San Francisco, researchers measured blood levels of phosphorylated tau 181 (pTau181), a brain protein that aggregates in tangles in patients with Alzheimer's. They found that pTau181 was 3.5 times higher in people with the disease compared to their healthy peers. In contrast, in patients with frontotemporal dementia, a condition that is often misdiagnosed as Alzheimer's, pTau181 was found to be within the same range as in the control group. The study was published in Nature Medicine on March 2, 2020. "This test could eventually be deployed in a primary care setting for people with memory concerns to identify who should be referred to specialized centers to participate in clinical trials or to be treated with new Alzheimer's therapies, once they are approved," said senior author Adam Boxer, MD, Ph.D., of the UCSF Memory and Aging Center. "Being able to easily diagnose Alzheimer's disease at early stages may be especially beneficial to patients with mild cognitive impairment, some of whom may have early Alzheimer's disease. Individuals with early Alzheimer's are more likely to respond to many of the new treatments that are being developed."
Current Alzheimer's Testing Expensive, Invasive
Existing methods for diagnosing Alzheimer's include measurement of the deposits of amyloid, another protein implicated in dementia, from a PET scan; or using lumbar puncture to quantify amyloid and tau in cerebrospinal fluid. PET scans are expensive, only available in specialized centers and currently not covered by insurance, and lumbar punctures are invasive, labor intensive and not easy to perform in large populations, the authors noted. There are 132 drugs in clinical trials for Alzheimer's, according to a 2019 study, including 28 that are being tested in 42 phase-3 trials, the final part of a study before approval is sought from the federal Food and Drug Administration. Among those phase-3 drugs is aducanumab, which some experts believe may be the first drug approved to slow the progression of Alzheimer's. In the study, participants underwent testing to measure pTau181 from plasma, the liquid part of blood. They ranged in age from 58 to 70 and included 56 who had been diagnosed with Alzheimer's, 47 with mild cognitive impairment and 69 of their healthy peers. Additionally, participants included 190 people with different types of frontotemporal dementia, a group of brain disorders caused by degeneration of the frontal and temporal lobes, areas of the brain associated with decision-making, behavioral control, emotion and language. Among adults under 65, frontotemporal dementia is as common as Alzheimer's.
Blood Test Measures Up to Established Tool
The researchers found that blood measures of pTau181 were 2.4 pg/ml among healthy controls, 3.7 pg/ml among those with mild cognitive impairment and 8.4 pg/ml for those with Alzheimer's. In people with variants of frontotemporal dementia, levels ranged from 1.9 to 2.8 pg/ml. These results gave similar information to the more established diagnostic tools of PET scan measures of amyloid or tau protein, Boxer said.
The study follows research by other investigators published last year that found high levels of plasma amyloid were a predictor of Alzheimer's. However, amyloid accumulates in the brain many years before symptoms emerge, if they emerge, said Boxer, who is affiliated with the UCSF Weill Institute for Neurosciences. "In contrast, the amount of tau that accumulates in the brain is very strongly linked to the onset, the severity and characteristic symptoms of the disease," he said. A companion study by Oskar Hansson, MD, Ph.D., of Lund University, Sweden, published in the same issue of Nature Medicine corroborated the results of the UCSF-led study. It concluded that pTau181 was a stronger predictor of developing Alzheimer's in healthy elders than amyloid. The researchers said they hope to see the blood test available in doctor's offices within five years.
10.1038/s41591-020-0762-2
Medicine
Researchers discover clues to brain changes in depression
Tara A. LeGates et al. Reward behaviour is regulated by the strength of hippocampus–nucleus accumbens synapses, Nature (2018). DOI: 10.1038/s41586-018-0740-8 Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0740-8
https://medicalxpress.com/news/2018-11-clues-brain-depression.html
Abstract Reward drives motivated behaviours and is essential for survival, and therefore there is strong evolutionary pressure to retain contextual information about rewarding stimuli. This drive may be abnormally strong, such as in addiction, or weak, such as in depression, in which anhedonia (loss of pleasure in response to rewarding stimuli) is a prominent symptom. Hippocampal input to the shell of the nucleus accumbens (NAc) is important for driving NAc activity 1 , 2 and activity-dependent modulation of the strength of this input may contribute to the proper regulation of goal-directed behaviours. However, there have been few robust descriptions of the mechanisms that underlie the induction or expression of long-term potentiation (LTP) at these synapses, and there is, to our knowledge, no evidence about whether such plasticity contributes to reward-related behaviour. Here we show that high-frequency activity induces LTP at hippocampus–NAc synapses in mice via canonical, but dopamine-independent, mechanisms. The induction of LTP at this synapse in vivo drives conditioned place preference, and activity at this synapse is required for conditioned place preference in response to a natural reward. Conversely, chronic stress, which induces anhedonia, decreases the strength of this synapse and impairs LTP, whereas antidepressant treatment is accompanied by a reversal of these stress-induced changes. We conclude that hippocampus–NAc synapses show activity-dependent plasticity and suggest that their strength may be critical for contextual reward behaviour. Main Hippocampal activity is altered by changes in the contextual features of rewarding stimuli 3 , 4 , and a population of reward-associated cells has been identified in this region 5 . Activity-dependent enhancement of spike firing has been observed at hippocampus–NAc synapses 5 , 6 , and cocaine strengthens the connectivity between these two nuclei 7 , leading us to hypothesize that plasticity of these excitatory synapses is associated with reward. We first examined whether hippocampus–NAc inputs display activity-dependent synaptic potentiation. Using whole-cell voltage-clamp, we recorded excitatory postsynaptic currents (EPSCs) from medium spiny neurons (MSNs) in the ventromedial NAc shell in brain slices from mice expressing tdTomato in dopamine type 1 receptor (D1R)-expressing MSNs to differentiate between D1R- and presumptive D2R-expressing cells (D1R-MSNs and D2R-MSNs, respectively) 8 . Glutamatergic EPSCs, mediated by both α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)- and N -methyl- d -aspartate (NMDA)-type receptors, were evoked by electrical stimulation of axons of hippocampal cells that projected to the NAc via the fornix. In response to high-frequency stimulation (HFS) (Fig. 1a–c ), robust potentiation was elicited similarly in both D1R- and D2R-MSNs. LTP was accompanied by a change in the coefficient of variation and a reduction in transmission failures (Extended Data Fig. 1 ), but no change in paired-pulse ratio (Fig. 1a ), suggesting that postsynaptic expression mechanisms underlie this potentiation. Fig. 1: Mechanisms that underlie activity-dependent LTP at hippocampus–NAc synapses. a , LTP of hippocampus–NAc eEPSCs is similar in D1R- and D2R-MSNs and does not alter paired pulse ratios. b , Summary data from the last 5 min of recording. # D1: t = 2.624, P = 0.0394, n = 7 cells from 7 mice; D2: t = 3.586, P = 0.0059, n = 10 cells from 10 mice. c , Representative traces of EPSCs before and after HFS. 
Grey shading represents individual traces. Black represents the average. d , pHFS induces LTP of light-evoked EPSCs. e , Summary data from the last 5 min of recording. # t = 3.337, P = 0.0157, n = 7 cells from 7 mice. f , Representative traces of pEPSCs before and after HFS. Grey shading represents individual traces. Blue represents the average. g , pHFS potentiates both electrically and optogenetically evoked EPSCs. h , Summary data from the last 5 min of recording. # Two-tailed paired Wilcoxon test to compare baseline to response at 30 min: W = 21, P = 0.0313, n = 3 cells from 3 mice. i , Representative traces from electrically and optogenetically evoked EPSCs before and after HFS. j , Pre-incubation with AP5 or KN62, or chelation of intracellular Ca 2+ with BAPTA, prevents LTP induction by HFS. k , Summary data from the last 5 min of recording. AP5/control AP5 : * U = 1, P = 0.0317, n = 3, 5 mice; # control AP5 : t = 2.865, P = 0.0457, n = 5 cells; AP5: t = 1.729, P = 0.1589, n = 5 cells; BAPTA/control BAPTA : * U = 5, P = 0.0221, n = 7, 6 mice; # control BAPTA : t = 3.149, P = 0.0199, n = 7 cells; BAPTA: t = 1.172, P = 0.2942, n = 6 cells; KN62/control KN62 : ** U = 6, P = 0.0089, n = 7, 8 mice; # control KN62 : t = 2.526, P = 0.0449, n = 7 cells; KN62: t = 0.4919, P = 0.6378, n = 8 cells. l , Representative traces of EPSCs from control cells and cells treated with AP5, BAPTA or KN62. m , Pre-incubation with SCH23390 or Rp-cAMPs does not affect LTP induction in D1R-MSNs. n , Summary data from the last 5 min of recording. SCH/control SCH : U = 22, P = 0.6070, n = 6, 9 mice; # control SCH : t = 5.658, P = 0.0013, n = 7 cells; SCH: t = 2.914, P = 0.0195, n = 9 cells; Rp/control Rp : U = 13, P = 0.7922, n = 5, 6 mice; # control Rp : t = 2.611, P = 0.476, n = 6 cells; Rp: t = 2.337, P = 0.0476, n = 9 cells. o , Representative traces of EPSCs from control, SCH23390, and Rp-cAMPs-treated D1R-MSNs. *Differences between treatment and control by two-tailed Mann–Whitney U -test. # Significant increase in EPSC amplitude above baseline revealed by two-tailed paired t -test. LTP kinetics are plotted in 1-min bins. Centre values represent mean, error bars represent s.e.m. For box plots, the middle line is plotted at the median. The box shows the 25th–75th percentiles. Whiskers represent minimum and maximum. Scale bars, 10 pA/10 ms. Full size image To verify that fornix-evoked EPSCs were produced by hippocampal input to the NAc, we recorded photostimulation-evoked EPSCs (pEPSCs) in slices from mice expressing channelrhodopsin (ChR) in ventral hippocampus (vHipp) pyramidal neurons (Extended Data Fig. 2 ). Light pulses delivered in the NAc evoked pEPSCs comparable to EPSCs elicited by electrical stimulation of the fornix. High-frequency photostimulation (pHFS) elicited LTP of a similar magnitude and time course to that elicited by electrical HFS in the fornix (Fig. 1d–f ). Furthermore, pHFS potentiated simultaneously recorded EPSCs and pEPSCs evoked with alternating stimuli (Fig. 1g–i ). We next used pharmacological manipulation to dissect the mechanisms that underlie hippocampus–NAc LTP. HFS did not induce LTP in MSNs in the presence of the NMDAR antagonist 2-amino-5-phosphonovaleric acid (AP5), whereas LTP was induced normally in slices in which AP5 was washed out before HFS (Fig. 1j–l ). Loading the calcium chelator BAPTA into the cell also blocked induction of LTP, indicating that it is a Ca 2+ -dependent process (Fig. 1j–l ). 
In accordance with this, LTP induction was blocked by pretreatment of slices with a Ca 2+ /calmodulin-dependent kinase type II (CaMKII) inhibitor, KN62 (Fig. 1j–l ). These properties were observed in both D1R- and D2R-MSNs. Therefore, induction of LTP at hippocampus–NAc synapses requires NMDAR activation, elevation of intracellular [Ca 2+ ], and CaMK activation, much like canonical Schaffer collateral–CA1 cell LTP 9 . An essential mechanism for postsynaptic LTP expression is the insertion of AMPARs. In the ventral tegmental area, Ca 2+ -permeable AMPARs that lack GluA2 subunits are preferentially inserted during the expression of cocaine-induced plasticity 10 . We investigated whether LTP induction altered subunit composition at hippocampus–NAc synapses. Prior to HFS, hippocampus–NAc EPSCs displayed a linear relationship between current and holding potential, with no change in EPSC amplitude upon application of the selective inhibitor of GluA2-lacking AMPARs, 1-naphthylacetyl spermine (NASPM) (Extended Data Fig. 3 ), consistent with the presence of mostly Ca 2+ -impermeable, GluA2-containing AMPARs at the synapse 11 . Following the induction of LTP, current–voltage relationships remained linear, and EPSCs remained insensitive to NASPM, suggesting that expression of LTP at hippocampus–NAc synapses does not involve insertion of GluA2-lacking AMPARs (Extended Data Fig. 3 ). Dopamine is a critical neuromodulator in the NAc, and there is evidence that dopamine signalling is required for LTP induction in the NAc 11 , 12 , 13 , 14 , 15 . We recapitulated these findings using local stimulation to activate unidentified excitatory synapses within the NAc and found that LTP was blocked in the presence of the D1R antagonist SCH23390 (Extended Data Fig. 4 ). To examine the requirement of dopamine signalling for LTP specifically at hippocampus–NAc synapses, we recorded from D1R- and D2R-MSNs in the presence of their respective receptor antagonists, SCH23390 and sulpiride. Robust LTP was elicited, suggesting that dopamine signalling is not required for the induction of LTP at this synapse in either cell type (Fig. 1m–o ; Extended Data Fig. 5 ). We also examined signalling downstream of dopamine receptors by blocking PKA with Rp-cAMPs, which had no effect on the development of LTP in D1R-MSNs (Fig. 1m–o ). Together, these data show that LTP at hippocampus–NAc synapses involves canonical NMDA receptor-dependent mechanisms but does not require dopamine signalling. To identify a functional role for potentiation at hippocampus–NAc synapses in vivo, we tested whether LTP modulated reward, measured by conditioned place preference (CPP). ChR2 or enhanced yellow fluorescent protein (eYFP) were expressed in vHipp. Because collaterals of NAc-projecting hippocampal cells were observed in the prefrontal cortex and amygdala (Extended Data Fig. 6 ), we implanted fibres into the NAc bilaterally to stimulate hippocampus–NAc synapses selectively. Conditioning of ChR-expressing mice with pHFS resulted in a preference for the light-conditioned chamber, without altering locomotor activity (Fig. 2a, b ; Extended Data Fig. 7 ). eYFP-expressing mice showed no preference for either chamber (Fig. 2a, b ). Stimulation at 4 Hz, which does not induce LTP in slices (Extended Data Fig. 8 ), did not induce CPP (Fig. 2c, d ), suggesting that CPP was specifically dependent upon LTP induction. Fig. 2: In vivo HFS influences reward-related behaviour and NAc activity. a , Representative behavioural trace after 100-Hz conditioning.
b , Conditioning with 100 Hz induces CPP in ChR-expressing mice. *Two-way repeated-measures ANOVA with Sidak’s post hoc test; F 1,33 = 5.155, P = 0.0298, n = 21, 14 mice. c , Representative behavioural trace after 4-Hz conditioning. d , Conditioning with 4 Hz light stimulation is not sufficient to induce CPP. Two-way repeated-measures ANOVA: F 1,14 = 0.08221, P = 0.7785, n = 11, 5 mice. e , pHFS induces LTP of vHipp–NAc synapses in vivo. Data are plotted in 1-min bins. Centre values represent mean, error bars represent s.e.m. f , Summary data from the last 5 min of recording. Kruskal–Wallis test with Dunn’s multiple comparison post hoc test: H = 34.58, P < 0.0001, n = 40, 24, 25 units from 4 mice. g , Representative traces of light-evoked LFPs. Scale bars, 0.01 mV/10 ms. h , Representative behavioural traces after social interaction conditioning. M: location of the mouse during conditioning. i , vHipp–NAc silencing during conditioning blocks social interaction-induced CPP. Two-way repeated-measures ANOVA with Sidak’s post hoc test: F 1,20 = 4.529, P = 0.0459, n = 12, 10 mice. j , vHipp–NAc silencing does not disrupt social interaction. Two-tailed Mann–Whitney U -test: U = 64, P = 0.5671, n = 15, 10 mice. # One-sample Wilcoxon test shows significant interaction ratios for both groups. NpHR: W = 114, P = 0.0003, n = 15 mice; YFP: W = 49, P = 0.0098, n = 10 mice. For box plots, the middle line is the median. The box represents the 25th–75th percentiles. Whiskers represent minimum and maximum. **** P < 0.0001, ** P < 0.01, * P < 0.05. Full size image To demonstrate LTP in vivo, we recorded light-evoked local field potentials (LFPs) in the NAc shell in mice expressing ChR2 in vHipp. We found that HFS induced LTP of light-evoked LFPs (Fig. 2e–g ), similar to our whole-cell results. By contrast, LTP was not observed in response to 4 Hz stimulation, or under conditions in which no stimulation paradigm was used (Fig. 2e–g ). We also examined optogenetically induced c-Fos expression as a marker of neuronal activation. HFS, but not stimulation at 4 Hz, produced a robust increase in the number of c-Fos + cells within the NAc shell, but not the core (Extended Data Fig. 9 ), corresponding to the observed LFP potentiation. We conclude that HFS induces LTP at hippocampus–NAc synapses in vivo, and this presumably underlies the formation of CPP. We then tested the contribution of this synapse to responses to natural rewards. Mice expressing the light-activated chloride pump halorhodopsin (NpHR) or YFP in the vHipp were tested for CPP in response to social interaction. Light was delivered to the NAc during conditioning to silence activity selectively at hippocampal inputs. YFP-expressing mice displayed a preference for the chamber in which they had previously encountered the target animal, whereas NpHR-expressing mice did not, suggesting that activity of hippocampus–NAc synapses during conditioning is critical for CPP (Fig. 2h, i ). By contrast, silencing of hippocampus–NAc synapses did not interfere with the rewarding quality of the social interaction itself. Both NpHR- and YFP-expressing mice showed normal social interaction during light-induced synaptic silencing (Fig. 2j ). This suggests that activity at this synapse is not necessary for generalized reward processing, but is necessary for encoding reward associated with spatial context. Maintaining excitatory drive in the NAc is crucial for normal hedonic state 16 , 17 . 
Synaptic weakening in the NAc contributes to stress-induced anhedonia 16 , although the source of the input was not identified. We predicted that chronic stress would decrease strength at hippocampus–NAc synapses. We used chronic multimodal stress (CMS) to induce anhedonic-like behaviour, assayed by loss of sucrose preference (Fig. 3a ). D1R-MSNs recorded in brain slices taken from mice with a loss of sucrose preference displayed a decrease in synaptic strength, as measured by a decrease in the ratio of AMPAR- to NMDAR-dependent components of the EPSC (AMPA:NMDA ratio; Fig. 3b, c ). This is consistent with previous descriptions of stress-induced AMPAR internalization 16 . Furthermore, induction of LTP was profoundly impaired in D1R-MSNs (Fig. 3d, e ). By contrast, AMPA:NMDA ratios and LTP were unaltered by chronic stress in D2R-MSNs (Fig. 3b–g ). EPSCs in D2R-MSNs instead displayed inward rectification at positive membrane potentials and sensitivity to NASPM (Extended Data Fig. 10 ), unlike EPSCs in D2R-MSNs in unstressed control mice, suggesting a stress-induced increase in the contribution of Ca 2+ -permeable, GluA2-lacking synaptic AMPARs. These data demonstrate that chronic stress selectively weakens the strength and impairs plasticity of hippocampal input to D1R-MSNs. Because activity of D1R-MSNs is associated with positive reward 18 , 19 , 20 , 21 , these results suggest that the chronic weakening of excitatory drive of the NAc is a contributing factor in stress-induced anhedonia. Fig. 3: Chronic multimodal stress weakens excitatory hippocampal input onto D1R-MSNs. a , Chronic stress induces loss of sucrose preference. Two-tailed paired t -test: t = 5.056, P = 0.0039, n = 6 mice. Dotted line represents criterion for anhedonia. b , Representative traces of EPSCs at –70 mV and +40 mV from control mice and mice exposed to chronic stress. c , Chronic stress decreases AMPA:NMDA ratio. Two-tailed t -test: t = 2.422, P = 0.0322, n = 6, 8 mice. d , D1R-MSNs from mice exposed to chronic stress show a deficit in LTP induction. e , Representative traces of EPSCs from mice exposed to chronic stress and control mice. f , Chronic stress has no effect on LTP in D2R-MSNs. g , Summary data from the last 5 min of recording. *Two-tailed Mann–Whitney U -test: U = 3, P = 0.0109, n = 8, 5 mice; # two-tailed paired t -test for baseline EPSC amplitude versus 30 min post-HFS: D1: t = 3.787, P = 0.0068, n = 8 cells; D1 stress : t = 1.222, P = 0.2564, n = 9 cells; D2: t = 3.854, P = 0.012, n = 6 cells; D2 stress : t = 3.164, P = 0.0341, n = 5 cells. h , Chronic stress abolishes pHFS-induced CPP. Repeated-measures ANOVA with Tukey’s post hoc test: F 2.109,10.55 = 5.551, P = 0.0215, n = 6 mice. LTP kinetics are plotted in 1-min bins. Centre values represent mean, error bars represent s.e.m. For box plots, the middle line is the median. The box represents the 25th–75th percentiles. Whiskers represent minimum and maximum. Scale bar, 10 pA/10 ms. Full size image As potentiation of the hippocampus–NAc synapse elicits CPP, whereas weakening was associated with anhedonia, we sought to determine the functional consequence of these stress-induced synaptic plasticity deficits. We observed that the ability of pHFS to induce CPP was abolished after exposure to chronic stress, in contrast to results before stress, in which pHFS induced CPP (Fig. 3h ). This suggests that chronic stress interferes with CPP by weakening and impairing LTP at hippocampal synapses onto D1R-MSNs. 
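The AMPA:NMDA ratio used throughout these experiments reduces to two amplitude measurements on the evoked EPSC traces (the exact measurement windows are given in the Methods below). A minimal sketch of that computation follows; the array names and sampling rate are illustrative assumptions, not the paper's analysis code:

```python
import numpy as np

def ampa_nmda_ratio(epsc_minus70, epsc_plus40, fs=10_000, stim_sample=0):
    """AMPA:NMDA ratio from two evoked EPSC traces (in pA), per the Methods.

    AMPA component: peak amplitude of the EPSC recorded at -70 mV.
    NMDA component: amplitude of the EPSC recorded at +40 mV, read out
    50 ms after stimulation, once the fast AMPAR current has decayed.
    """
    ampa = np.max(np.abs(epsc_minus70))            # peak inward current at -70 mV
    nmda_sample = stim_sample + int(0.050 * fs)    # 50 ms after the stimulus
    nmda = abs(epsc_plus40[nmda_sample])           # residual outward current at +40 mV
    return ampa / nmda
```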
If dysfunction of hippocampus–NAc synapses contributes to stress-induced changes in reward behaviour, then antidepressant treatment should restore normal reward behaviour and reverse these synaptic changes. We treated mice that displayed loss of sucrose preference after CMS with the selective serotonin reuptake inhibitor fluoxetine. Chronic fluoxetine treatment reversed loss of sucrose preference (Fig. 4a ) and reversed the CPP deficit induced by chronic stress (Fig. 4b ). AMPA:NMDA ratios and LTP in D1R-MSNs from stressed mice treated with chronic fluoxetine were similar to those observed in unstressed controls (Fig. 4c–g ). Similarly, stress-induced changes in AMPAR subunit composition observed in D2R-MSNs were restored after chronic fluoxetine treatment (Extended Data Fig. 10 ). Acute treatment (24–48 h) with fluoxetine, which was not sufficient to restore normal sucrose preference, failed to reverse chronic stress-induced synaptic changes in D1R-MSNs (Fig. 4a–g ). Together, these data suggest that restoration of excitatory synaptic strength and plasticity at the hippocampus–NAc synapse coincides with the reinstatement of normal reward behaviour. Fig. 4: Antidepressant treatment rescues synaptic weakening induced by chronic stress. a , Chronic fluoxetine restores normal sucrose preference. One-way ANOVA with Holm–Sidak's post hoc test: F = 36.38, P < 0.0001, n = 12, 12, 4, 6 mice. Dotted line represents anhedonia criterion. b , Chronic fluoxetine restores CPP. Two-way repeated-measures ANOVA with Sidak's post hoc test: F 2,15 = 7.293, P = 0.0061, n = 6 mice. c , Chronic fluoxetine reverses the stress-induced decrease in AMPA:NMDA ratio in D1R-MSNs. ANOVA with Holm–Sidak's post hoc test: D1: F = 7.309, P = 0.0019, n = 6, 4, 5, 8 mice. d , Representative traces of EPSCs at –70 mV and +40 mV. e , Chronic fluoxetine reverses the LTP deficit induced by chronic stress. f , Summary data from the last 5 min of recording. *Kruskal–Wallis test with Dunn's post hoc test: H = 18.46, P = 0.0004, n = 5, 8, 6, 7 mice; # two-tailed paired t -test for baseline EPSC amplitude versus 30 min post-HFS: D1: t = 4.540, P = 0.0027, n = 8 cells; D1 stress : t = 0.2615, P = 0.8012, n = 8 cells; D1 acute : t = 4.109, P = 0.0093, n = 6 cells; D1 chronic : t = 2.816, P = 0.0305, n = 7 cells. g , Representative traces of EPSCs before (grey) and after HFS (colour). # Significant increase in EPSC amplitude above baseline revealed by paired t -test. LTP kinetics are plotted in 1-min bins. Centre values represent mean, error bars represent s.e.m. For box plots, the middle line is the median. The box represents the 25th–75th percentiles. Whiskers represent minimum and maximum. *** P < 0.001, ** P < 0.01, * P < 0.05. Scale bars, 10 pA/10 ms. Full size image Reward drives goal-directed behaviours, and various aspects of this process, such as motivation, anticipation, and contextual information, are encoded in different brain regions. We found that synapses formed by hippocampal inputs onto the NAc are highly plastic. Brief correlated high-frequency activity was sufficient to induce both LTP and persistent contextual reward behaviour. Indeed, activity-dependent synaptic plasticity of these synapses is required for the formation of reward-related memories, as shown by the ability of acute silencing of the synapse to disrupt the formation of contextual reward-related memories, but not primary reward processing. Recent work has also shown strengthening of hippocampus–NAc coupling in conjunction with cocaine-induced CPP 7 .
The correlation between excitatory strength at this synapse and reward was reinforced by our observation that chronic stress induced deficits in reward-related behaviour, namely anhedonia, and weakened excitatory synaptic strength and impaired plasticity. Conversely, restoration of strength and plasticity at this synapse in response to antidepressant treatment was accompanied by restoration of normal hedonic state. The plasticity of these synapses represents a novel mechanism in the biology of reward. Targeting reward circuits for further study will expand our understanding of the pathophysiology that underlies depression and mechanisms of antidepressant response. Methods Mice Male DRD1A–tdTomato hemizygous mice were generated by mating a DRD1A–tdTomato hemizygous mouse to a C57BL/6 mouse and were used to differentiate between D1R- and D2R-expressing MSNs. D1R-MSNs were identified by expression of tdTomato whereas unlabelled cells were presumed to be D2R-MSNs. All mice were used between 2 and 4 months of age. Mice were group housed in a 12 h–12 h light–dark cycle with food and water ad libitum. All experiments were performed in accordance with the regulations set forth by the University of Maryland Institutional Animal Care and Use Committee. No statistical methods were used to predetermine sample size. Mice were randomly assigned to experimental and control groups, and experimenters were blinded during data collection and analyses. Chronic multimodal stress Sucrose preference was assessed before starting CMS. Only mice that showed a sucrose preference (>70%) were used. Mice were confined to a restraint tube (IBI Scientific, Peosta, IA) in the presence of white noise and a strobe light for 4 h per day, after which they were returned to their home cage and were individually housed. Mice were stressed daily for 10–14 days, and the procedure began no later than zeitgeber time (ZT)4 each day. Loss of sucrose preference (<65%) was used to assess stress susceptibility and defined a depression-like anhedonic state. Sucrose preference test Mice were trained by introducing two bottles containing 2% sucrose to their home cage at least one full day before their initial testing. To assess sucrose preference, one bottle containing 1% sucrose in water and one bottle containing plain water were introduced at the beginning of the active (dark) phase. The bottles were removed at the end of the active phase and weighed to measure amount consumed. Sucrose preference was calculated by dividing the volume of sucrose solution consumed by the total volume consumed (water and sucrose) and expressed as a percentage. Antidepressant treatment Mice with a sucrose preference (>70%) were subjected to CMS as described above. Upon loss of sucrose preference (<65%), mice were treated with fluoxetine (18 mg/kg/day) in their drinking water acutely (3 days) or chronically (3 weeks). Sucrose preference was tested following fluoxetine treatment. Electrophysiology Standard methods were used to prepare 400 μm parasagittal sections that contained both the NAc shell and the fornix, the source of NAc-projecting hippocampal efferents. Dissection and recording were performed in cold artificial cerebrospinal fluid (ACSF) containing (in mM) 120 NaCl, 3 KCl, 1.0 NaH 2 PO 4 , 1.5 MgSO 4 ·7H 2 O, 2.5 CaCl 2 , 25 NaHCO 3 , and 20 glucose and bubbled with carbogen (95% O 2 /5% CO 2 ). Slices recovered for one hour and were then transferred to a submersion-type recording chamber and superfused at 20–22 °C (flow rate 0.5–1 ml/min). 
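The sucrose-preference metric and cut-offs described in the behavioural methods above reduce to a few lines of arithmetic. A minimal sketch, assuming consumed volumes in ml; the function and variable names are illustrative and not the study's analysis code:

```python
def sucrose_preference(sucrose_ml, water_ml):
    """Sucrose preference: sucrose consumed / total fluid consumed, as a %."""
    return 100.0 * sucrose_ml / (sucrose_ml + water_ml)

# Criteria from the Methods: >70% preference required for study inclusion;
# <65% after chronic multimodal stress defines the anhedonic-like state.
def is_anhedonic(baseline_pref, post_stress_pref):
    return baseline_pref > 70.0 and post_stress_pref < 65.0
```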
Cells were visualized under differential interference contrast using a 60× water immersion objective (Nikon Eclipse E600FN). D1R- and D2R-MSNs were identified by the presence or absence of tdTomato, respectively. Whole-cell currents were recorded in the ventromedial region of the NAc shell under voltage-clamp conditions (–70 mV) using an Axopatch 200B amplifier (Axon Instruments, Molecular Devices) and digitized with a Digidata 1440 analogue-digital converter (Axon Instruments). EPSCs were evoked electrically, by placing a bipolar stimulating electrode (FHC) in the fornix, or optogenetically, by placing a fibre delivering light from a 473 nm diode-pumped solid-state laser (OEM Laser Systems) above the slice over the NAc shell. EPSCs were evoked at a frequency of 0.1 Hz. Patch pipettes were pulled to resistances of 3–8 MΩ. For LTP experiments, patch pipettes were filled with a solution containing 130 mM K-gluconate, 5 mM KCl, 2 mM MgCl 2 ·6H 2 O, 10 mM HEPES, 4 mM Mg-ATP, 0.3 mM Na 2 -GTP, 10 mM Na 2 -phosphocreatine, and 1 mM EGTA. For rectification and AMPA:NMDA ratio experiments, patch pipettes were filled with 135 mM CsCl, 2 mM MgCl 2 ·6H 2 O, 10 mM HEPES, 4 mM Mg-ATP, 0.3 mM Na 2 -GTP, 10 mM Na 2 -phosphocreatine, 1 mM EGTA, 5 mM QX-314, and 100 μM spermine. The extracellular solution consisted of ACSF and 50 μM picrotoxin. For experiments involving pharmacological manipulation of signalling pathways involved in LTP induction (AP5 (Sigma-Aldrich, 50 μM), KN-62 (Tocris, 3 μM), Rp-cAMP (Tocris, 5 μM), SCH23390 (Tocris, 3 μM), sulpiride (Tocris, 10 μM)), drugs were superfused over the slice for at least 15 min, after which baseline EPSCs were recorded and HFS was used to elicit LTP. To examine the requirement of Ca 2+ signalling in LTP induction, BAPTA (Molecular Probes, 10 mM) was included in the patch pipette to chelate intracellular Ca 2+ . To examine subunit composition changes after LTP induction, HFS was used to induce potentiation, and NASPM (Tocris, 200 μM) was applied after stable potentiated responses were recorded for 10 min. Recordings were discarded if access resistance changed by >20%. Summary LTP graphs were generated by averaging the peak amplitudes of individual EPSCs in 1-min bins (six consecutive sweeps) and normalizing these to the mean value of EPSCs collected during the 10-min baseline immediately before the LTP-induction protocol (four bouts of 100 Hz stimulation for 1 s with 15 s between bouts while holding the cell at –40 mV). Individual experiments were then averaged together for graphical representation. The last five minutes of recording were used for statistical comparisons. For AMPA:NMDA ratios, the peak amplitude at –70 mV was used to quantify the AMPA component while the amplitude at +40 mV at 50 ms after stimulation (>3 time constants of the decay of AMPAR-mediated synaptic currents) was used to quantify the NMDA component. The investigator was blind to treatment groups during recording and analysis. Virus and optogenetic fibre placement surgery Mice were anaesthetized with 3% isoflurane and underwent stereotaxic surgery to inject serotype 5 adeno-associated viruses (AAV) encoding CaMKIIa-ChR2(H134R)–eYFP, CaMKIIa-eNpHR3.0–YFP, or CaMKIIa–eYFP (UNC Viral Vector Core, Chapel Hill, North Carolina) and implant optic fibres. Virus was injected bilaterally into the vHipp (from bregma anterior/posterior: –3.7, lateral: +3.0, dorsal/ventral: –4.8 from top of skull) and was infused at a rate of 0.1 μl per minute.
The injection needle was left in place for 10 min following the infusion. Mice recovered for 6–8 weeks to allow infection of the hippocampal projections to occur. For in vivo optogenetic experiments, 4 mm chronically implantable fibres (0.22 numerical aperture, 105 µm core) were placed bilaterally to target the NAc (anterior/posterior: +1.6, lateral: +1.5, dorsal/ventral: –4.4 from top of skull). Conditioned place preference Mice were allowed to recover from surgery for at least two weeks before behaviour experiments. The ability of optogenetic potentiation to induce CPP was evaluated using a three-chamber CPP arena (Maze Engineers), which consisted of two chambers distinguishable by visual cues and a smaller chamber connecting the two rooms. Behaviour was monitored using a camera positioned above the arena, and data were collected using Anymaze software (Stoelting). Mice were allowed to freely explore the entire arena for 30 min. During this habituation phase, mice were connected to a patch cord but no light was transmitted. Mice that showed an inherent preference >65% for either side of the arena were removed from the experiment. On the following day, mice were connected to the patch cord and confined to one compartment during which they were conditioned with ~5 mW 473 nm light administered in four bouts of 100 Hz stimulation for 1 s (2 ms pulse width) with 15 s between bouts using a 473 nm diode-pumped solid-state laser (OEM Laser Systems). The mice remained in the arena for 30 min after stimulation. In a second session on the same day (~4 h later), mice were confined to the other side of the arena while connected to a patch cord with no light administered. Whether mice received light or no light first was randomized. This was repeated and counterbalanced on the following day. Following two days of conditioning, CPP was tested by allowing mice to freely explore the entire arena for 20 min. The experiment was performed similarly for CPP in response to 4 Hz stimulation, except mice were conditioned to light administered in four bouts of 4 Hz stimulation for 25 s with 15 s between bouts. For the experiments testing the effect of chronic stress and fluoxetine treatment on CPP, mice were subjected to stress, fluoxetine treatment, and CPP as described above. Sucrose preference and CPP were tested in all mice before stress. Mice were then exposed to chronic stress, and once sucrose preference was lost, mice underwent the CPP protocol again. Following this, mice were treated with fluoxetine and restoration of sucrose preference was measured followed by CPP. A separate group of mice continued to undergo stress but were not treated with fluoxetine. The experimenter was blinded to the groups during testing and analysis. For experiments testing the effect of synaptic silencing, social interaction was used to induce CPP. Set up and habituation were performed as described above. On the following day, mice were connected to the patch cord and confined to one compartment in the presence of a female mouse (target animal) while ~9–10 mW 473 nm light was delivered (3 s on/3 s off) for 30 min. The target mouse was confined in a small wire cage to permit interaction. In a second session on the same day (~4 h later), mice were confined to the other side of the arena while connected to a patch cord with no light administered and in the absence of a target animal. Whether mice received light or no light first was randomized. This was repeated and counterbalanced each day. 
Following three days of conditioning, CPP was tested by allowing mice to freely explore the entire arena for 20 min. Social interaction Social interaction was evaluated in a 33.65 cm × 33.65 cm arena with a 9.5 cm diameter × 10 cm height wire cage positioned on one side to hold the target animal. Behaviour was monitored using a camera positioned above the arena, and data were collected using Anymaze software (Stoelting). Mice were connected to a patch cord to deliver light (~9–10 mW, 473 nm, 3 s on/3 s off) and placed on the side of the arena opposite the target cage in the absence of a target animal. Mice were allowed to freely explore for 150 s, after which they were returned to their home cages briefly while a target animal was placed in the target cage. Mice were then placed back into the arena opposite the target cage and allowed to explore for 150 s. The time spent interacting with the target animal was defined by entry into the area immediately surrounding the target cage. In vivo electrophysiology CaMKIIa-ChR2(H134R)–eYFP (UNC) was injected into the vHipp as described above, an optic fibre (Thorlabs) was implanted over the vHipp and a craniotomy was made over the NAc (from bregma in mm: +1.6 AP, +0.6 ML). A 16-channel silicon recording probe (A1x16-Poly2-5mm-50s-177-A16, NeuroNexus) was lowered at a rate of 100 µm/s to a depth of –4.5 mm to target the NAc shell. After allowing 20 min for the recording to stabilize, 10-ms light pulses (473 nm wavelength, Plexbright LED) were delivered through the optic fibre at 2.5 s intervals. After a 10-min baseline recording, 4 stimulation trains (either 100 Hz or 4 Hz, 4 ms pulse width, 15 s interstimulus interval) were applied through the optic fibre as described for slice physiology experiments, before resuming recording conditions identical to baseline for an additional 40 min. A mock stimulation group was included as a control (after 10 min of baseline recording, light pulses were stopped for 2 min before proceeding with recordings). After termination of the recording, the silicon probe was removed, and mice were sutured and returned to their home cage. Each mouse was recorded on each hemisphere for two recording conditions, and the order of hemisphere and stimulation protocol was counterbalanced across subjects. Light-evoked responses in the LFP were analysed using Neuroexplorer software (Plexon). Peri-event histograms of LFP responses were computed with 40-ms bins around the onset of light stimulation and averaged over 24 trials (to yield 1-min intervals). The difference between the LFP amplitude in the 40-ms bin immediately preceding light onset and the peak LFP deflection was calculated for each 1-min interval and was then plotted as a function of time. Only channels with significant light-evoked changes in the LFP response (as determined by repeated t -test during baseline recordings) were used in the analysis. HFS-induced c-Fos expression ChR-expressing mice were connected to patch cords while in their home cage, and 100 Hz or 4 Hz blue light was administered as described above. 100 Hz stimulation was administered to YFP-expressing mice. Approximately 70 min later, mice were anaesthetized with isoflurane. Once anaesthetized, the mice were perfused transcardially with 0.9% saline followed by 4% paraformaldehyde. Brains were removed, postfixed overnight in 4% paraformaldehyde, and then transferred to 0.1 M phosphate buffer (PB). Brains were sectioned (40 µm) through the rostro-caudal extent of the NAc using a vibratome.
Sections were stored free-floating in 0.1 M PB. Sections were incubated in blocking buffer (0.1 M PB, 3% Triton X-100, 0.5% goat serum) for 2 h. Sections were incubated in rabbit anti-c-Fos (Santa Cruz sc-52; 1:1,000) overnight at 4 °C and then visualized with a goat anti-rabbit fluorescent secondary antibody (Alexa Fluor 546). Sections were mounted on microscope slides and coverslipped with Vectashield. Slides were viewed and imaged on a Nikon Eclipse E400. Photoshop was used to count c-Fos-positive cells and to measure the area of the region from which cells were counted. The number of c-Fos-positive cells was normalized to the area of the region. The investigator was blinded to the groups during processing of tissue and cell counting. Hippocampus–NAc projection labelling A retrograde virus expressing Cre recombinase (AAV5-hSyn-Cre-hGH; Penn Vector Core) was injected into the NAc shell (from bregma: anterior/posterior: +1.6, lateral: +0.6, dorsal/ventral: –4.5), and a Cre-dependent virus (AAV2-DIO-ChR2–eYFP) was injected into the vHipp (from bregma anterior/posterior: –3.7, lateral: +3.0, dorsal/ventral: –4.8 from top of skull). Viruses were expressed for approximately 8 weeks to allow labelling of hippocampal cells and their projections in the brain. Mice were then perfused and brains postfixed as described above. 100-µm sections were made through the rostro-caudal extent of the brain using a vibratome. Sections were mounted and coverslipped with Vectashield. 10× images were taken using a W-1 spinning disk confocal microscope (Nikon), and z -stacks were taken at 100× on an LSM 710 NLO (Zeiss). Maximum intensity projections of the z -stacks were generated in ImageJ. Statistical analyses and data Statistical analysis was performed using GraphPad Prism 6 software. When results are compared before and after HFS, n represents the number of cells or units. For all other experiments, n represents the number of mice. For electrophysiological experiments, this represents multiple cells recorded and averaged from each mouse. For box plots, the line in the middle of the box is plotted at the median. The box extends from the 25th to 75th percentiles. Whiskers represent minimum and maximum. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Datasets are available from the corresponding author upon request.
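As a concrete companion to the LTP quantification described in the Electrophysiology section above, the sketch below normalises per-sweep EPSC peak amplitudes to the pre-HFS baseline. It assumes sweeps evoked at 0.1 Hz and a vector of peak amplitudes; all names are illustrative, not the study's analysis code:

```python
import numpy as np

def ltp_timecourse(epsc_peaks, sweeps_per_bin=6, baseline_sweeps=60):
    """Normalise evoked-EPSC peak amplitudes to the pre-HFS baseline.

    Sweeps are evoked at 0.1 Hz, so six consecutive sweeps form a 1-min
    bin; each bin is expressed as a percentage of the mean of the 10-min
    (60-sweep) baseline recorded immediately before HFS.
    """
    peaks = np.abs(np.asarray(epsc_peaks, dtype=float))
    baseline = peaks[:baseline_sweeps].mean()
    n_bins = peaks.size // sweeps_per_bin
    binned = peaks[:n_bins * sweeps_per_bin].reshape(n_bins, sweeps_per_bin).mean(axis=1)
    return 100.0 * binned / baseline  # percent of baseline, one value per minute
```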
In new pre-clinical research, scientists at the University of Maryland School of Medicine (UMSOM), led by Scott Thompson, Ph.D., Professor of Physiology, have identified changes in brain activity linked to the pleasure and reward system. The research, published in the journal Nature, provides new insights into how the brain processes rewards, and advances our understanding of addiction and depression. The research, which was conducted by Tara LeGates, Ph.D., a Research Associate in the Department of Physiology, discovered that the strength of signals between two brain regions, the hippocampus and the nucleus accumbens, is critical for processing information related to a rewarding stimulus, such as its location. "These two parts of the brain are known to be important in processing rewarding experiences," said Dr. Thompson. "The communication between these regions is stronger in addiction, although the mechanisms underlying this were unknown. We also suspected that opposite changes in the strength of this communication would occur in depression. A weakening of their connections could explain the defect in reward processing that causes the symptom of anhedonia in depressed patients." Anhedonia is the inability to enjoy normally pleasurable experiences, such as food, being with friends or family, and sex. This research uncovered a key circuit in the brain of mice that is important for goal-directed behaviors and shows that the strength of the signals in this circuit is changeable, a process referred to as plasticity. Reward circuits and the molecular components that underlie their plasticity represent new targets for development of treatments for disorders like addiction and depression. Using Light-Sensitive Proteins To activate or inhibit this connection, researchers used special light-sensitive proteins introduced into specific neurons in the brains of the mice. In mice with the light-sensitive protein that stimulated the neurons, just four seconds of light exposure not only activated this hippocampus-to-nucleus accumbens pathway while the light was on but also persistently reinforced the strength of the signals along this pathway, creating an artificial reward memory. A day later, the mice returned to the place where the artificial memory was created, even though they never experienced an actual reward there. Researchers then used light to silence the same pathway in mice with the light-sensitive protein that inhibited the neurons and found that this pathway is required for associating a reward with its location. The mice no longer showed a preference for the place where they had interacted with another mouse. The researchers also examined this circuit in depressed mice. This pathway could not be enhanced using the stimulating light-sensitive protein. After receiving antidepressant medication, the researchers could enhance this pathway using the light-sensitive protein and create artificial reward memories in the mice. This work was funded by the National Institutes of Health, the Whitehall Foundation, and the Brain & Behavior Research Foundation (NARSAD Young Investigator Grant). "These exciting results bring us closer to understanding what goes wrong in the brains of clinically depressed patients," said UMSOM Dean E. Albert Reece, MD, Ph.D., MBA, who is also the Executive Vice President for Medical Affairs, University of Maryland, and the John Z. and Akiko K. Bowers Distinguished Professor.
10.1038/s41586-018-0740-8
Medicine
Just a few minutes of light intensity exercise linked to lower death risk in older men
Barbara J Jefferis et al. Objectively measured physical activity, sedentary behaviour and all-cause mortality in older men: does volume of activity matter more than pattern of accumulation?, British Journal of Sports Medicine (2018). DOI: 10.1136/bjsports-2017-098733 Journal information: British Journal of Sports Medicine
http://dx.doi.org/10.1136/bjsports-2017-098733
https://medicalxpress.com/news/2018-02-minutes-intensity-linked-death-older.html
Abstract Objectives To understand how device-measured sedentary behaviour and physical activity are related to all-cause mortality in older men, an age group with high levels of inactivity and sedentary behaviour. Methods Prospective population-based cohort study of men recruited from 24 UK General Practices in 1978–1980. In 2010–2012, 3137 surviving men were invited to a follow-up; 1655 (aged 71–92 years) agreed. Nurses measured height and weight, men completed health and demographic questionnaires and wore an ActiGraph GT3x accelerometer. All-cause mortality was collected through National Health Service central registers up to 1 June 2016. Results After a median 5.0 years' follow-up, 194 deaths occurred in 1181 men without pre-existing cardiovascular disease. For each additional 30 min in sedentary behaviour, or light physical activity (LIPA), or 10 min in moderate to vigorous physical activity (MVPA), HRs for mortality were 1.17 (95% CI 1.10 to 1.25), 0.83 (95% CI 0.77 to 0.90) and 0.90 (95% CI 0.84 to 0.96), respectively. Adjustments for confounders did not meaningfully change estimates. Only LIPA remained significant on mutual adjustment for all intensities. The HR for accumulating 150 min MVPA/week in sporadic minutes (achieved by 66% of men) was 0.59 (95% CI 0.43 to 0.81) and 0.58 (95% CI 0.33 to 1.00) for accumulating 150 min MVPA/week in bouts lasting ≥10 min (achieved by 16% of men). Sedentary breaks were not associated with mortality. Conclusions In older men, all activities (of light intensity upwards) were beneficial and accumulation of activity in bouts ≥10 min did not appear important beyond total volume of activity. Findings can inform physical activity guidelines for older adults. Keywords: physical activity, sedentary behaviour, accelerometer, mortality, bouts. Nearly all epidemiological evidence used to estimate the shape of the dose–response curve between physical activity (PA) and mortality is based on self-reported PA. 1 Moderately active compared with inactive adults have 20%–30% reductions in all-cause mortality, with greater reductions in older (>65 years) than middle-aged adults. 2 PA is a key determinant of longevity globally. 3 Current activity guidelines suggest accumulating ≥150 min moderate to vigorous PA (MVPA) per week in bouts lasting ≥10 min. 4 5 The 10 min bout requirement was based on trial data for cardiometabolic risk factors only, not clinical end points.
5 In order to test whether the accumulation of MVPA in ≥10 min bouts affects risk of mortality, prospective cohort studies with device-measured physical activity (which can provide minute-by-minute data for calculation of bouts) and mortality data are required, but few studies have such data. Such data can also inform whether accruing sedentary time in prolonged bouts is associated with adverse effects on mortality, as this has been identified as an important research gap. 6 Many studies report that higher levels of self-reported sedentary time are associated with mortality, 7–10 although self-reported sedentary behaviours may suffer from measurement error or recall bias. 11–15 Experimental studies suggest benefits of breaking up sedentary time for metabolic and haemostatic markers. 16 17 Hence, activity guidelines now suggest avoiding 'long' sedentary periods, but without quantifying how 'long' is detrimental. 4 Recently, prospective cohort studies using body-worn devices to measure PA report that more time spent in MVPA is associated with lower mortality risks and sedentary behaviour with higher risks. 18–28 However, few address the question of pattern of accumulation of activity rather than total volume. Most of the studies use the US National Health and Nutrition Examination Survey (NHANES) data set, 18–24 and not all findings are consistent. 18 23 There is little information from other populations and older age groups (>80 years). We address important gaps in knowledge by focusing on older men: older adults are increasingly important given global population ageing. We use a community-dwelling cohort of older British men to investigate how device-measured PA is associated with all-cause mortality (including light PA (LIPA) and sedentary behaviour, which are the predominant activities in this age group 29 ). Importantly, we fill a research gap by investigating dose–response associations, 6 testing for linear and non-linear associations in order to understand whether the reductions in mortality risk for higher levels of physical activity are linear, or if there is a threshold level at which the benefits per unit of activity decrease (and conversely for sedentary behaviour). We also investigate whether, as suggested elsewhere, 30 the association of sedentary behaviour with mortality depends on PA level. Finally, a particularly novel and policy-relevant aspect of this paper is that we investigate patterns of accumulation of activity (including bout length and sedentary breaks) in relation to mortality. Answers to these questions will help inform future guidelines for older adults. Methods Sample The British Regional Heart Study is a prospective cohort study of 7735 men recruited from a single general practice in each of 24 British towns in 1978–1980 (ages 40–59 years). In 2010–2012, survivors (n=3137) were invited to a physical examination. 31 Measurements at 2010–2012 examination Objective physical activity assessment Men wore a GT3x accelerometer (ActiGraph, Pensacola, FL, USA) over the right hip for 7 days, during waking hours, removing it for bathing and swimming (2% reported swimming). Data were processed using standard methods described previously. 29 Non-wear time was excluded using the R package 'PhysicalActivity'. 29 32 By convention, we defined valid wear days as ≥600 min wear time, and included participants with ≥3 valid days.
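The wear-time validity rules just described translate directly into a filtering step. A minimal pandas sketch, assuming a long-format table of worn minutes with illustrative column names (non-wear time already removed); this is not the study's actual R pipeline:

```python
import pandas as pd

def eligible_participants(minutes: pd.DataFrame,
                          min_wear_per_day=600, min_valid_days=3):
    """Keep participants with >=3 valid days (>=600 min of wear per day)."""
    # One row per worn minute, with columns ['participant', 'date'].
    wear_per_day = minutes.groupby(["participant", "date"]).size()
    valid_days = wear_per_day[wear_per_day >= min_wear_per_day]
    n_valid = valid_days.groupby(level="participant").size()
    return n_valid[n_valid >= min_valid_days].index
```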
Each minute of activity was categorised using intensity threshold values of counts per minute (CPM) developed for older adults: <100 for sedentary behaviour (<1.5 Metabolic Equivalent of Task (MET)), 100–1040 for light activity (LIPA) (1.5–3 MET) and >1040 for MVPA (≥3 MET). 33 Body mass index Body mass index (BMI, kg/m 2 ) was calculated from nurse-measured height (Harpenden stadiometer) and weight in light indoor clothing (Tanita body composition analyser (BC-418-MA)). Questionnaire data Men's self-reported information included: current cigarette smoking, alcohol consumption, usual duration of night-time sleep, whether they lived alone and whether they had pre-existing cardiovascular disease (CVD) (ever received a doctor diagnosis of heart attack, heart failure or stroke (with symptoms lasting >24 hours)). Mobility disability was present if the men reported being unable to do any of: (1) walking 200 yards without stopping and without discomfort; (2) climbing a flight of 12 stairs without holding on and taking a rest; or (3) bending down and picking up a shoe from the floor. Social class was based on longest-held occupation at study entry (1978–1980) and categorised as manual and non-manual for parsimony (sensitivity analyses used the full seven categories of occupation and four categories of age leaving education). Region of residence (1978–1980) was grouped into Scotland, North, Midlands and South of England. Mortality Men were followed up for all-cause mortality through National Health Service central registers until 1 June 2016. Patient involvement Participants had the opportunity to contribute their views on future research priorities for the study, and detailed feedback about physical activity levels from the accelerometer study was given on request. A summary of the findings of the study and an update on progress of the accelerometer study was mailed to the participants yearly. Statistical methods Means, medians or proportions of covariates selected a priori were calculated according to quartiles of time spent in MVPA and sedentary behaviour. Cox proportional hazards models were used to estimate the HRs for mortality according to (1) total steps per day and total daily minutes in (2) MVPA, (3) LIPA and (4) sedentary behaviour, measured in 2010–2012. Each activity measure was analysed (1) in quartiles and (2) as a continuous variable. To aid interpretation, HRs were estimated for each increase of 1000 steps, 30 min of sedentary behaviour or LIPA and 10 min of MVPA. Model 1 was adjusted for measurement-related factors (average accelerometer wear time (min/day), season of wear (warm, May to September, or cold, October to April), age, region of residence). Model 2 additionally adjusted for: social class, living alone, duration of sleep, smoking status, alcohol consumption and BMI. Model 3 further adjusted for presence of mobility disability. Model 4 also adjusted for another intensity of PA to investigate whether (1) MVPA and sedentary behaviour and (2) MVPA and LIPA were associated with mortality independently of each other. Model 5 adjusted simultaneously for MVPA, LIPA and sedentary behaviour as continuous variables (partition model). The linearity of associations between each measure of PA and sedentary behaviour and mortality was tested by comparing linear models with quadratic models using a likelihood ratio test in Stata, based on a priori expectations. Where non-linear associations were detected, the shape of the non-linear association was estimated using penalised splines in R.
The penalised spline is a non-parametric estimation method which makes few assumptions about the underlying shape of the association. Predicted values from spline models were plotted. The Akaike information criterion (AIC) was compared between linear and spline models. We estimated the HR for mortality among men who accumulated ≥150 min MVPA/week (1) in bouts lasting ≥1 min and (2) in bouts lasting ≥10 min. For MVPA and LIPA, we also compared minutes in bouts lasting 1–9 min with minutes in bouts of ≥10 min, testing the difference in coefficients using a post hoc test. For sedentary behaviour, we compared bouts lasting 1–15 min, 16–30, 31–60 and >61 min. We estimated the HR for mortality for the number of sedentary breaks per hour (defined as the interruption of a sedentary bout lasting >1 min by ≥1 min of LIPA or MVPA). The number of sedentary breaks per hour was split into quartiles for analysis; models were adjusted for total sedentary time. Sensitivity analyses (reported in the online supplementary appendix 1) investigated (1) the skewed distribution of MVPA, (2) the percentage of the day spent in each activity, (3) excluding the first year of follow-up, (4) excluding men with disability and pre-existing CVD, (5) including men with pre-existing CVD and (6) confounding by socioeconomic status. Analyses were conducted in Stata V.14.2 34 and R V.3.4.0. 35 Results Of 3137 surviving men, 1566 (50%) agreed to participate and returned an accelerometer with data. Of these, 1528 (49%) had ≥600 min/day wear time on ≥3 days. 254 men with pre-existing heart attack, heart failure or stroke were excluded, leaving 1274 men. Participants' mean age was 78.4 (range 71–92) years ( table 1 ). Mean accelerometer wear time was 855 min/day, of which 616 min was in sedentary behaviour and 199 min in LIPA. MVPA minutes had a right-skewed distribution, median 33 min (IQR 16–56) ( table 1 ). There were dose–response associations across quartiles of MVPA, whereby men who were more active were younger, less likely to smoke cigarettes and had lower alcohol consumption, BMI and prevalence of mobility disability, and spent less time in sedentary behaviour ( table 1 ). Similarly, dose–response associations, in the opposite direction, were observed over quartiles of sedentary behaviour (data not presented). The distribution of bouts spent in each activity intensity is presented in online supplementary table 1 (supplementary appendix 1, bjsports-2017-098733supp001.docx). Table 1 Characteristics of British men without pre-existing CVD or heart failure, by quartile of daily minutes spent in MVPA, measured in 2010–2012 (n=1274). PA, sedentary behaviour and all-cause mortality During a median follow-up of 5.0 years (range 0.2–6.1), 194 deaths occurred. For each additional 30 min in sedentary behaviour or LIPA, or 10 min in MVPA, HRs for all-cause mortality (model 1) were respectively 1.17 (95% CI 1.10 to 1.25) ( table 2 ), 0.83 (95% CI 0.77 to 0.90) ( table 3 ) and 0.90 (95% CI 0.84 to 0.96) ( table 4 ). For each additional 1000 steps/day the HR was 0.84 (95% CI 0.78 to 0.91) ( table 5 ). Adjustments for sociodemographic factors, health behaviours and sleep time (model 2) and mobility disability (model 3) minimally affected the estimates and CIs.
Adjustment for MVPA (model 4) did not meaningfully change associations for sedentary behaviour ( table 2 ) or LIPA ( table 3 ), but adjustment for sedentary time reduced the association for MVPA to 1.00 (95% CI 0.92 to 1.09) ( table 4 ). In the partition model (model 5, tables 2–4 ), only LIPA was significant at HR 0.86 (95% CI 0.78 to 0.94 per 30 min/day) on mutual adjustment for MVPA, sedentary behaviour and sleep time. There were dose–response associations across quartiles of activity, with higher risk in higher quartiles of sedentary behaviour ( table 2 ) and lower risk in higher quartiles of MVPA ( table 4 ) and steps ( table 5 ). Table 2 Association of minutes per day in sedentary behaviour with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. Table 3 Association of minutes per day in light physical activity with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. Table 4 Association of moderate to vigorous physical activity with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. Table 5 Association of steps per day with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. Shape of associations Likelihood ratio tests suggested a better fit for quadratic than linear models of the associations of step count or MVPA minutes with all-cause mortality (both P<0.001). In models for steps and MVPA, the increment in goodness of fit (based on AIC) between linear and spline models was minimal (online supplementary table 2 ). Plots of estimated splines (online supplementary figures 1 and 2 ) did not show great deviations from linearity. Hence, for clinical interpretation, the simpler linear model was adequate. Bouts of activity and all-cause mortality Table 6 presents the HR for mortality for each minute of MVPA spent in bouts; the HR per minute of MVPA spent in bouts lasting 1–9 min was 0.99 (95% CI 0.98 to 1.00), and 0.99 (95% CI 0.98 to 1.01) per minute of MVPA spent in bouts lasting ≥10 min; HRs did not differ (post hoc test P=0.59). Equivalent estimates for LIPA were HR 0.99 (0.99, 1.00) and 1.00 (0.99, 1.01), respectively (HRs did not differ; post hoc test P=0.48). Adjusting for presence of mobility disability attenuated HRs. Table 6 Association of duration of bouts of sedentary behaviour, LIPA and MVPA* with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. The HR for accumulating 150 min MVPA/week in sporadic minutes (achieved by 66% of men) was 0.59 (95% CI 0.43 to 0.81) in model 1, and was not meaningfully changed in models 2 and 3 (data not presented). The HR for accumulating 150 min MVPA/week in bouts lasting ≥10 min (achieved by 16% of men) was 0.58 (95% CI 0.33 to 1.00) in model 1, and changed little in model 3. The model for 'meeting the guidelines in bouts of ≥1 minute' (yes/no) is not adjusted for total MVPA time per week, because the binary variable cuts the total MVPA time per week at 150 min/week, so the two are highly correlated (r>0.8). The numbers of minutes spent in sedentary bouts lasting 1–15 min, 16–30, 31–60 and >61 min were all similarly associated with mortality; each HR 1.01 (95% CI 1.00 to 1.01) per minute fully adjusted ( table 6 ).
Analyses of the number of sedentary breaks found that the HR for mortality among men in higher quartiles did not differ compared with the lowest quartile ( table 7 ). See online supplementary appendix 1 for results of sensitivity analyses. Table 7 Association of the number of sedentary breaks per hour* with all-cause mortality among 1181 British men without pre-existing CHD, stroke or heart failure. Discussion Among community-dwelling older men, we observed consistent prospective associations of higher total daily step count, more minutes spent in LIPA or MVPA, and lower sedentary time with lower risk of all-cause mortality. Associations changed little after adjustment for other health behaviours, BMI, presence of mobility disability and wear time. Associations of LIPA with mortality were only slightly further attenuated after adjustment for time spent in sedentary behaviour and MVPA, although associations between MVPA and mortality were entirely attenuated after adjustment for sedentary behaviour. The lower mortality risks were gained across the spectrum of activity levels, not confined to a particular threshold level. The total volume rather than the pattern of accrual of physical activity was the most important influence on mortality. Our data extend evidence to an older population (range 72–91 years at baseline), which is important as data on the over 80s are sparse, 25 and to a non-US population (most reports use US data, 18–25 27 28 nearly all use one data source). Few studies of device-measured activity and mortality have looked at light activity, 21 36 or tested non-linearity in activity–mortality associations, 24 26 27 and only one investigated bouts of MVPA, 23 whereas we look at specific bouts of MVPA, LIPA and sedentary behaviour, as well as the number of breaks in sedentary time. PA intensity and duration Overall, in our older sample of men, the associations between PA and mortality tended to be stronger than in younger adults, in line with findings of a meta-analysis of self-reported PA in relation to mortality. 2 Comparing our findings with other studies with objective PA data is difficult because definitions of activity intensity and analysis methods vary. We found that each 30 min/day increase in sedentary behaviour was associated with a 15% increase in mortality risk, after exclusion of men with pre-existing CVD and exclusion of the first year of follow-up data. However, the adjustments for LIPA and MVPA in the partition model fully attenuated the association. While an early NHANES study reported that accelerometer-measured sedentary behaviour was associated with incident mortality, 18 a study with longer follow-up and excluding prevalent CVD and deaths in the first 2 years of follow-up did not find significant associations. 23 Additionally, a recent study of older women found that the raised risks of mortality associated with higher sedentary time were fully attenuated after adjusting for MVPA. 28 In our study, each 30 min/day increase in LIPA was associated with a 17% reduction in mortality, which was robust to adjustment for sedentary behaviour and MVPA, suggesting that the increase in LIPA rather than the reduction in sedentary behaviour was most important. In a younger NHANES sample, a reduction in mortality of 16% was found per hour of LIPA. 36 They defined LIPA as >2020 CPM (compared with >1040 CPM in our study), and did not adjust for MVPA or account for pre-existing disease.
36 Another analysis of NHANES found a 17% reduction in mortality per hour of LIPA adjusted for MVPA, but using lower cut points (100–760 CPM). 24 In contrast, a study of older women did not find that LIPA was associated with consistent reductions in mortality, although a different definition of LIPA was used. 28 We found that each 10 min/day increase in MVPA was associated with a 10% reduction in mortality (approximately 75% reduction per hour), which was not explained by adjustment for behavioural and social confounders and mobility disability, whereas in NHANES data the adjusted estimate was approximately a 40% reduction per hour of MVPA, but using a lower cut point (>760 CPM) to define MVPA. 24 However, in models adjusting simultaneously for all intensities of activity, significant associations were observed only for LIPA, suggesting that among older men the lighter intensity stimulus is sufficient to lower mortality risk. The associations between LIPA and mortality were robust to adjustment for behavioural and social confounders and mobility disability, but future work should investigate the dose of activity that is protective against geriatric syndromes (such as cognitive and functional limitations), which may be on the pathway to raised risks of mortality and are increasingly important for elderly health and well-being. We found that each increase of 1000 steps/day was associated with a 15% reduction in mortality, compared with a 6% reduction in the younger Australian and Tasmanian cohorts (average age <60 years at baseline). 26 Non-linearity of associations Given the very marginal benefits of the non-linear models, we concluded that more steps, LIPA and MVPA and less sedentary behaviour are beneficial, rather than there being a particular threshold for benefits to accrue. Pattern of activity: bouts and breaks Many PA guidelines advise accumulating MVPA in bouts lasting over 10 min and avoiding long spells of sedentary behaviour. 4 5 If the pattern of activity beyond total volume were important, we would expect the minutes spent in bouts of MVPA lasting ≥10 min to be more strongly associated with mortality than minutes spent in shorter bouts (and the same for shorter bouts of sedentary behaviour), but our findings did not support this. Furthermore, the benefit from accumulating 150 min MVPA/week sporadically was very similar to that from accumulating it in bouts lasting ≥10 min. This suggests that the total volume of PA rather than the pattern is important. Hence, for older men, for all-cause mortality at least, accumulating bouts of activity lasting ≥10 min may not be important. However, these analyses should be replicated in larger samples. We found no evidence that sedentary breaks were associated with mortality risk once total sedentary time was accounted for, indicating that in this age group total time rather than pattern of accumulation may be key. Taken together with the partition analysis results, this suggests that guidelines for older adults may do better to focus on increasing time spent in light (or more intense) activity, in order to gain the benefits of activity and by implication displace sedentary time, rather than encouraging breaks in sedentary periods as a means to reduce sedentary bout length.
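To make the bout analyses concrete, the sketch below extracts MVPA bout lengths from a week of minute-level intensity labels and applies the two 150 min/week criteria compared above (sporadic minutes versus bouts lasting ≥10 min). It is an illustration only, assuming the input is one boolean per worn minute, in time order:

```python
def bout_lengths(is_mvpa):
    """Lengths (in min) of consecutive runs of MVPA minutes."""
    runs, count = [], 0
    for minute_is_mvpa in is_mvpa:
        if minute_is_mvpa:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def meets_150min_week(is_mvpa, target=150, long_bout=10):
    """Return (met sporadically, met in >=10-min bouts) for one week."""
    runs = bout_lengths(is_mvpa)
    sporadic = sum(runs) >= target                        # bouts of >=1 min all count
    in_bouts = sum(r for r in runs if r >= long_bout) >= target
    return sporadic, in_bouts
```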
To our knowledge, only two other studies have directly examined bouts; one found that more time spent in MVPA bouts of ≥10 min was associated with lower mortality, but that sedentary bouts were not associated with mortality, 23 and the other study concluded that longer sedentary bouts were associated with raised mortality risk. 27 Interactions of physical activity with other variables It has been suggested that the raised mortality risks associated with higher sedentary time are heightened in people with low MVPA levels. 30 We did not find strong evidence to suggest this, but a stratified analysis suggested stronger associations between sedentary behaviour and mortality in the less active men. This is consistent with a meta-analysis of over 1 million individuals using self-reported PA and sedentary behaviour, which found that the risks of sedentary behaviour were more pronounced in less active individuals. 30 Two analyses of device-measured activity in NHANES data reported similar patterns, 19 24 but in a study of older adults, the reverse was found. 25 Strengths and limitations This study benefits from prospectively collected data on exposures, important confounders and mediators, and mortality. PA was measured using accelerometers and the PA intensities defined using age-appropriate and validated cut points. 33 The sedentary behaviour measure does not include postural data and could include some standing time; however, hip-worn ActiGraph-measured sedentary behaviour has minimal bias compared with thigh-worn activPAL-measured sedentary behaviour (correlation r=0.76) in a sample of middle-aged adults. 37 In a sample of healthy older adults, the ActiGraph cut point of <100 CPM has an estimated 93% sensitivity and 58% specificity; 11.8% of time classified by activPAL as standing was classified by accelerometer as sedentary 38 ; however, in comparison, our sample is older and less healthy, so likely to engage in less prolonged standing time, which would improve classification. The response rate to the accelerometer study was similar or superior to that in other studies of older adults; nevertheless, participants were on average younger and had healthier behaviours than non-participants, and may therefore have been more physically active and less sedentary than the general population. Data are from a population-based cohort of community-dwelling, older, predominantly white British men, so results may not apply to women, other ethnicities or younger men; however, other studies have not found evidence that the associations between PA or sedentary behaviour and mortality differ by gender, 18 24 26 27 36 and a recent study of older women found associations in the same direction as ours, although due to methodological differences it is hard to compare effect sizes. 28 Given that LIPA and sedentary time are highly correlated, it can be difficult to distinguish their effects; we did not use isotemporal substitution analyses because the high correlation between sedentary behaviour and LIPA caused problems with collinearity and model convergence. In sensitivity analyses, our findings did not meaningfully change after exclusion of men with mobility disability, prevalent CVD and the first year of follow-up, suggesting findings are not likely to be due to reverse causality.
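One computational note on the per-increment hazard ratios reported above: a Cox model fitted on minutes/day yields a per-minute log-hazard coefficient β, and the per-30-min HR is exp(30β). A minimal, self-contained sketch using the Python lifelines package on synthetic data; this illustrates the computation only and is not the study's actual Stata/R pipeline (all names and parameter values are illustrative):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lipa_min_per_day": rng.normal(199, 60, n),  # centred on the cohort mean reported above
    "age": rng.normal(78, 4, n),
})
# Synthetic survival times whose hazard falls with LIPA (illustration only).
hazard = 0.05 * np.exp(-0.006 * (df["lipa_min_per_day"] - 199) + 0.05 * (df["age"] - 78))
t = rng.exponential(1.0 / hazard)
df["followup_years"] = np.minimum(t, 5.0)        # administrative censoring at ~5 years
df["died"] = (t <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")

beta = cph.params_["lipa_min_per_day"]           # log-hazard per extra minute/day
print("HR per 30 min/day of LIPA:", float(np.exp(30 * beta)))
```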
Conclusions The dose–response association between sedentary behaviour and mortality, together with the inverse associations of MVPA and LIPA with mortality, suggests that among older men there are sustained benefits to longevity from physical activity of all intensities, from LIPA upwards. Results suggest that all activities, however modest, are beneficial. The finding that LIPA is associated with lower risk of mortality is especially important among older men, as most of their daily PA is of light intensity. Furthermore, the pattern of accumulation of physical activity did not appear to alter the associations with mortality, suggesting that it would be beneficial to encourage older men to be active irrespective of bouts. Future work should replicate the investigation into bouts of activity in larger samples and including women. Given the rapid decline in physical activity with age among the oldest old populations, encouraging even light activity may provide benefits for longevity. What are the findings? In older British men, accumulating more minutes of activity from light intensity upwards was associated with lower all-cause mortality. There was no evidence to suggest that accumulating moderate to vigorous activity in bouts lasting ≥10 min lowered risk of mortality compared with accumulating activity in shorter bouts, nor that breaking up sedentary time was associated with lower mortality risks. How might it impact on clinical practice in the future? Findings could refine physical activity guidelines and make them more achievable for older adults with low activity levels: first, by stressing the benefits of all activities, however modest, from light intensity upwards; second, by encouraging the accumulation of activity of all intensities without the need to sustain bouts of 10 min or more. Acknowledgments We acknowledge the British Regional Heart Study team for data collection.
Clocking up just a few minutes at a time of any level of physical activity, including light intensity, is linked to a lower risk of death in older men, suggests research published online in the British Journal of Sports Medicine. Provided the recommended 150 minute weekly tally of moderate to vigorous physical activity is reached, total volume, rather than activity in 10 minute bouts as current guidelines suggest, might be key, the findings indicate. This lower level of intensity is also likely to be a better fit for older men, most of whose daily physical activity is of light intensity, say the researchers. Current exercise guidelines recommend accumulating at least 150 minutes a week of moderate to vigorous physical activity in bouts lasting 10 or more minutes. But such a pattern is not always easy for older adults to achieve, say the researchers. To find out if other patterns of activity might still contribute to lowering the risk of death, the researchers drew on data from the British Regional Heart Study. This involved 7735 participants from 24 British towns, who were aged between 40 and 59 when the study started in 1978-80. In 2010-12, the 3137 survivors were invited for a check-up, which included a physical examination, and questions about their lifestyle, sleeping patterns, and whether they had ever been diagnosed with heart disease. They were also asked to wear an accelerometer—a portable gadget that continuously tracks the volume and intensity of physical activity—during waking hours for 7 days. Their health was then tracked until death or June 2016, whichever came first. In all, 1566 (50%) men agreed to wear the device, but after excluding those with pre-existing heart disease and those who hadn't worn their accelerometer enough during the 7 days, the final analysis was based on 1181 men, whose average age was 78. During the monitoring period, which averaged around 5 years, 194 of the men died. The accelerometer findings indicated that total volume of physical activity, from light intensity upwards, was associated with a lower risk of death from any cause. Each additional 30 minutes a day of light intensity activity, such as gentle gardening or taking the dog for a walk, for example, was associated with a 17 percent reduction in the risk of death. This association persisted even after taking account of potentially influential lifestyle factors, such as sedentary time. Whilst the equivalent reduction in the risk of death was around 33 percent for each additional 30 minutes of moderate to vigorous intensity physical activity a day, the benefits of light intensity activity were large enough to mean that this too might prolong life. And there was no evidence to suggest that clocking up moderate to vigorous activity in bouts of 10 minutes or more was better than accumulating it in shorter bouts. Sporadic bouts of activity were associated with a 41 percent lower risk of death; bouts lasting 10 or more minutes were associated with a 42 percent lower risk. Sporadic bouts seemed easier to achieve, as two thirds (66%) of the men achieved their weekly total of moderate to vigorous physical activity in this way, while only 16% managed to do so in bouts of 10 or more minutes. Finally, there was no evidence to suggest that breaking up sitting time was associated with a lower risk of death. This is an observational study, so no firm conclusions can be drawn about cause and effect.
And those who wore the accelerometer tended to be younger and have healthier lifestyles than those who didn't, so this might have skewed the results, say the researchers. Nor is it clear if the findings would be equally applicable to younger age groups or older women. Nevertheless, the results could be used to refine current physical activity guidelines and make them more achievable for older adults, suggest the researchers. Future guidance might emphasise that all physical activity, however modest, is worthwhile for extending the lifespan—something that is particularly important to recognise, given how physical activity levels tail off rapidly as people age, they point out. "[The] results suggest that all activities, however modest, are beneficial. The finding that [low intensity physical activity] is associated with lower risk of mortality is especially important among older men, as most of their daily physical activity is of light intensity," write the researchers. "Furthermore, the pattern of accumulation of physical activity did not appear to alter the associations with mortality, suggesting that it would be beneficial to encourage older men to be active irrespective of bouts," they add.
10.1136/bjsports-2017-098733
Biology
Foreign vs. own DNA: How an innate immune sensor tells friend from foe
Ganesh R. Pathare et al. Structural mechanism of cGAS inhibition by the nucleosome, Nature (2020). DOI: 10.1038/s41586-020-2750-6 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2750-6
https://phys.org/news/2020-11-foreign-dna-innate-immune-sensor.html
Abstract The DNA sensor cyclic GMP–AMP synthase (cGAS) initiates innate immune responses following microbial infection, cellular stress and cancer 1. Upon activation by double-stranded DNA, cytosolic cGAS produces 2′3′ cGMP–AMP, which triggers the induction of inflammatory cytokines and type I interferons 2, 3, 4, 5, 6, 7. cGAS is also present inside the cell nucleus, which is replete with genomic DNA 8, where chromatin has been implicated in restricting its enzymatic activity 9. However, the structural basis for inhibition of cGAS by chromatin remains unknown. Here we present the cryo-electron microscopy structure of human cGAS bound to nucleosomes. cGAS makes extensive contacts with both the acidic patch of the histone H2A–H2B heterodimer and nucleosomal DNA. Structural and complementary biochemical analyses also find cGAS engaged with a second nucleosome in trans. Mechanistically, binding of the nucleosome locks cGAS into a monomeric state, in which steric hindrance suppresses spurious activation by genomic DNA. We find that mutations to the cGAS–acidic patch interface are sufficient to abolish the inhibitory effect of nucleosomes in vitro and to unleash the activity of cGAS on genomic DNA in living cells. Our work uncovers the structural basis of the interaction between cGAS and chromatin and details a mechanism that permits self–non-self discrimination of genomic DNA by cGAS. Main In the cytoplasm of mammalian cells, the enzyme cGAS is crucial for the detection of double-stranded DNA (dsDNA) during infection 2. On binding dsDNA, cGAS synthesizes the second messenger 2′3′ cyclic GMP–AMP (cGAMP), which in turn stimulates antiviral and pro-inflammatory responses through the adaptor protein stimulator of interferon genes (STING) 2, 3, 4, 5, 6, 7, 10. In addition, the nucleus contains a pool of cGAS that associates strongly with chromatin (Extended Data Fig. 1a) 8, 9, 11. The chromatinized state of intact genomic DNA has been reported to limit cGAS activity 12, 13, and cGAS has been found to bind more tightly to nucleosomes than to the corresponding naked DNA duplexes 13. Here we sought a mechanism that explains how cGAS can be juxtaposed to nucleosomal DNA without undergoing activation. A minimal inhibitory histone unit for cGAS We tested whether histones, the building blocks of nucleosomes, may regulate cGAS inside the nucleus. Treatment of cells with aclarubicin robustly evicts core histones from chromatin 14; in particular, histones H2A and H2B (Extended Data Fig. 1b). Notably, disruption of nucleosomes by aclarubicin also led to mobilization of nuclear cGAS (Extended Data Fig. 1b). Proximity-ligation assays (PLAs) further indicated prominent association of cGAS with histones in situ, which was partially lost upon aclarubicin treatment (Extended Data Fig. 1c, d). Thus, histones appear to dynamically engage cGAS in the nucleus. Consistent with previous work 13, functional analysis of the in vitro enzymatic activity of cGAS revealed that mononucleosomes (hereafter nucleosomes) inhibited DNA-induced cGAMP synthesis (Extended Data Fig. 1e). Likewise, compact chromatin fibres (12-mer nucleosome arrays) suppressed cGAS activity (Extended Data Fig. 1e). H2A–H2B dimers also had an inhibitory effect, but neither H2A nor H2B monomers, nor H3 or H4 monomers, did (Extended Data Fig. 1f, g). Thus, H2A–H2B dimers on their own can suppress cGAS, albeit with weaker overall potency than fully assembled nucleosomes (Extended Data Fig. 1h).
Additional features of chromatin are therefore necessary to exert maximal inhibition. Overall structure of the cGAS–NCP complex To determine how cGAS interacts with nucleosomes, we pursued structural studies. A 1.5:1 molar mixture of human cGAS (residues 155–522) with a 147-bp 601 DNA nucleosome core particle (NCP) resulted in a heterogeneous particle distribution (Extended Data Fig. 2a–d). To select for and stabilize more homogeneous cGAS–NCP complexes, we combined gradient centrifugation with chemical crosslinking (GraFix) 15. Both wild-type (WT) cGAS and cGAS K394E, a mutant impaired in dsDNA-mediated cGAS dimerization 16, were used for structure determination. For the cGAS K394E mutant, we obtained a 4.1 Å reconstruction that revealed two NCPs organized in an NCP1–cGAS1–cGAS2–NCP2 sandwich arrangement with an expected molecular weight of around 560 kDa, consistent with the most prominent peak fraction in multi-angle light scattering under non-crosslinked conditions (Fig. 1a, b, Extended Data Fig. 3, Extended Data Table 1a, Supplementary Videos 1, 2). The two individual nucleosomes are held together by two cGAS protomers. While the first cGAS protomer and its corresponding NCP (designated cGAS1 and NCP1) are well resolved, the second nucleosome–cGAS pair (NCP2 and cGAS2) is less ordered (Extended Data Fig. 3e). In the dimeric NCP1–cGAS1–cGAS2–NCP2 arrangement, each cGAS protomer interacts with the histone octamer of one NCP through histones H2A and H2B and the nucleosomal DNA (for example, cGAS1 and NCP1), while contacting the second nucleosome (for example, cGAS1 and NCP2) primarily through interactions with the nucleosomal DNA (Fig. 1a, b). In the WT cGAS structure, we observed a similar overall structural arrangement, with the NCP1–cGAS1–cGAS2–NCP2 complex at 5.1 Å and an NCP1–cGAS1 structure at 4.7 Å resolution following focused 3D classification (Extended Data Figs. 4, 5, Extended Data Table 1). Given the structural similarity, the higher-resolution cGAS K394E mutant was used for subsequent analysis (Extended Data Figs. 3, 6). Fig. 1: Cryo-electron microscopy structure of cGAS bound to nucleosomes. a, 3D reconstruction of the complex containing two cGAS protomers, cGAS1 (red) and cGAS2 (orange), and two nucleosomal core particles, NCP1 and NCP2, respectively. b, c, Ribbon diagrams of the NCP1–cGAS1–cGAS2–NCP2 complex (b) and the cGAS1–NCP1 complex (c) fit into corresponding electron-density maps. The two lobes of cGAS, N-lobe and C-lobe, are shown in pink and red, respectively. d, Schematic domain architecture for human cGAS (hcGAS) as previously defined 23. Residue numbers are shown and the dotted line indicates the construct used for structural analysis. The dsDNA-sensing regions that are involved in cGAS activation are underlined in purple, and the nucleosome-binding regions that are involved in cGAS inhibition are marked by blue dashed boxes. Full size image Structural insights into the cGAS1–NCP1 complex For the cGAS K394E mutant, focused 3D classification of cGAS1–NCP1 yielded a structure at a resolution of 3.1 Å (Fig. 1c, Extended Data Fig. 3c, f, Extended Data Table 1a). This revealed the binding interface between cGAS and the nucleosome, which is in large part contributed by three contact surfaces on cGAS that interact with the acidic patch of H2A–H2B—a common site involved in protein–nucleosome assemblies 17 (Fig. 2a, Extended Data Fig. 6a–d, Supplementary Video 3):
(1) loop 1 (residues 234–237), a canonical acidic patch contact involving R236 of cGAS forming a salt bridge to E62 and E65 of H2A (Fig. 2a, Extended Data Fig. 6b); (2) loop 2 (residues 255–258), a proximal cGAS loop that connects β4 and α4, with R255 forming a salt bridge to H2A residues D91, E93 and E62 (Fig. 2a, Extended Data Fig. 6c); and (3) loop 3 (residues 328–330), which together with the C-terminal end of cGAS helix α5 (residues 354–356) forms multiple interactions with H2A helix α3 (residues 64–71) (Fig. 2a, Extended Data Fig. 6d). In addition to these protein–protein interactions, the cGAS1 C-lobe also forms localized DNA backbone contacts with NCP1, engaging the nucleosomal DNA around super-helical location 6 (SHL6) through residues K347 and K350 18 (Fig. 2a (right panel), Extended Data Fig. 6e, f). Fig. 2: The cGAS1–NCP1 complex and structural mechanism of inhibition. a, Magnified view of the cGAS1(K394E)–NCP1 complex bipartite interactions, cGAS–histone interactions (left) and cGAS–nucleosomal DNA interactions (right). b, EMSA gel showing the interaction of nucleosomes (40 ng μl⁻¹) with a concentration gradient of WT, R255A and R236A cGAS (from 50 to 6 ng μl⁻¹; 1:2 step dilutions); the black arrowheads indicate higher-order cGAS–NCP complexes. c, In vitro cGAMP synthesis of WT, R255A and R236A cGAS with or without a concentration gradient of chromatin (from 5 to 0.3125 nM; 1:2 step dilutions), normalized by cGAMP levels in the absence of chromatin for each individual mutant. d, EMSA gel showing the interaction of nucleosomes (40 ng μl⁻¹) with increasing concentrations of WT or K350A/L354A cGAS (from 100 to 12 ng μl⁻¹; 1:2 step dilutions). e, In vitro cGAMP synthesis of WT and K350A/L354A hcGAS with or without chromatin (5 nM), normalized by cGAMP levels in the absence of chromatin for each individual mutant. Data are representative of three independent experiments showing similar results (b, d) or mean ± s.d. of n = 3 independent experiments (c, e). One-way analysis of variance (ANOVA) with post-hoc Dunnett multiple comparison test; ** P = 0.0092, * P = 0.092 (WT) and * P = 0.0311 (R236A) (c) and two-tailed Student's t-test; * P = 0.0192 (e). The data points are from independent experiments. f, Overview of the active hcGAS–DNA 2:2 complex with two distinct dsDNA-binding surfaces (A-site and B-site) 16 (Protein Data Bank (PDB) ID: 4LEY). g, Superposition of the hcGAS–dsDNA (f) and cGAS1–NCP1 complexes illustrating the incompatibility of DNA ligand binding (dsDNA1 in yellow) to cGAS in the nucleosome-bound configuration. h, Model based on superpositioning of the cGAS1–NCP1 complex onto DNA-bound cGAS oligomers as previously defined 22 (PDB: 5N6I). For gel source data, see Supplementary Fig. 1. Full size image To investigate the functional importance of the observed interfaces between cGAS and NCP1, we performed site-directed mutagenesis and carried out electrophoretic mobility shift assays (EMSAs). Mutations of cGAS residues that engage the nucleosomal acidic patch (R255A and R236A) completely abrogated nucleosome binding (Fig. 2b). In in vitro enzymatic activity assays, we found that both cGAS R255A and cGAS R236A were no longer inhibited by chromatin (Fig. 2c). To corroborate the relevance of the acidic patch interaction for cGAS inhibition, we made use of a peptide derived from latency-associated nuclear antigen (LANA), a well-known acidic patch binder 19.
In the presence of the LANA peptide, but not a corresponding mutant peptide, cGAS was competed off from the nucleosome and regained in vitro DNA-induced activity in the presence of chromatin, as judged by robust cGAMP synthesis (Extended Data Fig. 7a, b). Of note, cGAS residues K350 and L354, which contact the nucleosomal DNA of NCP1 (Extended Data Fig. 6f, i), also had significant effects on nucleosome binding and inhibition when mutated (Fig. 2d, e, Extended Data Fig. 6f, i). Thus, cGAS is anchored to chromatin through a bipartite interface on nucleosomes composed of the acidic patch and nucleosomal DNA contacts, respectively. Mechanism of cGAS inhibition by NCP1 In canonical binding of dsDNA, two separate surfaces on cGAS, designated A-site and B-site, interact with two individual strands of DNA to promote the assembly of a 2:2 cGAS–DNA complex—the minimal active enzymatic unit 20, 21, 22, 23 (Fig. 2f). Moreover, a third DNA-binding site, designated C-site, has been proposed to facilitate cGAS oligomerization in liquid-phase condensation 24. In the NCP-bound configuration, the cGAS A-site, including the zinc thumb, faces the histone octamer disc. Nucleosomal DNA interactions are further enforced by residues that are essential for cGAS dimerization (for example, K394) (Extended Data Fig. 4c, d), although the K394-containing loop and the zinc-finger motif play only a minor role in nucleosome binding (Extended Data Figs. 4c, d, 7f, g). The key cGAS–NCP interaction originates from the B-site (for example, R236, K254, R255 and S328), which also contributes to nucleosomal DNA binding (for example, K347 and L354). The cGAS active site in our structure points away from NCP1, towards the solvent and NCP2, and is principally accessible (Fig. 2). Nucleosome binding hence confers cGAS inactivation in three essential ways (Fig. 2g, h, Supplementary Video 4): first, owing to steric clashes with both the nucleosomal DNA and histones H2A and H2B, cGAS cannot engage dsDNA at the interface between lobe 1 and lobe 2; second, key residues on cGAS that are required for DNA binding and dimerization are tied up in interactions with the nucleosome and, thus, are not available for canonical dsDNA binding and activation (Extended Data Fig. 4c, d); and third, both histones and nucleosomal DNA sterically prevent dimerization of cGAS, an essential prerequisite for enzymatic activity 16, 21, 22. Importantly, the steric restrictions imparted by H2A–H2B are sufficiently pronounced to explain the inability of cGAS to undergo dsDNA-dependent activation in the presence of this histone dimer. The structure thereby provides the mechanism of cGAS inhibition by H2A–H2B, while identifying additional contacts and inhibitory principles that are specific to cGAS inhibition by the nucleosome (Extended Data Fig. 6g–i). We next dissected the contributions of nucleosomal DNA as opposed to linker DNA to cGAS binding and activation. Fluorescence polarization assays revealed that cGAS binds more tightly to nucleosomes with long overhangs than to those without or with only short overhangs, probably owing to the presence of additional DNA-binding sites (Extended Data Fig. 7d, e). We then assessed the catalytic activity of cGAS (WT) and the cGAS acidic patch mutants, R236A and R255A, on nucleosomes with and without an 80-bp dsDNA overhang. Whereas WT cGAS and the cGAS mutants robustly synthesized cGAMP on naked dsDNA, they remained inactive in the presence of nucleosomes that lacked a DNA overhang (Extended Data Fig. 7c).
Thus, nucleosomal DNA is not a good substrate for cGAS activation. Nucleosomes carrying 80-bp-long linker DNA still failed to activate WT cGAS, but elicited activation of both cGAS R236A and cGAS R255A (Extended Data Fig. 7c). Thus, in vitro, WT cGAS preferentially binds to the NCP over linker DNA, limiting its enzymatic activity. In trans interaction between cGAS and NCP2 The cGAS1 protomer also binds to a second nucleosome (for example, cGAS1 and NCP2), predominantly through protein–DNA contacts with the nucleosomal DNA at SHL3. The two NCPs are held about 20 Å apart, with the DNA entry or exit sites of the two nucleosomes pointing roughly 90° away (Fig. 3a). The interaction with the second nucleosome is mediated by phosphate backbone contacts involving conserved cGAS1 residues K285 and K299–R302 (Fig. 3b). Compared to the protein–DNA contacts, fewer and less well-ordered cGAS1–NCP2 protein–protein interactions were observed between a β-hairpin loop (cGAS residues 365–369) extending towards the C-terminal tail of H2B, as well as inter-nucleosomal contacts between two N-terminal tails of histone H4 (Extended Data Fig. 8a, b). The in trans nucleosome interaction interface on cGAS is largely provided by the C-site 24 (Extended Data Fig. 8c–f), which supports higher-order cGAS–NCP assemblies in this structure. Fig. 3: cGAS interactions with the second nucleosome in trans. a, b, The NCP1–cGAS1–cGAS2–NCP2 di-nucleosomal arrangement is shown. A magnified view detailing the interactions between the N-lobe of cGAS1 (pink), the C-lobe of cGAS1 (red) and the nucleosomal DNA of NCP2 (grey) (b, bottom) is also displayed. c, EMSA gel showing the interaction of nucleosomes with increasing concentrations of WT or K285A/R300A/K427A cGAS (100 to 12 ng μl⁻¹; 1:2 step dilutions). The arrowheads highlight free nucleosomes (dark grey), complexed nucleosomes (black) and a putative 1:1 cGAS:NCP assembly (light grey). The experiment shown in c was independently repeated three times with similar results. For gel source data, see Supplementary Fig. 1. Full size image To validate the observed in trans cGAS1–NCP2 interface, we performed EMSAs. We found that the combined mutation of the NCP2-interacting motifs on cGAS (K285, R300 and K427) still allowed cGAS to interact with nucleosomes, as indicated by a prominent EMSA gel shift that probably reflects a 1:1 cGAS1–NCP1 complex (Fig. 3c). However, all higher-order cGAS–NCP assemblies readily detected with WT cGAS were lost when the secondary nucleosome-binding site was mutated. Consistent with the preserved ability to bind to nucleosomes in vitro, mutations to cGAS in the trans interface (K285A/R300A/K427A) had no detectable effect on cGAS intranuclear tethering in reconstituted HeLa cGAS knockout (KO) cells (Extended Data Fig. 8g). Hence, while the bipartite cGAS1–NCP1 interface forms the primary anchoring motif between cGAS and the nucleosome, the secondary cGAS1–NCP2 interface contributes to the formation of higher-order complexes. Effect of structure-based mutations on cellular activity To determine the functional relationship between nucleosome binding and cGAS inhibition in cells, we focused on motifs on cGAS that interact with the acidic patch (R255A and R236A) (Fig. 4a–c) and with the nucleosomal DNA in cis (that is, NCP1–cGAS1; K350A and L354A), the two key in vitro interfaces for cGAS–nucleosome binding (Fig. 2a).
Consistent with our in vitro assays and extending recent work 9, the cGAS mutants R236A and R255A as well as K350A/L354A were strongly defective in nuclear tethering when expressed in HeLa cGAS KO cells (Fig. 4d). Using fluorescence recovery after photobleaching (FRAP), we detected differences between the mutants in their degree of intranuclear mobility, with cGAS R255A showing the highest, R236A intermediate and K350A/L354A the lowest mobility relative to WT cGAS (Fig. 4e, Extended Data Fig. 9a). Notably, the degree of dissociation correlated well with cellular cGAS responses, with R255A expression triggering the highest cGAMP levels, followed by R236A, and cGAS K350A/L354A showing negligible activity (Fig. 4f). Fig. 4: Effect of structure-based mutations on cellular cGAS activity. a, b, Electrostatic surface representation of the NCP disc surface alone (a) or with cGAS (pink ribbon) (b) 31. The electrostatic potential is shown from red (−7) to blue (+7) kT/e. c, A magnified view of contacts between cGAS and the acidic patch of the nucleosome. d, Differential nuclear salt fractionation probed for cGAS and H2B by immunoblot from HeLa cGAS KO cells reconstituted with doxycycline-inducible WT cGAS or cGAS mutants after 2 days of doxycycline treatment. The experiments in d were independently repeated at least three times with similar results. e, HeLa cGAS KO cells were transfected with WT cGAS–GFP or mutant cGAS–GFP and the immobile fraction of nuclear-localized cGAS was assessed by FRAP. Data are mean ± s.d. of n = 3 (left) and n = 4 (right) independent experiments. One-way ANOVA with post-hoc Dunnett multiple comparison test (left) or two-tailed Student's t-test (right). f, cGAMP production from HeLa cGAS KO cells reconstituted with doxycycline-inducible WT cGAS or cGAS mutants after 2 days of doxycycline treatment. Data are mean ± s.d. of n = 4 (left) and n = 5 (right) independent experiments. One-way ANOVA with post-hoc Dunnett multiple comparison test (left) or two-tailed Student's t-test (right). g, HeLa cGAS KO cells reconstituted with doxycycline (Dox)-inducible WT cGAS or cGAS mutants and treated with doxycycline for 24 h were co-cultured with BJ fibroblasts for 24 h. Cells were lysed and mRNA levels of IFI44 (left) and IFIT2 (right) were assessed as indicated. Data are presented as fold induction relative to cells without doxycycline and shown are the mean ± s.d. of n = 4 independent experiments. Two-way ANOVA with post-hoc Tukey multiple comparison test; NS, not significant. Individual data points are from biological replicates. For gel source data, see Supplementary Fig. 1. Full size image We next examined whether the expression of the two most striking cGAS mutants, R236A and R255A, stimulates a type I interferon response. Activation of cGAS not only promotes conventional, cell-autonomous signalling but also elicits cellular activation in trans through the transfer of cGAMP 25. We found that the cGAS mutants triggered only modest upregulation of interferon-stimulated genes when induced in a synchronized manner in mono-cultures of HeLa cells (Extended Data Fig. 9b). This effect may be due to negative-feedback regulation at the level of STING 26, 27, resulting in non-responsiveness towards intracellular cGAMP accumulation over time (Extended Data Fig. 9c, d).
By contrast, in co-culture with human BJ fibroblasts, which serve as naive acceptor cells, the expression of cGAS mutants in HeLa cells induced strong upregulation of interferon-stimulated genes, whereas WT cGAS had no such effect (Fig. 4g). Collectively, these findings suggest that disrupting the interaction of cGAS with the acidic patch of nucleosomes is in itself sufficient to trigger innate immune activation. Discussion We provide the structural basis for cGAS inhibition by nucleosomes: a bipartite interface involving contacts to both the acidic patch of H2A–H2B dimers and the nucleosomal DNA that 'traps' cGAS in an inactive state, in which cGAS can neither engage dsDNA in a manner required for canonical dsDNA sensing nor undergo the dimerization/oligomerization reaction that is required for its catalytic activity. On the basis of our work, we propose that cGAS uses a 'missing-self' recognition strategy to reliably discriminate between self and non-self DNA: instead of focusing on pathogen-specific features that promote activation, as is the case for many pattern recognition receptors 28, cGAS exploits the suppressive activity of nucleosomes, leveraging essentially 'inbuilt identifiers' of eukaryotic genomes, to avert aberrant activity. The motifs responsible for cGAS interactions with the nucleosome are well conserved within cGAS homologues that utilize DNA for the regulation of their catalytic activity (Extended Data Fig. 9e). We propose that the inhibitory interaction of cGAS with nucleosomes is a key element of a multi-layered regulation strategy that allows cGAS to reside in the nucleus without undergoing persistent activation 29, 30. Methods Cell culture and generation of modified cell lines Cells were maintained in DMEM (Life Technologies) containing 10% (v/v) FCS, 1% (v/v) penicillin (10,000 IU)/streptomycin (10 mg) (BioConcept), 4.5 g/l d-glucose and 2 mM l-glutamine. HeLa cells were obtained from Sigma (93021013-1VL) and grown under 5% CO2 and 20% O2. Foreskin fibroblasts (BJ-5ta) were purchased from the American Type Culture Collection (ATCC; CRL-4001) and cultured at 5% O2. cGAS KO HeLa cells were generated using CRISPR–Cas9 technology as described previously 32. The single guide RNA sequences ((5′-3′) forward: CAC CGA GAC TCG GTG GGA TCC ATC G; (5′-3′) reverse: AAA CCG ATG GAT CCC ACC GAG TCT C) were cloned into the plasmid pSpCas9(BB)-2A-GFP (PX458) (52961, AddGene). Plasmid (1 μg) was transfected into HeLa cells with Lipofectamine 2000 (Life Technologies). Single cells were plated into wells of a 96-well plate, selected for GFP expression and expanded to obtain clones, which were tested for the KO phenotype by sequencing and immunoblotting. Clones without cGAS were functionally validated. Lentiviral vectors were produced as described previously, and the HEK 293T cells used for this purpose were a kind gift from Dr. D. Trono, EPFL 33. In brief, HEK 293T cells were transfected with the pCMVDR8.74 and pMD2.G plasmids and the puromycin-selectable lentiviral vector pTRIPZ containing the open reading frame of the protein of interest by the calcium phosphate precipitation method. The supernatant containing lentiviral particles was harvested at 48 h and 72 h, pooled and concentrated by ultracentrifugation. All cell lines used were checked for mycoplasma contamination by PCR on a regular basis and no contamination was found.
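As an aside, the guide-oligo pair quoted above follows the standard annealed-oligo convention for BbsI-based cloning into PX458: a CACC overhang plus an extra 5′ G on the top strand, and an AAAC overhang on the reverse complement. The short sketch below (Python; the helper name px458_oligos is ours, not the authors') reconstructs both published oligos from the 20-nt protospacer and can serve as a sanity check when designing new guides.

```python
# A sketch of the annealed-oligo design for BbsI cloning into
# pSpCas9(BB)-2A-GFP (PX458); overhangs and the extra 5' G follow the
# widely used convention for this vector.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def px458_oligos(guide_20nt: str) -> tuple[str, str]:
    """Return the (forward, reverse) cloning oligos for a 20-nt guide."""
    insert = "G" + guide_20nt  # prepend G for efficient U6-driven transcription
    rev_comp = insert.translate(COMPLEMENT)[::-1]
    return "CACC" + insert, "AAAC" + rev_comp

# Reproduces the cGAS guide oligos quoted in the Methods above.
fwd, rev = px458_oligos("AGACTCGGTGGGATCCATCG")
assert fwd == "CACCGAGACTCGGTGGGATCCATCG"
assert rev == "AAACCGATGGATCCCACCGAGTCTC"
```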
Mutagenesis PCR Point mutants of human cGAS (amino acids 155–522) were generated by site-directed PCR mutagenesis based on the QuikChange Primer Design method (Agilent) using PrimeSTAR Max DNA Polymerase (Takara) and suitable primers. Each mutated gene was cloned into a pTRIPZ vector for lentivirus production, a pET28 vector for expression of a C-terminal 6×His-Halo fusion protein in Escherichia coli, and a pIRESneo3 vector for fluorescence recovery after photobleaching (FRAP) experiments. Fractionation of cellular nuclei BJ-5ta fibroblasts (n = 800,000) were seeded into 10-cm culture dishes. After 2 days, cells were treated with doxorubicin (D1515, Sigma) or aclarubicin (FBM-10-1099, Focus Biomolecules) as indicated, and differential salt fractionations were obtained as follows: cells were lysed in a lysis buffer containing 10 mM HEPES (pH 7.4), 10 mM KCl and 0.5% NP-40 with protease inhibitors (cOmplete, Mini, EDTA-free Protease Inhibitor Cocktail; 11836170001, Sigma). Lysates were cleared by centrifugation and supernatant A was recovered. The pellet was resuspended in a lysis buffer containing 20 mM HEPES (pH 7.4) and 0.25 M NaCl. Supernatant B was recovered after centrifugation. The same procedure was repeated as described above with lysis buffers containing increasing NaCl concentrations (0.5 M, 0.75 M and 1 M NaCl). PLA The Duolink In Situ Detection Reagents Red Kit (DUO94001, DUO92002 and DUO92004, Sigma) was used for the proximity ligation using the following antibodies: histone H2B (1:150; ab52484, Abcam), histone H4 (1:150; ab31830, Abcam) and cGAS (1:150; D1D3G; 15102, Cell Signaling). Cells were seeded onto coverslips (12 mm, Roth) at 40,000 cells per coverslip, fixed with 100% (v/v) methanol for 3 min and blocked with 5% BSA in PBS for 1 h at room temperature. Cells were incubated with the primary antibody in PBS containing 5% (w/v) BSA for 16 h at 4 °C in a humid chamber. After washing with 1× Buffer A (Sigma), cells were processed for the PLA. Briefly, cells were incubated with anti-mouse IgG Duolink In Situ PLA Probe MINUS (1:5; Sigma) and anti-rabbit IgG Duolink In Situ PLA Probe PLUS (1:5; Sigma) for 1 h at 37 °C. Thereafter, cells were washed with Buffer A and incubated in 1× Duolink ligation buffer containing DNA ligase (1:40; Sigma) for 30 min at 37 °C. After incubation, cells were washed in Buffer A and incubated in 1× amplification buffer (Sigma) with DNA polymerase (1:80; Sigma) for 100 min at 37 °C. Cells were washed in Buffer A and incubated for 30 min at 37 °C in 1× Detection Solution Red (Sigma). Cells were then washed in Buffer B (Sigma) and mounted with Duolink In Situ Mounting Medium with DAPI (Sigma). Images were acquired using a ×63/1.4 oil objective on a confocal laser scanning microscope (LSM700, Zeiss). Confocal imaging was performed with Z-sections for at least 10 randomly chosen fields. Maximum intensity projection was applied to the images. The number of PLA-positive signals per cell within the DAPI-positive area was counted using the Cell Counter plugin in Fiji.
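For readers who prefer an automated analogue of this manual counting step, the sketch below (Python with scikit-image; the function name, the Otsu thresholding and the size cut-off are our choices, not the authors' pipeline) estimates PLA puncta per nucleus from a DAPI and a PLA channel of a maximum-intensity projection.

```python
import numpy as np
from skimage import filters, measure, morphology

def pla_signals_per_cell(dapi: np.ndarray, pla: np.ndarray) -> float:
    """Rough estimate of PLA puncta per nucleus from two channel images."""
    # Segment nuclei from the DAPI channel (Otsu threshold is our choice).
    nuclei = dapi > filters.threshold_otsu(dapi)
    nuclei = morphology.remove_small_objects(nuclei, min_size=200)
    n_nuclei = measure.label(nuclei).max()
    # Segment puncta in the PLA channel and keep those inside the nuclear area.
    puncta = pla > filters.threshold_otsu(pla)
    n_puncta = measure.label(puncta & nuclei).max()
    return n_puncta / max(n_nuclei, 1)
```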
cGAMP measurement HeLa cells reconstituted with WT cGAS or cGAS mutants were plated (0.075 × 10⁶ cells/ml) in the presence of doxycycline (0.1–1 μg/ml) for 2 days. Cells were harvested by trypsinization (trypsin-EDTA (0.05%), Life Technologies) for 5 min. Cell pellets were lysed in RIPA lysis buffer containing 50 mM Tris, 150 mM NaCl, 1% (w/v) sodium deoxycholate, 0.03% (v/v) SDS, 0.005% (v/v) Triton X-100, 5 mM EDTA, 2 mM sodium orthovanadate and cOmplete Protease Inhibitor Cocktail (Roche) (pellet from one well of a six-well plate in 130 μl of RIPA) for 30 min on ice. Lysed cells were centrifuged for 5 min at 18,200g and 4 °C. Diluted supernatants were used for the cGAMP ELISA (Cayman 2′-3′-cGAMP ELISA kit, 501700) according to the manufacturer's instructions. Protein concentration in the supernatant was measured using the Pierce BCA protein assay kit and was used to normalize the cGAMP concentration. Immunoblotting Protein extracts were loaded onto 10% or 15% SDS-polyacrylamide gels. cGAS was blotted onto a nitrocellulose membrane (0.45 μm, Bio-Rad) and histones were transferred onto a polyvinylidene difluoride membrane (0.2 μm, Bio-Rad). The primary antibody was incubated in 5% BSA diluted in 1× PBS overnight at 4 °C. The secondary anti-mouse or anti-rabbit horseradish peroxidase (HRP)-conjugated antibodies were incubated for 1 h at room temperature. Proteins were visualized with the enhanced chemiluminescence substrate ECL (Pierce, Thermo Scientific) and imaged using the ChemiDoc XRS Bio-Rad Imager. The following antibodies were used in this study: histone H2B (1:1,000; ab52484, Abcam), histone H2A (1:1,000; ab18255, Abcam), histone H4 (1:1,000; ab31830, Abcam); cGAS (1:1,000, D1D3G; 15102, Cell Signaling), histone H3 (1:1,000; 9715, Cell Signaling), STING (1:1,000; D2P2F, Cell Signaling), lamin A/C (1:1,000; SAB4200236, Sigma), GAPDH (1:1,000; AM4300, Life Technologies), donkey anti-rabbit HRP (1:5,000; 711-036-152, Jackson ImmunoResearch) and donkey anti-mouse HRP (1:5,000; 715-036-151, Jackson ImmunoResearch). Confocal imaging of endogenous cGAS and H2B Cells (n = 40,000) were seeded on coverslips (12 mm, Roth). At 48 h after seeding, cells were fixed with 100% (v/v) methanol for 3 min and blocked with PBS containing 5% (w/v) BSA for 1 h at room temperature. Cells were incubated with the primary antibodies (rabbit anti-cGAS (1:150; 15102S, CST) and mouse anti-H2B (1:150; ab52484, Abcam)) in PBS containing 5% (w/v) BSA for 16 h at 4 °C in a humid chamber. Afterwards, coverslips were washed with PBS three times and then incubated in PBS containing 5 μg/ml DAPI and the secondary antibodies (1:1,000; goat anti-rabbit IgG (H+L) Alexa Fluor 488 conjugate (A11008, Thermo) and donkey anti-mouse IgG (H+L) Alexa Fluor 568 conjugate (A10037, Thermo)). At 1 h post-incubation, coverslips were washed three times with 1× PBS and mounted on microscope slides (15545650, Thermo) using Fluoromount-G (0100-01, SouthernBiotech). Images were acquired using a ×63/1.40 HC Plan-Apochromat oil immersion objective on an SP8-STED 3× confocal microscope (Leica). cGAS labelled with Alexa Fluor 488 was imaged with an excitation laser of 488 nm and an emission window of 492–532 nm, detected with a hybrid detector. Nuclei counterstained with DAPI were imaged with an excitation laser of 405 nm and an emission window of 410–480 nm, detected by a photomultiplier detector. H2B was labelled with Alexa Fluor 568 and imaged with an excitation laser of 561 nm and an emission window of 560–620 nm, detected by a hybrid detector. Images were acquired with a voxel size of 0.0655 × 0.0655 × 1 μm³. FRAP HeLa cGAS KO cells were plated on 35-mm glass-bottom culture dishes (part number P35G-1.5-14-C, MatTek Corporation) at 5,000 cells per dish.
On the next day, cells were transfected with plasmids encoding WT or mutant hcGAS–GFP (pIRESneo3 hs cGAS GFP, pIRESneo3 hs cGAS K350A L354A GFP, pIRESneo3 hs cGAS R236A GFP and pIRESneo3 hs cGAS R255A GFP) using Lipofectamine 2000 following the manufacturer's instructions. After 24 h, cells were used for FRAP experiments, which were performed on a Zeiss LSM 710 confocal microscope at 37 °C with a W-Plan Apochromat ×63/1.0 objective. A circle with a diameter of 1.33 μm (10 pixels) within the hcGAS–GFP signal located inside the nucleus was partially photobleached with a 488-nm laser (100% power) with 20 iterations within 0.200 s. Time-lapse images were acquired over a 20-s time course after photobleaching at 0.200-s intervals with a laser power between 0.4% and 0.6%. Images were processed in Fiji and normalized in FRAP Analyser (software developed at the University of Luxembourg) using the single normalization + full-scale method. FRAP data were fitted to a binding + diffusion circular model using the FRAP Analyser. Extracted immobile-fraction data were plotted in GraphPad Prism 8. Fluorescence polarization An Flc-labelled 21-bp dsDNA oligonucleotide (5ʹ-Flc-GACCTTTGTTATGCAACCTAA-3ʹ) was used as a fluorescent tracer. Increasing amounts of WT or K394E cGAS (0.3–2,500 nM) were mixed with the tracer (10 nM final concentration) in a 384-well microplate (784076, Greiner) at room temperature. The interaction was measured in a buffer containing 20 mM HEPES pH 7.4, 500 μM TCEP, 40 mM NaCl, 10 mM KCl and 0.1% (v/v) pluronic acid. A PHERAstar FS microplate reader (BMG Labtech) equipped with a fluorescence polarization filter unit was used to determine the changes in fluorescence polarization. The polarization units were converted to fraction bound as described previously 34. The fraction bound was plotted versus cGAS concentration and fitted assuming a 1:1 binding model to determine the dissociation constant (Kd) using Prism 7 (GraphPad). Since the oligonucleotide that was used contained a fluorescent label, we refer to these values as apparent Kd (Kapp). All measurements were performed in triplicate. For the competitive titration assays, the cGAS bound to the fluorescent oligo tracer was back-titrated with unlabelled dsDNA (21, 147, 167 and 227 bp) or nucleosomes (146, 167 and 227 bp). These counter-titration experiments were carried out by mixing tracer (10 nM) and cGAS (300 nM), and titrating increasing concentrations of the unlabelled competitor (0–2.5 μM). The fraction bound was plotted versus competitor concentration and the data were fitted with a non-linear regression curve to obtain the IC50 values in Prism 7 (GraphPad). At least two technical replicates were performed per experiment. Cellular activation assays HeLa cells reconstituted with WT cGAS or cGAS mutants were seeded (0.075 × 10⁶ cells/ml) in six-well plates in the presence of doxycycline (1 μg/ml) for 16 h or 40 h. Stimulation with dsDNA (90 bp) was carried out as previously described 33. Briefly, dsDNA (1.6 μg/ml) was transfected using Lipofectamine 2000 (Life Technologies) and cells were harvested 4 h later. For co-culture studies, HeLa cells were treated with doxycycline as described above. After 24 h, cells were collected, washed, and 19,000 cells were mixed with human BJ fibroblasts (0.095 × 10⁶ cells/ml) and incubated overnight.
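To make the 1:1 binding fit in the fluorescence polarization analysis above concrete, here is a minimal sketch (Python/SciPy; the example titration values are invented for illustration) of fitting an apparent Kd from fraction-bound data. It uses the exact quadratic form of the 1:1 binding isotherm, which avoids the free-ligand approximation at tracer concentrations comparable to the Kd; the paper states only that a 1:1 model was fitted in Prism, so this specific parameterization is our assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

TRACER_NM = 10.0  # fixed Flc-dsDNA tracer concentration (nM), from the protocol

def fraction_bound(cgas_nm, kd_nm):
    """Exact (quadratic) 1:1 binding isotherm: fraction of tracer bound
    at total protein concentration cgas_nm for dissociation constant kd_nm."""
    p, d = cgas_nm, TRACER_NM
    term = p + d + kd_nm
    return (term - np.sqrt(term**2 - 4.0 * p * d)) / (2.0 * d)

# Hypothetical titration spanning the 0.3-2,500 nM range used in the assay.
cgas = np.array([0.3, 1, 3, 10, 30, 100, 300, 1000, 2500], dtype=float)
fb = np.array([0.00, 0.01, 0.03, 0.09, 0.23, 0.50, 0.75, 0.91, 0.96])

popt, pcov = curve_fit(fraction_bound, cgas, fb, p0=[100.0])
print(f"apparent Kd = {popt[0]:.0f} nM (s.d. {np.sqrt(pcov[0, 0]):.0f} nM)")
```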
Quantitative real-time PCR Cells were lysed using RLT buffer (79216, Qiagen). RNA was extracted according to the manufacturer's protocol (Qiagen RNeasy Mini kit) and treated with RNase-free DNase (EN0521, Thermo Scientific). RNA (500 ng) was reverse transcribed (RevertAid, EP0442, Thermo Fisher Scientific) and analysed by RT–quantitative PCR in duplicate or triplicate using the Maxima SYBR Green/ROX qPCR Master Mix (K0223, Thermo Fisher Scientific). The quantitative PCR reactions were run on a QuantStudio 5 Real-Time PCR system. GAPDH was used for normalization. Primer sequences (5′–3′): GAPDH: forward GAGTCAACGGATTTGGTCGT, reverse GACAAGCTTCCCGTTCTCAG; IFI44: forward GATGTGAGCCTGTGAGGTCC, reverse CTTTACAGGGTCCAGCTCCC; IFIT2: forward GCGTGAAGAAGGTGAAGAGG, reverse GCAGGTAGGCATTGTTTGGT; and CGAS: forward GCACGTGAAGATTTCTGCACC, reverse TGACTCAGAGGATTTTCTTTCGG. The sequence of the sense strand of the 90-mer DNA is as follows (5′–3′): TACAGATCTACTAGTGATCTATGACTGATCTGTACATGATCTACATACAGATCTACTAGTGATCTATGACTGATCTGTACATGATCTACA. Expression and purification of recombinant cGAS Truncated human cGAS (155–522), WT or mutant, was expressed and purified from E. coli strain BL21 (DE3). Expression of the His6-Halo-tagged truncated human cGAS was induced with 2 mM IPTG at 18 °C for 20 h. Bacteria were collected by centrifugation and lysed by sonication in lysis buffer (20 mM HEPES pH 8.0, 300 mM NaCl, 20 mM imidazole, 1 mM DTT and protease inhibitor). After centrifugation, the cleared lysate was incubated with Ni-NTA beads (Qiagen), washed with lysis buffer and with 20 mM HEPES pH 8.0, 1 M NaCl, 20 mM imidazole, 1 mM DTT, and eluted with 20 mM HEPES pH 7.5, 500 mM NaCl and 250 mM imidazole. Eluted cGAS was subjected to size-exclusion chromatography using a Superdex 200 16/60 column in 20 mM HEPES pH 7.5, 300 mM KCl and 1 mM DTT. The protein was flash frozen in liquid nitrogen and stored at −80 °C. EMSAs For the cGAS mutants, biotinylated nucleosomes (31583, Active Motif) were incubated with serial dilutions of recombinant cGAS at room temperature for 30 min in PBS in a sample volume of 10 μl. The binding reactions contained 40 μg ml⁻¹ nucleosomes and cGAS protein concentrations ranging from 100 down to 12 μg ml⁻¹ in twofold steps. After the reaction, 5 μl glycerol was added. Reactions were resolved by electrophoresis on a 5% PAGE gel in 0.5× TBE buffer at 10 mA for 1 h 15 min. The gels were incubated for 15 min in PBS containing SYBR Safe and were scanned using the Typhoon FLA-9500 imager (GE Healthcare) and imaged using the ChemiDoc XRS Bio-Rad Imager and Image Lab 6.0.0 software. For the EMSA using the LANA peptide, biotinylated nucleosomes (40 μg ml⁻¹; 31583, Active Motif) were incubated with the LANA peptide (from 0.6 mg ml⁻¹ to 78 μg ml⁻¹ in twofold dilutions) at room temperature for 5 min in PBS in a sample volume of 4 μl. Recombinant WT cGAS (40 pmol) in 4 μl PBS was added and incubated for 30 min at room temperature. After the reaction, 5 μl glycerol was added. Reactions were resolved by electrophoresis on a 5% PAGE gel in 0.5× TBE buffer at 10 mA for 1 h 15 min. The gels were incubated for 15 min in PBS containing SYBR Safe and were scanned using the ChemiDoc XRS Bio-Rad Imager and Image Lab 6.0.0 software (Bio-Rad). cGAS in vitro competition assay Human cGAS (50 nM) was mixed with recombinant histones or H2A–H2B dimer (5–0.3 μM), nucleosomes (75–2 nM) or nucleosome fibres (6.5–0.1 nM) in 10 mM HEPES pH 8.0, 10 mM KCl and 1 mM MgCl2. 0.1 mg ml⁻¹ HT-DNA, 10 μCi [α-32P]ATP and 1 mM GTP were added and the reaction was left to proceed for 12 h at 37 °C.
Reaction solution (1 μl) was spotted onto TLC plates (HPTLC silica gel, 60 Å pores, F254; 1055480001, Merck Millipore), and the nucleotides were separated with 5 mM NH4Cl, 17% EtOH as the mobile phase at 25 °C for 30 min. The plates were visualized by autoradiography and scanned using the Typhoon FLA-9500 Imager (ImageQuanTool, GE Healthcare). Images were processed using Image Lab 6.0.0 software (Bio-Rad) to quantify the intensity of the spots corresponding to cGAMP. After normalizing by the cGAMP levels in the absence of chromatin for each individual mutant, the IC50 was calculated using GraphPad Prism.
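A minimal sketch of this normalize-and-fit step (Python/SciPy; the chromatin concentrations follow the gradient quoted earlier, while the normalized activity values are invented for illustration) using a standard log-logistic inhibition model with the top plateau fixed at 1 by the normalization; the paper does not specify which Prism model was used, so this choice is our assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def remaining_activity(chromatin_nm, ic50_nm, hill):
    """Log-logistic inhibition curve; the top plateau is fixed at 1 because
    each series is normalized to its own no-chromatin control."""
    return 1.0 / (1.0 + (chromatin_nm / ic50_nm) ** hill)

# Chromatin gradient from the assay (5 to 0.3125 nM, 1:2 steps); the
# normalized cGAMP activities below are hypothetical.
chromatin = np.array([0.3125, 0.625, 1.25, 2.5, 5.0])
activity = np.array([0.86, 0.72, 0.51, 0.31, 0.16])

(ic50, hill), _ = curve_fit(remaining_activity, chromatin, activity, p0=[1.0, 1.0])
print(f"IC50 = {ic50:.2f} nM, Hill slope = {hill:.2f}")
```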
Large-scale production of 601 DNA Production of 601 DNA was performed as previously described 35. Briefly, a plasmid carrying 32 copies of the Widom '601' sequence, each flanked by EcoRV restriction sites, was purified from an 8 l 2YT culture of transformed E. coli DH5α cells using alkaline lysis, followed by isopropanol precipitation, RNase A treatment of the suspended pellet and subsequent chromatography on Sepharose 6. After isopropanol precipitation, the 601 sequences were released from the plasmid by digestion with EcoRV (12 ml total volume containing 132 μl EcoRV for 40 h). The 601 DNA fragment was isolated by PEG precipitation with 14.5% PEG, and further purified by ethanol-acetate precipitation and subsequent chloroform-phenol extraction to yield 9.2 mg of 601 DNA. Nucleosome preparation Histones were prepared and octamers assembled as previously described 36. Nucleosomes were reconstituted through overnight gradual dialysis from TEK2000 (10 mM Tris pH 7.5, 1 mM EDTA and 2 M KCl) into TEK10 (10 mM Tris pH 7.5, 1 mM EDTA and 10 mM KCl) with dialysis buttons using octamer:DNA ratios of 1.3, 1.4 and 1.5, each in 0.9 ml total volume containing 6.7 μM DNA. After recovering the material from the dialysis buttons, the nucleosomes were concentrated using Amicon centrifugal concentrators with a 30 kDa MWCO to yield a nucleosome concentration of 0.86 mg ml⁻¹. Cryo-electron microscopy sample preparation 601 Widom-sequence NCPs (147 bp) and purified human cGAS were mixed in a 1:1.5 molar ratio in gel filtration buffer (20 mM HEPES pH 7.4, 300 mM KCl and 250 μM TCEP) and dialysed for 24 h against low-salt dialysis buffer (20 mM HEPES pH 7.4, 50 mM KCl and 250 μM TCEP). Thereafter, the dialysed complex was concentrated using an Amicon Ultra 0.5-ml centrifugal filter (Merck Millipore) and applied to a GraFix 15 gradient of 10–30% sucrose with top solution (20 mM HEPES pH 7.4, 50 mM KCl, 250 μM TCEP and 10% w/v sucrose) and bottom solution (20 mM HEPES pH 7.4, 50 mM KCl, 250 μM TCEP, 30% w/v sucrose and 1.5% glutaraldehyde). The gradient ultracentrifugation was carried out at 30,000 r.p.m. for 18 h at 0 °C using an AH-650 swinging-bucket rotor. Fractions (100 μl) were collected and analysed by both native PAGE and SDS–PAGE. Thereafter, the peak fractions were combined and dialysed overnight (20 mM HEPES pH 7.4, 50 mM KCl and 250 μM TCEP) to remove sucrose. The resulting complex was concentrated with an Amicon Ultra 0.5-ml centrifugal filter to 1 mg ml⁻¹, as determined by measuring absorbance at 280 nm. Quantifoil holey carbon grids (R 1.2/1.3 200-mesh, Quantifoil Micro Tools) were glow discharged with a Solarus plasma cleaner (Gatan) for 30 s in an H2/O2 environment. Sample (3 µl) was applied to the grids, blotted for 3 s at 4 °C and 100% humidity in a Vitrobot Mark IV (FEI), and immediately plunged into liquid ethane. Cryo-electron microscopy data acquisition First, for the cGAS (K394E)–NCP complex, two data sets were collected for GraFix-crosslinked samples using a Titan Krios electron microscope (Thermo Fisher Scientific) at 300 keV, with zero energy loss (slit 20 eV). Automatic data collection was done using EPU (Thermo Fisher Scientific) on a Cs-corrected instrument (CEOS GmbH), with micrographs recorded using a Gatan K2 Summit direct electron detector (Gatan). The acquisition was performed at a nominal magnification of ×130,000 in EFTEM nanoprobe mode, yielding a pixel size of 0.86 Å at the specimen level. All data sets were recorded with the 100-μm objective aperture and with a total dose of 45 e⁻/Å², recording 40 frames. The targeted defocus values ranged from −0.5 to −2 μm. Similarly, a few micrographs were recorded for the two non-crosslinked samples, the WT cGAS–NCP complex and the dimerization mutant (K394E) cGAS–NCP complex. Second, for the cGAS (WT)–NCP complex, a data set was collected for GraFix-crosslinked samples using a Glacios (Thermo Fisher Scientific) electron microscope at 200 keV. Automatic data collection was done using EPU (Thermo Fisher Scientific) on a Cs-corrected instrument (CEOS GmbH), with micrographs recorded using a Falcon 3EC direct electron detector. The acquisition was performed at a nominal magnification of ×150,000 in EFTEM nanoprobe mode, yielding a pixel size of 0.68 Å at the specimen level. All data sets were recorded with the 100-μm objective aperture and with a total dose of 35 e⁻/Å², recording 40 frames. The targeted defocus values ranged from −0.5 to −2 μm. Cryo-electron microscopy image processing First, for the cGAS (K394E)–NCP complex, on-the-fly evaluation of the data was performed with CryoFLARE (in-house development) 37. Micrographs below an EPA limit of 5 Å were used for further processing. A total of 2,890 micrographs were acquired in two sessions. Drift correction was performed with the RELION3 motion-correction implementation, in which a motion-corrected sum of all 40 frames was generated with and without applying a dose-weighting scheme, and the CTF was fitted using gCTF 38 on the non-dose-weighted sums. A small set of particles (54,000) was picked using crYOLO 39 and imported into cryoSPARC 40. After 2D classification, an ab initio model was generated. This model was used as an initial 3D map for further 3D classification in RELION3 for the two data sets independently. In data set 1, the particles (13,943) included in the class that contained two cGAS and two NCPs were imported into cryoSPARC 40 and subjected to non-uniform refinement and later refined to 4.1 Å (Extended Data Table 1a). In data set 2, the particles (87,323) included in the class that contained one cGAS and one NCP were subjected to local refinement in cryoSPARC and refined to 3.8 Å. The particles that were used for the 4.1 Å and 3.8 Å maps were merged and refined to 3.3 Å in RELION3 41. CTF refinement and signal subtraction were done for the density accounting for the 1cGAS–1NCP complex. 3D classification followed by non-uniform refinement in cryoSPARC led to a 3.1 Å map. Second, for the cGAS (WT)–NCP complex, a total of 5,007 movies were acquired. Full-frame motion correction followed by patch CTF estimation was performed using cryoSPARC. Particle picking was done using the template picker for 700 images and a 2D template was generated. Selected 2D classes were later used to generate an ab initio model and for template picking of particles for the rest of the images in cryoSPARC.
A total of 142,743 particles out of 404,087 were selected from 2D classification for homogeneous refinement. 3D heterogeneous refinement with two classes was carried out with 56,747 (40%) particles, giving a map that was later locally refined to 5.1 Å. The same class with 56,747 particles was further used for particle subtraction for the cGAS1–NCP1 complex, and further local refinement was carried out to obtain a 4.7 Å map. The resolution values reported for all reconstructions are based on the gold-standard Fourier shell correlation (FSC) curve 42. High-resolution noise substitution was used to correct for the effects of soft masking on the related FSC curves. All of the maps were filtered based on local resolution estimated with MonoRes (XMIPP) 43 and were later sharpened using the localdeblur_sharpen protocol (XMIPP). Model building and refinement A nucleosome model from PDB entry 6R8Y 44 and a human cGAS model from PDB entry 4LEV 16 were used as initial references for the cryo-electron microscopy map interpretation. The models were rigid-body docked using Chimera 45 and COOT 46. The sequence of 6R8Y was reassigned to the 147-bp 601 Widom sequence 47 along with the human histone sequences. The starting model for cGAS (4LEV) was refined against the corresponding crystallographic structure factors with Phenix 48 and Rosetta 49 to resolve some of the geometry outliers. Restraints for the covalently attached crosslinker were generated with JLigand 50, Phenix and Rosetta. Model building and refinement of the cryo-electron microscopy structures were carried out iteratively with COOT 46, Phenix 48 and Rosetta 49 using reference model restraints for cGAS (torsional angles) derived from the template model (see above). The reference model restraints were generated with Phenix 48 and converted to Rosetta constraints. Residues at the interface with the NCP were not restrained. In the case of the dimeric nucleosome–cGAS complex, refinement was performed with reference model restraints (torsional angles) derived from the higher-resolution monomer complex. Model validation was done with Phenix 48 and MOLPROBITY 51. Side chains without sufficient density were marked with zero occupancy. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The electron density reconstructions and corresponding final models for NCP1–cGAS1–cGAS2–NCP2 and NCP1–cGAS1 were deposited in the Electron Microscopy Data Bank (accession codes EMDB-10694 and EMDB-10695) and in the PDB (accession codes 6Y5D and 6Y5E). The electron density reconstructions for NCP1–WT cGAS1–WT cGAS2–NCP2 and NCP1–WT cGAS1 were deposited in the Electron Microscopy Data Bank (accession codes EMDB-11006 and EMDB-11005).
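As a footnote to the resolution estimates above: the gold-standard FSC is computed between two independently refined half-maps, and the reported resolution is conventionally the spatial frequency at which the curve drops below 0.143. The sketch below (Python/NumPy) shows the bare computation under simplifying assumptions we state in the comments; it omits the masking and high-resolution noise substitution mentioned in the Methods.

```python
import numpy as np

def fsc_curve(half_map1: np.ndarray, half_map2: np.ndarray) -> np.ndarray:
    """Fourier shell correlation between two independent half-maps
    (assumes cubic boxes; no masking or noise substitution)."""
    f1, f2 = np.fft.fftn(half_map1), np.fft.fftn(half_map2)
    n = half_map1.shape[0]
    freq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    shell = (np.sqrt(kx**2 + ky**2 + kz**2) * n).astype(int)
    fsc = np.empty(n // 2)
    for s in range(n // 2):
        m = shell == s
        num = np.sum(f1[m] * np.conj(f2[m]))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        fsc[s] = (num / den).real
    return fsc

def resolution(fsc: np.ndarray, pixel_size_a: float, threshold: float = 0.143) -> float:
    """Resolution (in Å) where the FSC first drops below the threshold."""
    n_box = 2 * len(fsc)  # shell s corresponds to spatial frequency s / (n_box * apix)
    for s in range(1, len(fsc)):
        if fsc[s] < threshold:
            return n_box * pixel_size_a / s
    return 2.0 * pixel_size_a  # never crossed the threshold: report Nyquist
```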
How do molecules involved in activating our immune system discriminate between our own DNA and foreign pathogens? Researchers from the Thomä group, in collaboration with the EPFL, deciphered the structural and functional basis of a DNA-sensing molecule when it comes into contact with the cell's own DNA, providing crucial insights into the recognition of self vs. non-self DNA. DNA within our cells is compacted and stored in the nucleus in the form of chromatin (DNA wrapped around histone proteins, forming nucleosomes, the basic unit of chromatin). DNA found outside the nucleus, in the cytoplasm, is an important signal that triggers immune responses, indicating the presence of an intracellular pathogen or a potentially cancerous cell. DNA sensing is carried out by cGAS, an enzyme responsible for recognizing and binding naked DNA. When activated, cGAS synthesizes cyclic GMP-AMP, which in turn initiates the body's so-called "innate" immune system—the first-line-of-defense part of our immune system. Until now, cGAS was thought to function predominantly in the cytoplasm, detecting foreign, non-self DNA such as that of viruses. But recent studies suggested that cGAS is also present inside the nucleus. This was puzzling, given the possibility that the enzyme could be activated by the cell's own DNA, triggering an unwanted inflammatory response. Intrigued by this observation, researchers from the Thomä group used structural biology as a discovery tool and found that cGAS is present in the nucleus in an inactive state. They teamed up with the Ablasser lab at the EPFL to decipher the mechanism of cGAS inactivation by chromatin in cells. Taking advantage of the capability of the Thomä lab in cryo-electron microscopy (cryo-EM), the researchers derived the structure of cGAS bound to a nucleosome. They found that cGAS directly engages the histone proteins of nucleosomes. Once bound to the nucleosome, cGAS is "trapped" in a state in which it is unable to engage or sense naked DNA. It is then also unable to synthesize cyclic GMP-AMP and remains inactivated. cGAS, when present in the nucleus of healthy cells, is thus inactivated by chromatin, and does not participate in innate immune signaling in response to the cell's own DNA. Ganesh Pathare, a postdoc in the Thomä lab and one of the first authors of the study, comments: "The cGAS-nucleosomes structures provide the structural and functional basis for cGAS inhibition by chromatin. cGAS is an important protein for the innate immune response in the cell, required for the fight against viruses but also for detecting transformed or cancerous cells. cGAS activity is also often misguided in autoimmune diseases. Our study provides crucial insights into cGAS regulation and the mechanism of self DNA vs. non-self DNA recognition. This creates exciting opportunities for future therapeutic intervention in a wide range of diseases". This study was published in the 26 November 2020 issue of Nature.
10.1038/s41586-020-2750-6
Chemistry
Water behavior breakthrough opens a crucial door in chemistry
"Water clustering on nanostructured iron oxide films." Lindsay R. Merte, et al. Nature Communications 5, Article number: 4193 DOI: 10.1038/ncomms5193. Received 12 May 2013 Accepted 22 May 2014 Published 30 June 2014 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms5193
https://phys.org/news/2014-07-behavior-breakthrough-crucial-door-chemistry.html
Abstract The adhesion of water to solid surfaces is characterized by the tendency to balance competing molecule–molecule and molecule–surface interactions. Hydroxyl groups form strong hydrogen bonds to water molecules and are known to substantially influence the wetting behaviour of oxide surfaces, but it is not well-understood how these hydroxyl groups and their distribution on a surface affect the molecular-scale structure at the interface. Here we report a study of water clustering on a moiré-structured iron oxide thin film with a controlled density of hydroxyl groups. While large amorphous monolayer islands form on the bare film, the hydroxylated iron oxide film acts as a hydrophilic nanotemplate, causing the formation of a regular array of ice-like hexameric nanoclusters. The formation of this ordered phase is localized at the nanometre scale; with increasing water coverage, ordered and amorphous water are found to coexist at adjacent hydroxylated and hydroxyl-free domains of the moiré structure. Introduction Motivated by applications in diverse fields such as electrochemistry, geochemistry, atmospheric chemistry, corrosion and catalysis, the structure of water adsorbed on solid surfaces has been a topic of strong and sustained interest over the past decades 1 , 2 , 3 , 4 . Due to the relative strength and particular directionality of interactions between water molecules, which are delicately balanced against molecule–surface interactions, the structures of water monolayers on various surfaces exhibit surprising diversity. This has been recently demonstrated in scanning tunnelling microscopy (STM) studies 5 , 6 , 7 , 8 , which allow direct visualization of hydrogen bonding networks. The most detailed insight into water adsorption has been achieved for single-crystal metal surfaces, where the structure of water layers is determined mainly by the dimensions of the surface unit cell and the strength of metal–oxygen bonding 6 . The adsorption and clustering of water on metal oxide surfaces are very relevant for applications, as these materials are ubiquitous under ambient and aqueous conditions. Oxides add an additional level of complexity compared with metals, as water molecules can bond both to metal cations and oxide anions, and in addition, adhere strongly to OH groups formed by water dissociation at lattice sites or at abundant structural defects 3 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 . Although these fundamental aspects of water adsorption on oxide surfaces have been well-established, comparatively few microscopy studies have been conducted to provide detailed information about the structures formed and the influence of defects. As a result, understanding of water adsorption on oxides remains far behind that of adsorption on metals. An additional relevant factor in the adsorption of water on the surfaces of natural or man-made materials is the fact that many such materials have nanometre-scale dimensions, and thus present an adsorption landscape for water which is inhomogeneous. Such inhomogeneity can lead to the formation of distinct nanometre-scale water structures, as observed on a boron nitride nanomesh 18 , or can disrupt the formation of an ordered phase, as observed on stepped Pt surfaces 19 . On oxide surfaces, an inhomogeneous distribution of defects or the presence of different facets can lead to a strongly varying distribution of OH groups, creating large variations in hydrophilicity over short length scales 16 . 
In general it is not well-understood how nanoscale variations in surface structure impact the formation and structure of the first few wetting layers on oxide surfaces. Monolayer FeO grown on Pt(111), first reported by Vurens et al. 20 , exhibits a 2.5 nm moiré superstructure due to the ~10% lattice mismatch between the FeO film and the substrate 21 . The resulting modulation in the film’s structure, best understood in terms of periodic variation in Fe-O buckling normal to the surface, creates an inhomogeneous adsorption potential 22 , so that adsorbed atoms and molecules have been found to form arrays with the periodicity of the film’s superstructure instead of forming larger, more random aggregates 23 , 24 . Although previous temperature-programmed desorption (TPD), infrared reflection absorption spectroscopy and ultraviolet photoelectron spectroscopy studies of water adsorbed on ultra-thin FeO(111) films have shown rather weak interactions between water and the surface 12 , 25 , 26 , no microscopy studies have been reported to date and the potential effect of the films’ moiré structure on the adsorption has not been considered. Additionally, as demonstrated in previous studies 27 , 28 , 29 , OH groups can be introduced to the FeO surface in a well-controlled manner by exposure to atomic hydrogen, enabling us to directly and systematically study the effects of hydroxylation on H 2 O adsorption and clustering. As will be discussed below, the OH groups formed in this manner also show a distinct spatial distribution following the film’s moiré structure, allowing us to examine the impact of nanometre-scale variations in defect density. We report here a combined experimental and theoretical study of water adsorption, clustering and desorption on bare and hydroxylated monolayer FeO/Pt(111) at low temperatures. Our results show that hydroxyl groups on the FeO film promote the formation of an ordered, ice-like phase, in contrast to the disordered structure formed on the bare surface. The OH groups are localized at one particular domain of the FeO film’s moiré unit cell, such that the surface exhibits alternating hydrophilic and hydrophobic regions with a periodicity of ~2.5 nm. Despite the small length scale, it is found that the ordered phase only forms at the hydrophilic domains—not at adjacent hydrophobic domains. Results Hydroxylation of FeO with atomic hydrogen An STM image of the bare FeO/Pt(111) surface is displayed in Fig. 1a , with a structural model of the film shown in Fig. 1b . As is clear in the STM image, the FeO film exhibits a distinct moiré superstructure, which can be described in terms of three high-symmetry domains. These domains are distinguished by the local stacking sequence of the Fe and O layers relative to the high-symmetry points of the Pt(111) surface. At the FCC domain (square), Fe ions reside above fcc hollow sites and O ions reside above hcp hollow sites; at the HCP domain (triangle), Fe ions reside above hcp hollow sites and O ions reside above Pt atoms; at TOP domains (circle), Fe ions reside above Pt atoms and O ions reside above fcc hollow sites. Unambiguous identification of the different domains in STM images has been achieved through intentional creation of defects in the FeO film 30 , 31 . It has been shown that, under the imaging conditions of Fig. 1a , the bright points in the TOP domains correspond to Fe ions while the bright points at the FCC domains correspond to O ions 30 . Figure 1: Bare and hydroxylated FeO films. 
( a ) STM image of the bare FeO/Pt(111) film (65 × 65 Å 2 , 65 mV, 3 nA). The moiré unit cell and high-symmetry domains are indicated. Circle, square, and triangle are used to denote TOP, FCC and HCP domains, respectively. ( b ) Ball model of the FeO/Pt(111) film. The ~25 Å moiré unit cell is indicated as are the three high-symmetry domains. OH groups are shown (with white dots for H atoms) in the preferred FCC domain of the moiré unit cell in their preferred (√3 × √3) R30° arrangement. ( c ) STM image (140 × 140 Å 2 , 0.7 V, 0.4 nA) of the hydroxylated FeO film with an OH coverage of 0.05 ML, acquired at 160 K after additionally dosing ~0.02 ML water (not visible). ( d ) Pair correlation function for OH groups showing peaks at the second nearest neighbour and fifth nearest neighbour positions, indicating local (√3 × √3) R30° ordering. Inset: distribution of OH groups within the moiré cell, showing the preference for occupation of FCC domains. Full size image Figure 1c shows an STM image of an FeO film which has been exposed to atomic hydrogen, leading to protonation of ~5% of the FeO lattice oxygen ions (see Fig. 1b ). Throughout this work we refer to these protonated O ions as hydroxyls or OH groups, and to the surface prepared in this way as being hydroxylated. The image in Fig. 1c was acquired at 160 K after additionally dosing ~0.02 monolayer (ML) water onto the surface to promote H atom mobility 29 , allowing equilibration of the structure, and to ensure that the effects of adsorbed water on the spatial distribution of OH groups are accounted for. This results in a more well-ordered distribution compared with what is observed directly after H atom exposure (see Supplementary Fig. 1 ). The OH groups appear in STM images as bright protrusions 31 , and due to rapid diffusion at this temperature, the coadsorbed H 2 O is not visible. From the distribution of OH groups relative to the moiré structure ( Fig. 1d , inset), we find a strong preference for occupation of FCC domains, followed by partial occupation of HCP domains. The local arrangement of OH groups is also not random, as evidenced by their pair correlation function extracted from this image, plotted in Fig. 1d . Here the prominent peak at the second-nearest-neighbour position (corresponding to a separation of √3· a , where a is the 3.1 Å FeO lattice parameter) and smaller peak at the fifth-nearest-neighbour position (corresponding to a separation of 3· a ) indicate a strong tendency towards local (√3 × √3) R30° ordering. Such an arrangement is depicted in Fig. 1b . Density functional theory (DFT) calculations show that the (√3 × √3) R30° ordering results essentially from repulsion between neighbouring OH groups; the binding energy of a single H atom in the FCC domain is −2.75 eV, while the differential binding energy of a second H atom at a nearest-neighbour site is −2.65 eV and at a second-nearest-neighbour site is −2.73 eV. Thus, the second hydroxyl at the second-nearest-neighbour site is 0.08 eV more stable than that in the nearest-neighbour configuration. Effect of hydroxylation on water clustering Figure 2a shows an STM image of water on bare FeO at 110 K, following exposure at ~130 K. Adsorbed water forms two-dimensional islands on the terraces and elongated clusters along the step edges. The irregular shapes of the water islands on the terraces result from the moiré structure, with voids formed preferentially at one type of domain, appearing brightest in the STM image (see contrast-enhanced inset in Fig. 
2a ), several of which are marked with grey dots. Under the imaging conditions employed here (+1.4 V, 0.2 nA), these bright regions in STM images of the bare FeO film correspond to TOP domains 30 where Fe ions occupy sites above Pt atoms. The tendency of water to avoid wetting these domains indicates that interactions between water molecules and Fe ions play no significant role in determining the island structure, even though DFT calculations show that isolated water monomers adsorb most strongly (−0.21 eV adsorption energy) at Fe sites of TOP domains, where a larger in-plane atomic spacing and smaller Fe-O rumpling make these cations more easily accessible than in the FCC and HCP domains (calculated monomer adsorption energies of −0.11 and −0.13 eV, respectively). The strong preference for water to occupy FCC and HCP domains rather than TOP domains is therefore attributed to the significantly weaker electrostatic field above O atoms at TOP domains, caused by the smaller rumpling of the FeO 22 , 32 , 33 . Figure 2: Water adsorption on bare FeO. ( a ) STM image (480 × 480 Å 2 ) of water adsorbed on the bare FeO/Pt(111) film, acquired at 110 K. The moiré unit cell is indicated in blue and the positions of TOP domains in the vicinity of three H 2 O islands are indicated with grey dots. The rectangular area shows enhanced contrast of the bare FeO film, where bright spots are seen corresponding to the TOP domains. The image shows two terraces (higher terrace in the upper right) separated by a single-atomic-height step in the Pt(111) substrate. A cyclic colour scale is used to improve contrast. ( b ) High-magnification STM image of a H 2 O island on the bare FeO film. This image is the average of four successive STM measurements at 110 K. The grid marks the moiré unit cells, and the different domains are labelled as in Fig. 1b . Scale bar, 20 Å. ( c ) The same STM image as in b after subtraction of the long-range height variations. Arrows indicate some of the 4–5 Å pores observed in the structure. Scale bar, 20 Å. Full size image A high-resolution STM image of one water island on the bare FeO film is shown in Fig. 2b . Despite some enhanced noise in such images, suggesting possible diffusion of the molecules or interactions with the STM tip, sequential images of the same island acquired over a period of ~2 min show the structure to be essentially static (see Supplementary Fig. 2 ). To increase the signal-to-noise ratio, four sequentially acquired STM images were averaged to obtain the image shown in Fig. 2b . To further enhance the molecular-scale contrast, we removed the long-range variation in the apparent height of the water layer by subtracting a polynomial background (see Supplementary Fig. 3 ); the result is displayed in Fig. 2c . Although individual water molecules are not resolved, the image clearly shows an absence of crystalline order, indicating the formation of an amorphous hydrogen-bonded network. Several pores are discernible with diameters in the range of 4–5 Å, some of which are indicated in Fig. 2c . These pores arise from ring structures incorporating roughly 5–6 H 2 O molecules, similar to those observed in ordered structures on metal surfaces 4 , 5 . The apparent height of the water layer varies between the different domains of the FeO moiré structure. Considering the very weak chemical interactions between the molecules and the surface, these differences may indicate variations in the height of the water layer above the FeO film.
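Two brief implementation asides before returning to the domain-height comparison. First, the pair correlation analysis used for the OH distribution in Fig. 1d can be reproduced from a list of adsorbate coordinates digitized from an STM image. The sketch below is a minimal two-dimensional version under assumed inputs (an (N, 2) array of positions in Å and a chosen bin width); it is not the authors' analysis code.

```python
import numpy as np

def pair_correlation(points, r_max=12.0, dr=0.2):
    """Radial histogram of pairwise distances for 2D adsorbate positions.
    `points` is an (N, 2) array of coordinates in angstroms. For OH on FeO,
    peaks near sqrt(3)*a and 3*a (a = 3.1 A lattice parameter) signal the
    local (sqrt(3) x sqrt(3))R30 ordering reported in Fig. 1d."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    dists = dists[np.triu_indices(len(points), k=1)]   # unique pairs only
    counts, edges = np.histogram(dists, bins=np.arange(dr, r_max, dr))
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Divide by the 2*pi*r*dr annulus area and the number of particles so
    # that an uncorrelated (random) arrangement gives a flat profile.
    return centres, counts / (2 * np.pi * centres * dr * len(points))
```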
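Second, the image treatment just described for Fig. 2b,c (averaging sequential frames, then subtracting a long-range polynomial background) is straightforward to sketch. The second-order polynomial below is an assumption, since the order actually used is documented only in Supplementary Fig. 3, and registered, drift-free frames are taken for granted.

```python
import numpy as np

def average_and_flatten(frames, order=2):
    """Average registered STM frames, then remove a fitted 2D polynomial
    background to enhance molecular-scale contrast (cf. Fig. 2b,c)."""
    img = np.mean(frames, axis=0)           # boosts signal-to-noise ratio
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Least-squares fit of all monomials x^i * y^j with i + j <= order.
    cols = [(x ** i * y ** j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coeffs).reshape(ny, nx)
```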
Returning to the domain-dependent apparent heights: according to DFT calculations 30 , the heights of the FeO oxygen ions above the Pt interface layer follow the order TOP (307 pm) > FCC (304 pm) > HCP (298 pm), whereas the apparent height of the water layer above the FCC domains is consistently larger, by 15–30 pm, than that over the HCP domains and the single occupied TOP domain. This observation is consistent with the stronger electrostatic field at the FCC domains, causing a greater tendency towards vertical orientation of the water molecules, with O–H bonds directed towards the surface 34 . The presence of surface hydroxyls on the FeO film, introduced using atomic hydrogen, changes the adsorption behaviour of water entirely, as demonstrated in Fig. 3a . Here, instead of extended islands, we observe small, separated water clusters confined to the FCC domains of the moiré unit cells. Thus, the strong affinity of water for surface OH groups enables the FeO film, which itself exhibits a relatively flat potential energy surface for water adsorption, to act as a nanoscale template for water clusters. Figure 3: Water adsorption on hydroxylated FeO. ( a ) STM image (600 × 600 Å 2 ) of water adsorbed on a hydroxylated FeO film with an OH coverage of 0.05 ML, acquired at 110 K. The moiré unit cell is indicated in blue. The image shows two terraces (higher terrace in the upper left) separated by a single-atomic-height step in the Pt(111) substrate. A cyclic colour scale is used to improve contrast. ( b ) High-magnification images of H 2 O clusters on hydroxylated FeO showing a hexagonal ring structure. Scale bars, 10 Å. ( c ) Schematic model of the hexagonal ring structure on hydroxylated FeO inferred from STM measurements; the upper section gives a cross-sectional view and the lower section a top view. Full size image In addition to altering the size and shape of water clusters formed on FeO, the presence of OH groups has a striking effect on the molecular-scale ordering of the water within the clusters. Figure 3b shows high-magnification STM images of H 2 O clusters of various sizes observed on a hydroxylated FeO film. The clusters exhibit a distinct hexagonal ring motif with apparent (√3 × √3) R30° symmetry, similar to ring structures that have been observed on several metal surfaces 2 . Assuming that the preferential (√3 × √3) R30° ordering of OH groups observed on FeO ( Fig. 1c,d ) is retained upon exposure to water, half of the H 2 O molecules in the hexagonal structure should lie flat, accepting hydrogen bonds from the surface OH, while the other half should be oriented vertically, donating hydrogen bonds to surface O ions. A schematic model of the proposed structure is depicted in Fig. 3c . DFT calculations of water clusters on FeO To gain further insight into the water structures observed on the hydroxylated surface, we carried out DFT+U calculations using, for simplicity, a single cyclic water hexamer 35 , 36 adsorbed at the FCC domain of the FeO film populated with OH groups arranged in a (√3 × √3) R30° pattern (see Fig. 1b ). The cyclic hexamer is the basic structural motif of ice I h . The most stable isomer in the gas phase has S 6 symmetry and each H 2 O molecule simultaneously acts as a H-bond donor and acceptor ( Supplementary Fig. 4 ). The relaxed, lowest-energy structure of an S 6 -like water hexamer adsorbed onto hydroxylated FeO (its symmetry reduced upon adsorption to that of point group C 3 ) is depicted in Fig. 4a,b .
Three of the six H 2 O molecules are oriented parallel to the surface, accepting hydrogen bonds from the surface OH (green dashed lines in Fig. 4a ), and the remaining three H 2 O molecules are oriented perpendicular to the surface, with one O–H bond oriented downwards towards a surface O atom (white dashed lines in Fig. 4a ). Figure 4: Atomic structures of water clusters on hydroxylated FeO. Tilted side ( a , c , e ) and top ( b , d , f ) views of three cyclic water hexamers, differing in the orientations of the molecules, adsorbed on the hydroxylated FeO film as determined by DFT+U calculations. All numerical values for bond lengths are in Å. Green (white) dashed lines in a , c , e indicate hydrogen bonds between the hexamer and the surface where the water acts as a hydrogen bond acceptor (donor). Green (white) dashed lines in b , d , f indicate hydrogen bonds where the parallel water is accepting (donating) a hydrogen bond. Note the transfer of one H + ion from the surface to one water molecule, forming a hydronium (H 3 O + ) ion, in the C s structure, highlighted by dashed white ovals in c , d . Blue, purple, red and white spheres indicate Pt, Fe, O and H atoms, respectively. H and O atoms in H 2 O are highlighted by yellow and green spheres, respectively. Full size image The hydrogen bonds inside the cyclic ring have two different general lengths, which alternate around the ring. This can be understood by focusing on one parallel-oriented water molecule: The accepted H-bond is longer (1.85–1.87 Å), whereas the donated H-bond is shorter (1.53–1.55 Å). Inside the cyclic ring the average length of H-bonds is 1.70 Å, slightly longer (by 0.05 Å) than that of a cyclic water hexamer in the gas phase ( Supplementary Fig. 4 ), due to the influence of hydrogen bonds with the surface. The H-bonds between the parallel H 2 O and surface OH are in the range of 1.57–1.61 Å, while the H-bonds between the perpendicular H 2 O and surface O are longer (1.72–1.81 Å). The parallel H 2 O molecules are slightly tilted relative to the surface, with an average vertical displacement of 0.35 Å between the two H in the parallel H 2 O. This displacement is smaller than the value for the S 6 hexamer in the gas phase (0.78 Å, projected along the S 6 axis). The average adsorption energy of H 2 O in the C 3 hexamer ( Fig. 4a,b ), relative to isolated, gas-phase H 2 O molecules, is calculated to be −0.68 eV per H 2 O. For reference, the adsorption energy of an isolated H 2 O molecule hydrogen bonded to a surface OH is −0.62 eV and that of an isolated molecule on the bare FeO surface ranges from −0.1 to −0.2 eV, depending on the moiré domain. H 2 O adsorbed as a C 3 hexamer on the bare FeO surface ( Supplementary Fig. 5 ) has a mean adsorption energy of −0.42 eV per H 2 O molecule (referenced to isolated gas-phase H 2 O), nearly all of which is due to intermolecular hydrogen bonding; for a similar S 6 cyclic hexamer in the gas phase ( Supplementary Fig. 4 ) we calculate a formation energy of −0.39 eV per H 2 O. In bulk ice I h , where all H 2 O molecules are found in a tetrahedral bonding arrangement, each donating and accepting two hydrogen bonds to its neighbours, the positions of the protons (that is, the orientations of the water molecules) do not exhibit crystalline order but are instead random, within the constraint that the H-bonds remain saturated, and a transition to an ordered structure only occurs at very low temperatures 37 . 
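A brief note on conventions: the per-molecule adsorption energies quoted above are not written out as a formula in the text, but the standard DFT definition consistent with the reported numbers is, for a hexamer,

```latex
E_{\mathrm{ads}} = \frac{1}{6}\left( E_{\mathrm{hexamer+surface}}
                 - E_{\mathrm{surface}} - 6\,E_{\mathrm{H_2O}}^{\mathrm{gas}} \right),
```

so the −0.68 eV per H 2 O reported for the C 3 hexamer on hydroxylated FeO bundles the intermolecular hydrogen bonding (roughly −0.39 eV per molecule, judging from the gas-phase S 6 formation energy) together with the molecule–surface contribution.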
Breaking of the bulk symmetry, as occurs at surfaces and in nanostructures, has the potential to induce proton ordering by eliminating the near-degeneracy of structures with water molecules in different orientations 38 , 39 . To investigate the extent of proton ordering in H 2 O nanoclusters on FeO, we performed additional calculations using a different cyclic hexamer where one of the H 2 O molecules acts as a double H-bond donor, and a second acts as a double H-bond acceptor ( Fig. 4c,d ). This cyclic H 2 O hexamer has C s symmetry and is 0.89 eV less stable in the gas phase than the S 6 hexamer discussed above. Remarkably, when adsorbed on hydroxylated FeO, the C s hexamer is nearly isoenergetic with the adsorbed C 3 hexamer; the calculated average adsorption energy is −0.66 eV per H 2 O. A third hexamer, characterized by C 3v symmetry, with three double H-bond donors and three double H-bond acceptors ( Fig. 4e,f ), was likewise similar in energy to the adsorbed C 3 and C s hexamers; although unstable in the gas-phase calculation (it converged to the S 6 structure), on hydroxylated FeO the adsorption energy was calculated as −0.67 eV per H 2 O. Although the differences in energy between these configurations are too small to predict which will be dominant in experimentally-observable H 2 O clusters 40 , 41 , it is clear that the hydroxylated FeO surface significantly stabilizes bonding configurations that otherwise are unfavourable. This very likely leads to a significantly greater degree of proton disorder than would be expected based on gas-phase results. In the gas phase, the striking differences in stability of the three different cyclic hexamers result from the cooperative nature of hydrogen bonds in water 42 , 43 . Formation of a hydrogen bond between water molecules involves a polarization of each molecule’s electronic charge density towards the region between the two molecules where the hydrogen bond is formed. This polarization depletes electronic charge from the covalent O-H bonds of the acceptor molecule, increasing its tendency to act as a donor to another H 2 O molecule, which in turn experiences a similar polarization, and so on. The result is, depending on the configuration, an enhancement or diminishment in mean hydrogen bond strength compared with what would be expected based on pairwise interactions, that is, that observed in an H 2 O dimer. In the S 6 water hexamer, the position of each molecule as a simultaneous H-bond donor and acceptor takes full advantage of these effects and a corresponding increase in stability is observed. In the C s and C 3v hexamers, however, molecules are found which either donate two hydrogen bonds without accepting any, or which accept two hydrogen bonds without donating any. This unfavourable configuration leads to a reduction in hydrogen bond strength, significantly destabilizing the structures. However, by interaction with the hydroxylated FeO surface such effects are almost fully compensated. In the C 3v structure ( Fig. 4e,f ), the three double-donor H 2 O molecules accept strong hydrogen bonds from surface OH groups, with calculated O–H distances of 1.39–1.43 Å, while the three double-acceptor H 2 O molecules donate relatively strong bonds to the substrate O ions, with calculated O–H distances of 1.61–1.65 Å. 
The effects are even more pronounced for the C s isomer, where polarization of the double-donor H 2 O molecule is sufficient to draw the OH proton away from the surface, spontaneously forming an H 3 O + ion (indicated by the dashed oval in Fig. 4c,d ). The double-acceptor H 2 O at the opposite side of the molecule likewise interacts strongly with the surface, with a calculated O–H distance of 1.48 Å. Cooperative effects are also apparent in the adhesion of the C 3 hexamer ( Fig. 4a,b ). The weak interaction between the hexamer and the bare FeO surface is reflected in the H-bond lengths of the molecules with OH bonds directed towards the surface, which are calculated to be 2.06–2.07 Å (see Supplementary Fig. 5 ). The strong hydrogen bonds formed at surface OH groups on hydroxylated FeO impart substantial donor character to the associated flat-lying H 2 O molecules, which then form strong donor bonds to the neighbouring vertical H 2 O molecules, as reflected by the alternating bond lengths around the ring mentioned above. As a secondary effect, the acceptance of this strong H-bond by the vertically-oriented H 2 O molecules leads to a significant contraction of the hydrogen bond lengths between these molecules and the surface O ions, to 1.72–1.81 Å. Even for the FeO film, which should be considered a rather acidic oxide (on the basis of the relative strengths of donor and acceptor H-bonds with water 44 ), O ions adjacent to OH groups act as H-bond acceptors as a result of cooperative hydrogen bonding. These interactions effectively separate the hexamer into three associated dimers, each held together by an especially strong hydrogen bond (1.53–1.55 Å), qualitatively similar to what has been observed for water on TiO 2 (110) 17 and predicted for MgO(100) 45 and O/Ru(0001) 46 . The substantial donor character imparted to the ‘parallel’ H 2 O molecules by H-bonding to OH groups on hydroxylated FeO will create an effective repulsive interaction between these complexes, providing an additional driving force for the formation of a structure with (√3 × √3) R30° ordering of the underlying H atoms, independent of the intrinsic distribution of H atoms on the surface described above. A similar effect has been observed in mixed OH/H 2 O monolayers on Pt(111) (OH in this case referring to a distinct molecule coadsorbed with water), where very stable, ordered hexagonal structures form at a 1:1 (OH:H 2 O) ratio as a result of alternating H-bond donor (H 2 O) and acceptor (OH) molecules 47 , 48 . Such a tendency towards donor/acceptor ordering is expected to occur as a general feature of water adsorption on hydroxylated oxide surfaces, to the extent this is compatible with the geometric features of the specific surface. Desorption kinetics To investigate the effect of hydroxylation on the desorption behaviour of water, we conducted TPD measurements on the bare FeO film and on films with various OH coverages ( Fig. 5 ). For each surface, prepared initially by exposure of the FeO film to atomic hydrogen, H 2 O was dosed at 130 K and TPD traces were measured up to 250 K, after which the sample was re-cooled to 130 K and the next measurement was conducted. Reaction of OH groups to form H 2 O, leading to irreversible reduction of the surface, occurs only above ~400 K 28 . 
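Before moving on to the desorption experiments, a back-of-the-envelope check makes the degree of surface compensation discussed above explicit. The eV figures are taken directly from the text; the per-molecule conversion and the snippet itself are ours.

```python
# Gas phase: the Cs hexamer is 0.89 eV less stable than S6 (whole cluster).
gas_penalty_per_h2o = 0.89 / 6               # ~0.15 eV per molecule

# Adsorbed on hydroxylated FeO: -0.68 (C3) vs -0.66 (Cs) eV per H2O.
adsorbed_penalty_per_h2o = 0.68 - 0.66       # 0.02 eV per molecule

# Fraction of the gas-phase penalty recovered by molecule-surface bonding.
compensation = 1 - adsorbed_penalty_per_h2o / gas_penalty_per_h2o
print(f"surface compensates {compensation:.0%} of the Cs penalty")
# -> roughly 85-90%: hydrogen bonds to surface OH and O ions almost fully
#    offset the unfavourable double-donor/double-acceptor arrangement.
```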
Similar to previous observations for a two-layer thick FeO(111) film 25 , the TPD traces from the bare film show a single peak (denoted α ) for sub-monolayer coverages at 164–168 K and an additional peak at ~158 K (denoted γ ) due to adsorption on top of the first wetting layer. On the hydroxylated FeO surface a new feature appears in the TPD spectra at higher temperatures, 167–186 K (denoted β ). With increasing OH coverage the β feature becomes more prominent and shifts to higher temperatures. As can be seen by comparing the traces plotted in red in Fig. 5a–e , growth of the β feature occurs at the expense of the α feature. Figure 5: Effect of hydroxylation on H 2 O desorption kinetics. TPD measurements (2 K s −1 linear ramp) of H 2 O from ( a ) bare FeO/Pt(111), and hydroxylated FeO films with OH coverages of ( b ) 0.05 ML, ( c ) 0.09 ML, ( d ) 0.12 ML, ( e ) 0.18 ML. Dashed lines indicate primary desorption features (see text); traces obtained with an initial H 2 O coverage of 0.44 ML are plotted in red for easy comparison. ( f ) Plot of the peak temperature of the highest-temperature desorption feature ( α for bare FeO, β for hydroxylated FeO) for various OH coverages and H 2 O exposures. Full size image Interpretation of the measured TPD traces is assisted by STM measurements of different quantities of water adsorbed on FeO surfaces with varying OH coverages. The STM images in Fig. 6 illustrate the effects of the relative H 2 O and OH coverages on the size and molecular structure of adsorbed water clusters at 110 K. Figure 6a,b shows surfaces where small and large quantities of water were dosed onto surfaces with high and low coverages of OH groups, respectively. On the surface ( Fig. 6a ) with an excess of OH groups, whose coverage is sufficient to populate HCP domains as well as FCC domains, we observed that the islands exhibit a clear hexagonal ring structure at both the FCC and HCP domains. In contrast, on the FeO surface with a low coverage of OH groups and an excess of water ( Fig. 6b ), the hexagonal ring structure is only observed at FCC domains; at the HCP domains, which are unoccupied by OH groups, a disordered structure is observed similar to that found on the bare FeO film. Figure 6: Effects of OH and H 2 O coverages. STM images (250 × 250 Å 2 ) of H 2 O adsorbed on hydroxylated FeO with various OH coverages and H 2 O exposures. ( a ) 0.1 ML H 2 O, 0.12 ML OH, ( b ) 0.25 ML H 2 O, 0.05 ML OH, ( c ) 0.1 ML H 2 O, 0.05 ML OH. Insets in a and b are high-resolution images of H 2 O clusters on the respective surfaces, showing the different structures at HCP domains with and without OH groups. Scale bars, 10 Å. Full size image Accordingly, under the conditions of the STM experiments, adsorbed water occurs in two distinct phases: an ordered phase associated with OH groups, and an amorphous phase associated with the bare FeO surface. Although it cannot be ruled out that a single mixed phase is formed at temperatures higher than 110 K, we tentatively assign the α and β TPD features to desorption of water from bare and hydroxylated regions on the FeO film, respectively. Note also that the α feature only appears in the TPD spectra when a water coverage approximately twice the OH coverage is exceeded, indicating that the β feature corresponds to a phase incorporating a maximum H 2 O:OH ratio of 2:1, as is the case for the hexagonal ring structure discussed above. 
Alternatively, assignment of the α feature to a mixed phase, in which the OH groups are dispersed uniformly, remains a possibility. However, this appears less likely, since the α peak would then be expected to shift as a function of the OH:H 2 O ratio, and such a shift was barely observed. Note, however, that at the highest H 2 O coverages in the presence of OH groups, the α state includes two components, indicating that the OH groups do have some effect on the desorption kinetics, though the cause of this is not clear at present. As noted above, the β TPD feature, associated with water adsorption at OH groups, shifts to higher temperatures both with increasing H 2 O coverage and with increasing OH coverage. The peak positions in the different experiments are plotted together in Fig. 5f , along with the positions of the α peak of the bare film for comparison. Variation in the β desorption temperature with OH coverage may reflect differences in H 2 O cluster size due to isolation of OH groups in separate moiré unit cells, since the number of OH groups per cell determines the maximum number of water molecules that can be stabilized in each. This is supported by comparison of the STM images shown in Fig. 6a,c , obtained on surfaces with the same quantity (0.1 ML) of water adsorbed onto FeO films with OH coverages of 0.12 and 0.05 ML, respectively. On the surface with the lower OH coverage ( Fig. 6c ), a large number of smaller clusters is observed, whereas on the surface with high OH coverage ( Fig. 6a ) a smaller number of larger clusters is seen. The corresponding difference in desorption temperature for these two systems is ~4 K ( Fig. 5f ). It is not clear from the present results whether these subtle effects, apparently due to cluster size, result from increasing thermodynamic stability with increasing size or from more complex kinetic factors. Finally, a feature of the TPD data worth noting is the invariance of the second-layer γ desorption temperature irrespective of the surface OH coverage ( Fig. 5a–e ). This result shows that the enhanced hydrophilicity conferred on the surface by the OH groups is lost after formation of the first H 2 O monolayer. This is similar to water layers studied previously on hydrophobic Pt(111) 49 and graphite 50 surfaces as well as on a hydrophilic kaolinite surface 51 . In all these cases, the preferential downward orientation of O–H bonds in the outermost water layers left surfaces devoid of ‘dangling H-bonds’ which could bind strongly to the next layer of water. Our TPD results thus provide further support for the ‘H-down’ model of the hexagonal water structure on the hydroxylated FeO film, and indicate that, in spite of the very weak interactions with the surface, the structure formed on the bare FeO surface likewise contains few or no H 2 O molecules with O–H bonds directed away from the surface. The broader implications of this result are not trivial; under ambient temperature conditions, thermal flipping of O–H bonds away from the surface can be sufficient to create hydrophilic interactions on a surface which is hydrophobic at low temperatures, as exemplified by recent MD simulations of water interacting with kaolinite 52 , 53 .
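As a side note on the TPD analysis: the paper reports peak temperatures rather than desorption energies, but for a linear heating ramp the two are commonly related through the first-order Redhead formula. The sketch below applies it to the peak temperatures quoted above; the pre-exponential factor of 10^13 s^-1 is a textbook assumption rather than a fitted value, so the resulting energies are rough estimates only.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV K^-1

def redhead_energy(t_peak, beta=2.0, nu=1e13):
    """First-order Redhead estimate of the desorption energy (eV) from a
    TPD peak temperature t_peak (K) and linear heating rate beta (K s^-1)."""
    return K_B * t_peak * (np.log(nu * t_peak / beta) - 3.46)

for label, t_peak in [("alpha (bare FeO)", 166.0),
                      ("beta (hydroxylated FeO)", 186.0),
                      ("gamma (second layer)", 158.0)]:
    print(f"{label}: Tp = {t_peak:.0f} K -> Ed ~ {redhead_energy(t_peak):.2f} eV")
```

Under these assumptions the ~20 K upshift of the β peak relative to α translates into an extra binding of roughly 0.05 eV per molecule.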
Discussion Our STM, DFT and TPD studies addressing water clustering on bare and hydroxylated FeO surfaces reveal that highly localized hydrophilic domains are formed upon hydroxylation of a moiré-structured FeO monolayer on Pt(111), such that adsorbed water forms nanometre-sized clusters rather than extended two-dimensional islands. The water clusters exhibit a hexameric ring structure stabilized by hydrogen bonding with surface OH groups, in contrast to the larger islands on the bare surface which exhibit an amorphous structure. The STM and DFT results suggest that within the hexagonal structure, half of the water molecules accept H-bonds from surface OH groups while the other half donate H-bonds to surface O ions, resulting in an H 2 O:OH ratio of 2:1. This 2:1 ratio is supported by TPD measurements, which show a single desorption feature for H 2 O coverages up to approximately twice the initial OH coverage. Remarkably, when this ratio is exceeded, excess water adsorbed on the surface is neither incorporated into the hexagonal structure, forming a phase with a diluted OH concentration, nor is it induced to form an ordered structure by proximity with the hexagonal phase. Rather, by STM, we observe a coexistence of ordered and disordered water structures on a scale of 1 nm. This result highlights the very localized influence of OH groups on the structure of the adsorbed water monolayer, in contrast to what has been observed on metal surfaces, where only small concentrations of defects are sufficient to induce large-scale formation of ordered structures locked into simple registry with the substrate 4 . Detailed investigation, by DFT, of orientational ordering of water molecules in hexagonal H 2 O nanoclusters shows that cooperative hydrogen bonding effects play a substantial role in determining the details of the structures and the interactions between water molecules and the surface. Stronger molecule–surface interactions are found to compensate for weaker intermolecular interactions in certain structures—in one case through spontaneous formation of an H 3 O + ion—so that otherwise unfavourable configurations at the cluster edges are energetically comparable to the more balanced H-bond donor–acceptor configuration which is by far most stable in the gas phase. The discovered templating effect of the hydroxylated FeO film allows control of the dispersion of adsorbates, making it an intriguing surface for the study of small clusters of water and other hydrophilic species. It would be interesting to investigate the orientational ordering and proton transfer dynamics of H 2 O in few-molecule clusters at lower temperatures, where such structures can be fully or partially frozen. One can further envision using the hydroxylated FeO film to control the bottom-up synthesis of nanostructures based on hydrophilic components or precursors; though the templating effects of the bare FeO film itself are rather weak, being observed most clearly only at very low temperatures 23 , 24 , hydrogen bonding with surface OH groups should significantly enhance ordering, enabling directed self-assembly at higher temperatures. Methods STM and TPD Measurements Monolayer FeO films were grown 20 , 54 by evaporation of Fe onto clean, sputtered and annealed Pt(111) surfaces followed by heating in 1 × 10 −6 mbar O 2 at 1,000 K. STM measurements were performed using a home-built variable-temperature Aarhus STM 55 , mounted in an ultra-high vacuum (UHV) chamber with a base pressure of ~1 × 10 −10 mbar. 
For STM experiments, water exposures were performed by removing the cooled sample ( T =110 K) from the STM block using the in-vacuum transfer arm so that the sample faced the water inlet. The tip of the transfer arm was pre-cooled by pressing it against the cold STM block for several minutes, and the dosing procedure was performed as quickly as possible (<1 min) to keep the sample temperature as low as possible (<135 K). Deionized water, degassed by several freeze-pump-thaw cycles, was dosed onto the surface using 1 ms pulses from a binary piezoelectric valve. Calibration of water coverages was performed by deposition onto clean Pt(111), where water forms an overlayer with known density 7 of 1.1 × 10 15 cm −2 . The surface coverage of 1 ML is defined here as the density of O atoms in the FeO film, 1.2 × 10 15 cm −2 . It was determined that a single pulse from the piezoelectric doser produced a water coverage of ~0.02 ML on the surface. Unless otherwise noted, STM images of water structures were acquired at +1.0–1.5 V sample bias and tunnelling currents of 0.1–0.2 nA. STM image processing was conducted using the Gwyddion software package 56 . TPD experiments were carried out using a commercial UHV surface analysis system (SPECS), equipped with a Hiden quadrupole mass spectrometer (QMS) fitted with a glass shroud with a 4-mm entrance aperture as well as an X-ray source and electron energy analyzer for X-ray photoelectron spectroscopy, a SPECS variable-temperature STM-150 (Aarhus), and similar equipment for sample preparation as on the home-built STM system. For these experiments, a hat-shaped Pt(111) crystal 7 mm in diameter was used, with a type K thermocouple spot-welded to the side of the crystal. Liquid nitrogen cooling allowed the sample to be cooled to ~110 K, and a filament placed behind the sample provided radiative and electron-beam heating up to ~1,200 K. Water was dosed at 130 K by backfilling the chamber with a leak valve and a Eurotherm temperature controller was used to provide the linear heating ramp for the TPD experiments, which was set at 2 K s −1 for all measurements. For experiments in both chambers, H atoms were produced from H 2 using thermal gas crackers and their coverage was determined by STM. Exposures were performed with the sample at room temperature, with the exception of the highest OH coverage (0.18ML), where the sample was held at 130 K to prevent reduction of the FeO film 28 . DFT Calculations All calculations were performed using the Vienna Ab initio Simulation Package 57 , 58 based on spin-polarized DFT. We used the DFT+U approach by Dudarev et al. 59 to correct the on-site Coulomb interaction between Fe 3 d orbitals. As in previous DFT studies of the FeO/Pt(111) system 30 , 60 , 61 we used the projector augmented wave method 62 , 63 and chose the parameters describing the on-site Coulomb interaction between Fe 3 d orbitals as U eff = U — J =3 eV. To model the 1 ML FeO/Pt(111) surface we used the experimentally observed (√91 × √91)R5.2° unit cell consisting of one layer of Fe atoms and one layer of O atoms supported on a three-layer Pt(111) slab. The FeO film, all adsorbates and the top Pt layer were fully relaxed, while the bottom two layers were fixed at bulk positions using the experimental value for the Pt–Pt spacing in the (111) plane of 2.77 Å (that is, an average Fe–Fe spacing of 3.09 Å). The Brillouin zone was sampled with the gamma point only. 
Exchange and correlation were described by the GGA-PW91 exchange-correlation functional 64 ; a kinetic energy cutoff of 400 eV was used. The initial guess for the magnetic structure is based on a row-wise anti-ferromagnetic structure, with a magnetic defect, due to the odd number of Fe atoms at the TOP domain 30 . Additional information How to cite this article: Merte, L. R. et al. Water clustering on nanostructured iron oxide films. Nat. Commun. 5:4193 doi: 10.1038/ncomms5193 (2014).
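For orientation, the DFT setup summarized in the Methods could be expressed along the following lines with the ASE interface to VASP. The tags mirror the stated parameters (GGA-PW91, 400 eV plane-wave cutoff, spin polarization, Dudarev-type U_eff = 3 eV on the Fe 3d states, gamma-point-only sampling); the use of ASE itself, the slab object and every unstated default are our assumptions, not the authors' actual input files.

```python
from ase.calculators.vasp import Vasp

# Dudarev DFT+U with U_eff = U - J = 3 eV on Fe 3d (L = 2); L = -1 leaves
# the other species uncorrected. Attach this calculator to an FeO/Pt(111)
# slab Atoms object (construction omitted) before running a calculation.
calc = Vasp(
    xc="pw91",       # GGA-PW91 exchange-correlation functional
    encut=400,       # plane-wave kinetic energy cutoff in eV
    ispin=2,         # spin-polarized; start from a row-wise AFM guess
    ldau=True,
    ldau_luj={"Fe": {"L": 2, "U": 3.0, "J": 0.0},
              "O": {"L": -1, "U": 0.0, "J": 0.0},
              "Pt": {"L": -1, "U": 0.0, "J": 0.0}},
    kpts=(1, 1, 1),  # Brillouin zone sampled at the gamma point only
)
```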
(Phys.org)—A multi-institutional team has resolved a long-unanswered question about how two of the world's most common substances interact. In a paper published June 30, 2014, in the journal Nature Communications, Manos Mavrikakis, the Paul A. Elfers professor of chemical and biological engineering at the University of Wisconsin-Madison, and his collaborators report fundamental discoveries about how water reacts with metal oxides. The paper, "Water clustering on nanostructured iron oxide films," opens doors for greater understanding and control of chemical reactions in fields ranging from catalysis to geochemistry and atmospheric chemistry. "These metal oxide materials are everywhere, and water is everywhere," Mavrikakis says. "It would be nice to see how something so abundant as water interacts with materials that are accelerating chemical reactions." These reactions play a huge role in the catalysis-driven creation of common chemical platforms such as methanol, produced on the order of 10 million tons per year for uses including fuel and as raw material for chemicals production. "Ninety percent of all catalytic processes use metal oxides as a support," Mavrikakis says. "Therefore, all of the reactions including water as an impurity or reactant or product would be affected by the insights developed." Chemists understand how water interacts with many non-oxide metals, which are very homogeneous. Metal oxides are trickier: an occasional oxygen atom is missing, causing what Mavrikakis calls "oxygen defects." When water meets one of those defects, it forms two adjacent hydroxyls—stable groups, each comprising one oxygen atom and one hydrogen atom. Mavrikakis, with assistant scientist Guowen Peng and PhD student Carrie Farberow, along with researchers at Aarhus University in Denmark and Lund University in Sweden, investigated how hydroxyls affect water molecules around them, and how that differs from water molecules contacting a pristine metal oxide surface. The Aarhus researchers generated data on the reactions using scanning tunneling microscopy (STM). The Wisconsin researchers then subjected the STM images to quantum mechanical analysis that decoded the resulting chemical structures, defining which atom is which. "If you don't have the component of the work that we provided, there is no way that you can tell from STM alone what the atomic-scale structure of the water is, when adsorbed on various surfaces," Mavrikakis says. The project yielded two dramatically different pictures of water-metal oxide reactions. "On a smooth surface, you form amorphous networks of water molecules, whereas on a hydroxylated surface, there are much more structured, well-ordered domains of water molecules," Mavrikakis says. In the latter case, the researchers realized that hydroxyl behaves as a sort of anchor, setting the template for a tidy hexameric ring of water molecules attracted to the metal's surface. Mavrikakis' next step is to examine how these differing structures react with other molecules, and to use the research to improve catalysis. Mavrikakis sees lots of possibilities outside his own field. "Maybe others might be inspired and look at the geochemistry and/or atmospheric chemistry implications, such as how these water cluster structures on atmospheric dust nanoparticles could affect cloud formation, rain, and acid rain," Mavrikakis says. Other researchers might also look at whether other molecules exhibit similar behavior when they come into contact with metal oxides, he adds.
"It opens the doors to using hydrogen bonds to make surfaces hydrophilic or attracted to water, and to templating these surfaces for the selective absorption of other molecules possessing fundamental similarities to water," Mavrikakis says. "Because catalysis is at the heart of engineering chemical reactions, this is also very fundamental for atomic-scale chemical reaction engineering." While the research fills part of the foundation of chemistry, it also owes a great deal to state-of-the-art research technology. "The size and nature of the calculations we had to do probably was not feasible until maybe four or five years ago, and the spatial and temporal resolution of scanning tunneling microscopy was not there," Mavrikakis says. "So it's advances in the methods that allow for this new information to be born."
10.1038/ncomms5193
Medicine
Progress made in transplanting pig hearts into baboons
Matthias Längin et al. Consistent success in life-supporting porcine cardiac xenotransplantation, Nature (2018). DOI: 10.1038/s41586-018-0765-z Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0765-z
https://medicalxpress.com/news/2018-12-transplanting-pig-hearts-baboons.html
Abstract Heart transplantation is the only cure for patients with terminal cardiac failure, but the supply of allogeneic donor organs falls far short of the clinical need 1 , 2 , 3 . Xenotransplantation of genetically modified pig hearts has been discussed as a potential alternative 4 . Genetically multi-modified pig hearts that lack galactose-α1,3-galactose epitopes (α1,3-galactosyltransferase knockout) and express a human membrane cofactor protein (CD46) and human thrombomodulin have survived for up to 945 days after heterotopic abdominal transplantation in baboons 5 . This model demonstrated long-term acceptance of discordant xenografts with safe immunosuppression but did not predict their life-supporting function. Despite 25 years of extensive research, the maximum survival of a baboon after heart replacement with a porcine xenograft was only 57 days, and this was achieved, to our knowledge, only once 6 . Here we show that α1,3-galactosyltransferase-knockout pig hearts that express human CD46 and thrombomodulin require non-ischaemic preservation with continuous perfusion and control of post-transplantation growth to ensure long-term orthotopic function of the xenograft in baboons, the most stringent preclinical xenotransplantation model. Consistent life-supporting function of xenografted hearts for up to 195 days is a milestone on the way to clinical cardiac xenotransplantation 7 . Main Xenotransplantation of genetically multi-modified α1,3-galactosyltransferase-knockout pig hearts that express human CD46 and thrombomodulin (blood group 0) was performed using the clinically approved Shumway orthotopic technique 8 . Fourteen captive-bred baboons ( Papio anubis , blood groups B and AB) served as recipients. All recipients received basic immunosuppression, similar to that described previously 5 : induction therapy included anti-CD20 antibody, anti-thymocyte-globulin, and the monkey-specific anti-CD40 mouse/rhesus chimeric IgG4 monoclonal antibody (clone 2C10R4) 9 or our own humanized anti-CD40L PASylated (conjugated with a long, structurally disordered Pro-Ala-Ser amino acid chain) antigen-binding fragment (Fab) 10 . During maintenance therapy, methylprednisolone was reduced gradually, whereas mycophenolate mofetil and anti-CD40 monoclonal antibody or anti-CD40L PASylated Fab treatment remained constant (Extended Data Table 1 ). Postoperative treatment of the recipients has been described elsewhere 11 . In group I ( n = 5), donor organs were preserved with one of two clinically approved crystalloid solutions (4 °C custodiol HTK (histidine-tryptophan-ketoglutarate) or Belzer’s UW solution), perfused after cross-clamping the ascending aorta before excision of the porcine donor organ. The hearts were kept in plastic bags filled with ice-cold solution and surrounded by ice cubes (static preservation). The results of group I were disappointing. Despite short ischaemic preservation periods (123 ± 7 min), the animals survived for only 1 day ( n = 3), 3 days ( n = 1) and 30 days ( n = 1) (Fig. 1a ). The four short-term survivors were successfully taken off cardiopulmonary bypass (CPB) and three could be extubated, but all were lost due to severe systolic left heart failure in spite of a high dose of intravenous catecholamines (Extended Data Fig. 1 ). This so-called ‘perioperative cardiac xenograft dysfunction’ (PCXD) 12 has been observed in 40 to 60% of the orthotopic cardiac xenotransplantation experiments described in the literature 4 .
The only 30-day survivor (which received a heart preserved with Belzer’s UW solution) gradually developed left ventricular myocardial hypertrophy and stiffening, resulting in progressive diastolic left ventricular failure associated with increased serum levels of troponin T, an indicator of myocardial damage (Fig. 1b ). Increased serum bilirubin levels (Fig. 1c ) and several other clinically relevant chemical parameters (Table 1 ) indicated associated terminal liver disease. Upon necropsy, marked cardiac hypertrophy (Fig. 1e ) with a thickened left ventricular myocardium and a decreased left ventricular cavity was evident (Fig. 1f ). Fig. 1: Survival, laboratory parameters, necropsy and histology after orthotopic xenotransplantation. a , Kaplan–Meier curve of survival of groups I (black; n = 5 animals), II (red; n = 4 animals) and III (magenta; n = 5 animals). Two-sided log-rank test, P = 0.0007. b , c , Serum concentrations of cardiac troponin T ( b ) and bilirubin ( c ). d , Left ventricular (LV) masses of xenografted hearts from animals 9 (group II), 11 and 13 (both group III); note increased graft growth after discontinuation of temsirolimus (arrow). e – g , Front view of the porcine donor heart and own heart of baboon 3 ( e , left and right, respectively; group I) and transverse cuts of the porcine donor hearts (left) and the baboons’ own hearts (right) of animals 3 ( f ) and 11 ( g ). Note the extensive left ventricular hypertrophy and reduction of left ventricular cavity of the donor organ of baboon 3 in contrast to animal 11. h , i , Haematoxylin and eosin staining of the left ventricular myocardium of the donor (left) and the liver of the recipient (right). Scale bars, 100 μm. h , The myocardium of animal 9 showed multifocal cell necroses with hypereosinophilia, small vessel thromboses, moderate interstitial infiltration of lymphocytes, neutrophils and macrophages. The liver of this animal had multifocal centrolobular cell vacuolizations and necroses as well as multifocal intralesional haemorrhages. i , The myocardium of baboon 11 had sporadic infiltrations of lymphocytes, multifocal minor interstitial oedema whereas the liver had small vacuolar degeneration of hepatocytes (lipid type). j , Wheat germ agglutinin-stained myocardial sections of a sham-operated porcine heart (left), and the hearts transplanted into animals 9 (centre) and 11 (right). Scale bar, 50 μm. e – j , n = 4, groups I/II; n = 3, group III; n = 1, control; one representative biological sample for each group is shown for group I/II, group III and control ( j ). k , Quantitative analysis of cardiomyocyte cross-sectional areas. Data are mean ± s.d., P values are indicated, one-way analysis of variance (ANOVA) with Holm – Sidak’s multiple comparisons test ( n = 3 biologically independent samples with 5–8 measurements each). l , Western blot analysis of myocardium from transplanted hearts of animals 11 and 12 showed reduced mTOR phosphorylation (p-mTOR) compared to age-matched control samples. n = 2, group III; n = 2, controls. For gel source data, see Supplementary Fig. 1 . Source data Full size image Table 1 Serum levels of liver and heart enzymes, platelet counts and prothrombin ratio at the end of experiments that lasted longer than two weeks Full size table To reduce the incidence of the PCXD that was observed in group I, we explored new ways to improve xenograft preservation. 
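As an aside on the survival comparison in Fig. 1a: the group-wise log-rank test (reported as P = 0.0007) can be reproduced in outline with standard survival-analysis tooling. The sketch below uses per-animal survival days as stated in the text; treating the protocol-mandated euthanasias and the technical termination as censored observations is our reading, so these figures are approximations rather than the authors' records.

```python
from lifelines.statistics import multivariate_logrank_test

# Approximate survival (days) per recipient, read from the text.
days = [1, 1, 1, 3, 30,            # group I: static cold preservation
        4, 18, 27, 40,             # group II: continuous perfusion
        51, 90, 90, 182, 195]      # group III: perfusion + growth control
group = ["I"] * 5 + ["II"] * 4 + ["III"] * 5
# 1 = graft failure/death observed, 0 = censored (technical termination or
# euthanasia mandated by the study protocol).
events = [1, 1, 1, 1, 1,
          0, 1, 1, 1,
          1, 0, 0, 0, 0]
result = multivariate_logrank_test(days, group, events)
print(result.p_value)   # expect a small value (the paper reports P = 0.0007)
```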
In group II ( n = 4), the same immunosuppressive regime as in group I was used, but the pig hearts were preserved with an 8 °C oxygenated, albumin-containing hyperoncotic cardioplegic solution containing nutrients, hormones and erythrocytes 13 . From explantation until transplantation, the organs were continuously perfused and oxygenated using a heart-perfusion system. During implantation surgery, the hearts were intermittently perfused every 15 min until the aortic clamp was opened at the end of transplantation. After non-ischaemic continuous organ perfusion (206 ± 43 min), all four baboons in group II could easily be taken off CPB, showed better graft function compared to animals in group I and required less catecholamine support (Extended Data Fig. 1 ). No organ was lost owing to PCXD. One experiment had to be terminated on the fourth postoperative day because of a technical failure; the other three animals lived for 18, 27 and 40 days (Fig. 1a ). Echocardiography during the experiments revealed increasing hypertrophy of the left ventricular myocardium as measured by left ventricular mass 14 , 15 (Fig. 1d ), left ventricular stiffening and decreasing left ventricular filling volumes (Extended Data Fig. 2a ). Graft function remained normal throughout the experiments, but diastolic relaxation gradually deteriorated (Supplementary Video 1 ). Troponin T levels were consistently above the normal range and increased markedly at the end of each experiment (Table 1 and Fig. 1b ); simultaneously, platelet counts decreased whereas lactate dehydrogenase (LDH) increased (Table 1 and Extended Data Fig. 3 a, b ), suggesting thrombotic microangiopathy as described for heterotopic abdominal cardiac xenotransplantation 5 , 16 . In addition, secondary liver failure developed: increasing serum bilirubin concentrations (Fig. 1c ) and a decrease in prothrombin ratio and reduction in cholinesterase indicated a reduction in liver function, while increased serum activities of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) pointed to liver damage (Table 1 ). At necropsy, the weight of group II hearts had more than doubled (on average 259%) compared to the time point of transplantation. Histology confirmed myocardial cell hypertrophy (Fig. 1j, k ) and revealed multifocal myocardial necroses, thromboses and immune cell infiltration (Fig. 1h ); in the liver, multifocal cell necroses were observed (Fig. 1h ). Taken together, these alterations are consistent with diastolic pump failure and subsequent congestive liver damage resulting from massive cardiac overgrowth. However, immunofluorescence analyses of the myocardium and plasma levels of non-galactose-α1,3-galactose xenoreactive antibodies 17 did not indicate humoral rejection of the graft (Fig. 2 and Extended Data Fig. 4 ). Fig. 2: Quantitative evaluation of antibodies, complement and fibrin in myocardial tissue and serum levels of non-galactose-α1,3-galactose xenoreactive antibodies. a – e , Quantitative evaluation of fluorescence intensities ( n = 9 biologically independent samples with 5–10 measurements per experiment; for representative images see Extended Data Fig. 4 ). Raw integrated densities are shown for IgM ( a ), IgG ( b ), C3b/c ( c ), C4b/c ( d ) and fibrin ( e ). Group I (animal 3), black; group II (animals 6, 8, 9), red; group III (animals 11–14), magenta. C3b/c and C4b/c values are compared to those of controls measured in healthy pig hearts. Data are mean ± s.d.
f , g , Levels of non-galactose-α1,3-galactose xenoreactive IgM and IgG antibodies in baboon plasma; antibody binding to α-galactosyltransferase-knockout porcine aortic endothelial cells that express human CD46 and thrombomodulin was analysed by fluorescence-activated cell sorting. Values are expressed as mean fluorescence intensity. Animals 6, 9 and 10 received an anti-CD40L PASylated Fab, the others were treated with an anti-CD40 monoclonal antibody. Plasma from a baboon who rejected a heterotopically intrathoracic transplanted pig heart served as positive control (grey). Source data Full size image To prevent diastolic heart failure, we investigated means of reducing cardiac hypertrophy. The following modifications were made for group III ( n = 5): recipients were weaned from cortisone at an early stage and received antihypertensive treatment (pigs have a lower systolic blood pressure than baboons, around 80 compared to approximately 120 mm Hg, respectively) and additional temsirolimus medication was used to counteract cardiac overgrowth. After heart perfusion times of 219 ± 30 min, all five animals were easily taken off CPB, comparable to group II (Extended Data Fig. 1 ). None of the recipients in group III showed PCXD; all reached a steady state with good heart function after four weeks. One recipient (10) developed recalcitrant pleural effusions that were caused by occlusion of the thoracic lymph duct and was therefore euthanized after 51 days. Two recipients (11, 12) lived in good health for three months until euthanasia, according to the study protocol (Fig. 1a ). In these three recipients, echocardiography revealed no increase in left ventricular mass (Fig. 1d ); graft function remained normal with no signs of diastolic dysfunction (Extended Data Fig. 2b and Supplementary Video 2 ). Biochemical parameters of heart and liver functions as well as LDH levels and platelet counts were normal or only slightly altered throughout the experiments (Table 1 , Fig. 1b, c and Extended Data Fig. 3a, b ), consistent with normal histology (Fig. 1i ). Histology of left ventricular myocardium showed no signs of hypertrophy (Fig. 1j, k ), and western blot analysis of the myocardium revealed phosphorylation levels of mTOR that were lower than non-transplanted age-matched control hearts (Fig. 1l ). Similar to group II, there were no signs of humoral graft rejection in group III (Fig. 2 and Extended Data Fig. 4 ). The study protocol for group III was extended aiming at a graft survival of six months. The last two recipients in this group (13, 14) were allowed to survive in good general condition for 195 and 182 days, with no major changes to platelet counts or serum LDH and bilirubin levels (Fig. 1a, c and Extended Data Fig. 3a, b ). Intravenous temsirolimus treatment was discontinued on day 175 and on day 161. Up to this point, systolic and diastolic heart function was normal (Supplementary Video 3 ). Thereafter, increased growth of the cardiac graft was observed in both recipients (Fig. 1d ), emphasizing the importance of mTOR inhibition in the orthotopic xenogeneic heart xenotransplantation model. Similar to the changes observed in group II, the smaller recipient 13 developed signs of diastolic dysfunction, which was associated with elevated serum levels of troponin T and the start of congestive liver damage (increased serum ALT and AST levels, decreased prothrombin ratio and cholinesterase); platelet counts remained within normal ranges (Table 1 , Fig. 1b, c and Extended Data Fig. 3a, b ). 
Histology confirmed hepatic congestion and revealed multifocal myocardial necroses without immune cell infiltrations or signs of thrombotic microangiopathy. In the larger recipient 14, who had to be euthanized simultaneously with animal 13, the consequences of cardiac overgrowth were minimal. Here we show consistent survival of life-supporting pig hearts in non-human primates for at least three months that meets the preclinical efficacy requirements for the initiation of clinical xenotransplantation trials as suggested by an advisory report of the International Society for Heart and Lung Transplantation 7 . Two steps were key to success. First, non-ischaemic porcine heart preservation was found to be important for the survival of the xenografted hearts. Xenografted hearts from group I that underwent ischaemic static myocardial preservation with crystalloid solutions (as used for clinical allogeneic procedures) showed PCXD in four out of five cases, necessitating higher amounts of catecholamines. This phenomenon is clearly similar to ‘cardiac stunning’, the occurrence of which has been known since the early days of cardiac surgery and does not represent hyperacute rejection 4 . By contrast, in groups II and III (non-ischaemic porcine heart preservation by perfusion) 13 , all nine recipients came off CPB easily since their cardiac outputs remained unchanged compared to baseline. The short-term results achieved in these groups were excellent even by clinical standards. The second key step was the prevention of detrimental xenograft overgrowth. Previous pig-to-baboon kidney and lung transplantation experiments have suggested that growth of the graft depends more on intrinsic factors than on stimuli from the recipient such as growth hormones 18 . The massive cardiac hypertrophy in our group-II recipients indicates a more complex situation. Notably, a transplanted heart in this group had a 62% greater weight gain than the non-transplanted heart of a sibling in the same time span (Extended Data Fig. 2c ). In group III, cardiac overgrowth was successfully counteracted by a combination of treatments: (i) decreasing the blood pressure of the baboons to match the lower porcine levels; (ii) tapering cortisone at an early stage—cortisone can cause hypertrophic cardiomyopathy in early life in humans 19 ; and (iii) using the sirolimus prodrug temsirolimus to mitigate myocardial hypertrophy. Sirolimus compounds are known to control the complex network of cell growth by inhibiting both mTOR kinases 20 . There is clinical evidence that sirolimus treatment can attenuate myocardial hypertrophy and improve diastolic pump function 21 , 22 , as well as ameliorate rare genetic overgrowth syndromes in humans 23 . In addition to the effects of human thrombomodulin expression in the graft 5 , 24 , temsirolimus treatment may prevent the formation of thrombotic microangiopathic lesions even further by reducing collagen-induced platelet aggregation and by destabilizing platelet aggregates formed under shear stress conditions 25 . In summary, our study demonstrates that consistent long-term life-supporting orthotopic xenogeneic heart transplantation in the most relevant preclinical model is feasible, facilitating clinical translation of xenogeneic heart transplantation. Methods Animals Experiments were carried out between February 2015 and August 2018. Fourteen juvenile pigs of cross-bred genetic background (German Landrace and Large White, blood group 0) served as donors for heart xenotransplantation. 
All organs were homozygous for α1,3-galactosyltransferase knockout (GTKO), and heterozygous transgenic for human CD46 (hCD46) and human thrombomodulin (hTM) 24 (Revivicor and Institute of Molecular Animal Breeding and Biotechnology). Localization and stability of hCD46 and hTM expression were verified post mortem by immunohistochemistry (Extended Data Fig. 5 ). Donor heart function and absence of valvular defects were evaluated seven days before transplantation by echocardiography. Fourteen male captive-bred baboons ( P. anubis , blood groups B and AB) were used as recipients (German Primate Centre). The study was approved by the local authorities and the Government of Upper Bavaria. All animals were treated in compliance with the Guide for the Care and Use of Laboratory Animals (US National Institutes of Health and German Legislation). Anaesthesia and analgesia Baboons were premedicated by intramuscular injection of ketamine hydrochloride 6–8 mg kg −1 (ketavet 100 mg ml −1 ; Pfizer) and 0.3–0.5 mg kg −1 midazolam (midazolam-ratiopharm; Ratiopharm). General anaesthesia was induced with an intravenous bolus of 2.0–2.5 mg kg −1 propofol (propofol-lipuro 2%; B. Braun Melsungen) and 0.05 mg fentanyl (fentanyl-janssen 0.5 mg; Janssen-Cilag), and maintained with propofol (0.16 ± 0.06 mg kg −1 min −1 ) or sevoflurane (1–2 vol% endexpiratory; sevorane, AbbVie) and bolus administrations of fentanyl (6–8 μg kg −1 , repeated every 45 min) as described elsewhere 11 . Continuous infusion of fentanyl, ketamine hydrochloride and metamizole (novaminsulfon-ratiopharm 1 g per 2 ml; Ratiopharm) was applied postoperatively to ensure analgesia. Explantation and preservation of donor hearts Pigs were premedicated by intramuscular injection of ketamine hydrochloride 10–20 mg kg −1 , azaperone 10 mg kg −1 (stresnil 40 mg ml −1 ; Lilly Deutschland) and atropine sulfate (atropinsulfat B. Braun 0.5 mg; B. Braun Melsungen). General anaesthesia was induced with an intravenous bolus of 20 mg propofol and 0.05 mg fentanyl and maintained with propofol (0.12 mg kg −1 min −1 ) and bolus administrations of fentanyl (2.5 μg kg −1 , repeated every 30 min). After median sternotomy and heparinization (500 IU kg −1 ), a small cannula was inserted into the ascending aorta, which was then cross-clamped distal to the cannula. In group I, the heart was perfused with a single dose of 20 ml kg −1 crystalloid cardioplegic solution at 4 °C: custodiol HTK solution (Dr. Franz Köhler Chemie) was used for the hearts for animals 2, 4 and 5; Belzer’s UW solution (Preservation Solutions) was used for the hearts for animals 1 and 3. The appendices of the right and left atrium were opened for decompression. The heart was then excised, submersed in cardioplegic solution and stored on ice. In groups II and III, hearts were preserved as described previously 13 , using 3.5 l of an oxygenated albumin-containing hyperoncotic cardioplegic nutrition solution with hormones and erythrocytes at a temperature of 8 °C in a portable extracorporeal heart preservation system consisting of a pressure- and flow-controlled roller pump, an O 2 /CO 2 exchanger, a leukocyte filter, an arterial filter and a cooler/heater unit. After aortic cross-clamping, the heart was perfused with 600 ml preservation medium, excised and moved into the cardiac preservation system.
A large cannula was introduced into the ascending aorta and the mitral valve was made temporarily incompetent to prevent left ventricular dilation; the superior vena cava was ligated; however, the inferior vena cava, pulmonary artery and pulmonary veins were left open for free outlet of perfusate. The heart was submersed in a reservoir filled with cold perfusion medium and antegrade coronary perfusion commenced through the already placed aortic cannula. The perfusion pressure was regulated at exactly 20 mm Hg. During implantation, the heart was intermittently perfused for 2 min every 15 min. Implantation technique The recipient’s thorax was opened at the midline. Unfractionated heparin (500 IU kg −1 ; heparin-natrium-25000-ratiopharm, Ratiopharm) was given and the heart–lung machine connected, using both venae cavae and the ascending aorta. CPB commenced and the recipient was cooled to 30 °C in group I, and 34 °C in groups II and III. After cross-clamping the ascending aorta, the recipient’s heart was excised at the atrial level, and both large vessels were cut. The porcine donor heart was transplanted using Shumway’s and Lower’s technique 8 . A wireless telemetric transmitter (Data Sciences International) was implanted in a subcutaneous pouch in the right medioclavicular line between the fifth and sixth rib. Pressure probes were inserted into the ascending aorta and the apex of the left ventricle, and an electrocardiogram lead was placed in the right ventricular wall. Immunosuppressive regimen, anti-inflammatory and additive therapy Immunosuppression was based on the previously published regimen 5 , with C1 esterase inhibitor instead of cobra venom factor for complement inhibition (Extended Data Table 1 ). Induction consisted of anti-CD20 antibody (mabthera; Roche Pharma), ATG (thymoglobuline, Sanofi-Aventis), and either an anti-CD40 monoclonal antibody (mouse/rhesus chimeric IgG4 clone 2C10R4, NIH Non-human Primate Reagent Resource, Mass Biologicals; courtesy of K. Reimann; animals 1–3, 5, 7, 8, 11–14) or humanized anti-CD40L PASylated Fab (XL-Protein and Wacker-Chemie; animals 4, 6, 9, 10). Maintenance immunosuppression consisted of mycophenolate mofetil (CellCept, Roche; trough level 2–3 μg ml −1 ), either the anti-CD40 monoclonal antibody (animals 1–3, 5, 7, 8, 11–14) or anti-CD40L PASylated Fab (animals 4, 6, 9, 10), and methylprednisolone (urbasone soluble, Sanofi-Aventis). Anti-inflammatory therapy included an IL-6-receptor antagonist (RoActemra, Roche), a TNF inhibitor (enbrel, Pfizer) and an IL-1-receptor antagonist (Kineret, Swedish Orphan Biovitrum). Additive therapy consisted of acetylsalicylic acid (aspirin, Bayer Vital), unfractionated heparin (heparin-natrium-25000-ratiopharm, Ratiopharm), C1 esterase inhibitor (berinert, CSL Behring), ganciclovir (cymevene, Roche), cefuroxime (cefuroxim, Hikma) and epoetin beta (neorecormon 5000IU, Roche P). Starting from 10 mg kg −1 per day, methylprednisolone was slowly reduced by 1 mg kg −1 every 10 days in groups I and II; in group III, methylprednisolone was tapered down to 0.1 mg kg −1 within 19 days. Also in group III, temsirolimus (torisel, Pfizer) was added to the maintenance immunosuppression, administered as daily short intravenous infusions aiming at rapamycin trough levels of 5–10 ng ml −1 .
Group III also received continuous intravenous antihypertensive medication with enalapril (Enahexal, Hexal AG, Holzkirchen, Germany) and metoprolol tartrate (Beloc, AstraZeneca), aiming at mean arterial pressures of 80 mm Hg and a heart rate of 100 b.p.m. Haemodynamic measurements After induction of general anaesthesia, a central venous catheter (Arrow International) was inserted in the left jugular vein and an arterial catheter (Thermodilution Pulsiocath; Pulsion Medical Systems) in the right femoral artery. Cardiac output and stroke volume were assessed by transpulmonary thermodilution and indexed to the body surface area of the recipient using the formula 0.083 × B^0.639, where B is body weight in kg. Measurements were taken after induction of anaesthesia and 60 min after termination of CPB in steady state and recorded with PiCCOWin software (Pulsion Medical Systems). All data were processed with Excel (Microsoft) and analysed with GraphPad Prism 7.0 (GraphPad Software). Quantification of left ventricular mass, left ventricular mass increase and fractional shortening Transthoracic echocardiographic examinations were carried out under analgosedation at regular intervals using an HP Sonos 7500 (HP) and a Siemens Acuson X300 (Siemens); midpapillary short axis views were recorded. Left ventricular end diastolic diameter (LVEDD) and left ventricular end systolic diameter (LVESD), interventricular septum thickness at end diastole (IVSd) and posterior wall thickness at end diastole (PWd) were measured; the mean of three measurements was used for further calculations and visualization (Excel and PowerPoint, Microsoft). Left ventricular mass was calculated using equation (1); relative left ventricular mass increase and left ventricular (LV) fractional shortening (FS) were calculated using equations (2) and (3) according to previously published methods 14 , 15 . $$\mathrm{LV\;mass\;(g)}=0.8\left(1.04\left((\mathrm{LVEDD}+\mathrm{IVSd}+\mathrm{PWd})^{3}-\mathrm{LVEDD}^{3}\right)\right)+0.6$$ (1) $$\mathrm{LV\;mass\;increase\;(\%)}=\left(\frac{\mathrm{LV\;mass}_{\mathrm{end}}}{\mathrm{LV\;mass}_{\mathrm{start}}}-1\right)\times 100$$ (2) $$\mathrm{FS\;(\%)}=\frac{\mathrm{LVEDD}-\mathrm{LVESD}}{\mathrm{LVEDD}}\times 100$$ (3) Necropsy and histology Necropsies and histology were performed at the Institute of Veterinary Pathology and the Institute of Pathology. Specimens were fixed in formalin, embedded in paraffin and plastic, sectioned and stained with haematoxylin and eosin. Histochemical analysis Cryosections (8 μm) were generated using standard histological techniques. Cardiomyocyte size was quantified as the cross-sectional area. In brief, 8-μm thick cardiac sections of the left ventricle were stained with Alexa Fluor 647-conjugated wheat germ agglutinin (Life Technologies) and the nuclear dye 4′,6-diamidino-2-phenylindole (DAPI, Life Technologies). Images were acquired with a 63× objective using a Leica TCS SP8 confocal microscope; SMASH software (MATLAB) was used to determine the average cross-sectional area of cardiomyocytes in one section (200–300 cells per section and 5–8 sections per heart).
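The echocardiographic quantities in equations (1)–(3), together with the body-surface-area indexing used for the thermodilution measurements, reduce to a few lines of code. The following Python sketch is ours and not part of the authors' analysis pipeline (which used Excel and GraphPad Prism); the function names and the example values are purely illustrative.

def lv_mass_g(lvedd, ivsd, pwd):
    # Equation (1): left ventricular mass (g) from M-mode measurements in cm.
    return 0.8 * (1.04 * ((lvedd + ivsd + pwd) ** 3 - lvedd ** 3)) + 0.6

def lv_mass_increase_pct(lv_mass_end, lv_mass_start):
    # Equation (2): relative left ventricular mass increase (%).
    return (lv_mass_end / lv_mass_start - 1.0) * 100.0

def fractional_shortening_pct(lvedd, lvesd):
    # Equation (3): left ventricular fractional shortening (%).
    return (lvedd - lvesd) / lvedd * 100.0

def body_surface_area_m2(body_weight_kg):
    # BSA used to index cardiac output and stroke volume: 0.083 * B**0.639.
    return 0.083 * body_weight_kg ** 0.639

# Hypothetical example values (not measurements from the study):
m0 = lv_mass_g(3.5, 0.8, 0.8)   # baseline geometry
m1 = lv_mass_g(3.3, 1.4, 1.4)   # hypertrophied geometry
print(lv_mass_increase_pct(m1, m0))          # ~112% increase
print(fractional_shortening_pct(3.5, 2.2))   # ~37%
print(body_surface_area_m2(20.0))            # ~0.56 m2 for a 20 kg baboon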
Immunofluorescence staining Myocardial tissue biopsies were embedded in Tissue-Tek (Sakura Finetek) and stored frozen at −80 °C. For immunofluorescence staining, 5-μm cryosections were cut, air-dried for 30 to 60 min and stored at −20 °C until further analysis. The cryosections were fixed with ice-cold acetone, hydrated and stained using either one-step direct or two-step indirect immunofluorescence techniques. The following antibodies were used: rabbit anti-human C3b/c (DAKO), rabbit anti-human C4b/c-FITC (DAKO), goat anti-pig IgM (AbD Serotec), goat anti-human IgG–FITC (Sigma-Aldrich) and rabbit anti-human fibrinogen–FITC (DAKO). Secondary antibodies were donkey anti-goat IgG–Alexa Fluor 488 (Thermo Fisher Scientific) and sheep anti-rabbit Cy3 (Sigma-Aldrich). Nuclear staining was performed using DAPI (Boehringer, Roche Diagnostics). The slides were analysed using a fluorescence microscope (DM14000B; Leica). Five to ten immunofluorescence pictures per marker were acquired randomly and the fluorescence intensity was quantified using ImageJ software, version 1.50i, on unmanipulated TIFF images. All pictures were taken under the same conditions to allow correct quantification and comparison of fluorescence intensities. Assessment of non-galactose-α1,3-galactose antibody levels Plasma levels of non-galactose-α1,3-galactose baboon IgM and IgG antibodies were measured by flow cytometry following the consensus protocol published previously 17 . In brief, GTKO/hCD46/hTM porcine aortic endothelial cells were collected and suspended at 2 × 10 6 cells per ml in staining buffer (PBS containing 1% BSA). Plasma samples were heat-inactivated at 56 °C for 30 min and diluted 1:20 in staining buffer. Porcine aortic endothelial cells were incubated with diluted baboon plasma for 45 min at 4 °C. Cells were then washed with cold staining buffer and incubated with goat anti-human IgM–RPE (Southern Biotech) or goat anti-human IgG–FITC (Thermo Fisher) for 30 min at 4 °C. After rewashing with cold staining buffer, cells were resuspended in PBS, fluorescence was acquired on FACS LSRII (BD Biosciences) and data were analysed using FlowJo analysis software for detection of mean fluorescence intensity (MFI) in the FITC channel or in the RPE channel. Data were then plotted using Prism 7 (GraphPad Software). Western blot analysis For protein extraction, heart samples were homogenized in Laemmli sample buffer and the protein content was estimated using the bicinchoninic acid (BCA, Merck) protein assay. Then, 20 μg total protein was separated by 10% SDS–PAGE and transferred to PVDF membranes (Millipore) by electroblotting. Membranes were washed in Tris-buffered saline solution with 0.1% Tween-20 (Merck) (TBS-T) and blocked in 5% w/v fat-free milk powder (Roth) for 1 h at room temperature. Membranes were then washed again in TBS-T and incubated with the appropriate primary antibody in 5% w/v BSA (Roth) overnight at 4 °C. The following antibodies were used: rabbit anti-human p-mTOR (5536; Cell Signaling), rabbit anti-human mTOR (2983; Cell Signaling) and rabbit anti-human GAPDH (2118; Cell Signaling). After washing, membranes were incubated in 5% w/v fat-free milk powder with a horseradish peroxidase-labelled secondary antibody (goat anti-rabbit IgG; 7074; Cell Signaling) for 1 h at room temperature. Bound antibodies were detected using an enhanced chemiluminescence detection reagent (ECL Advance Western Blotting Detection Kit, GE Healthcare) and appropriate X-ray films (GE Healthcare).
After detection, membranes were stripped (2% SDS, 62.5 mM Tris-HCl, pH 6.7, 100 mM β-mercaptoethanol) for 30 min at 70 °C and incubated with the appropriate second antibody. Immunohistochemical staining Myocardial tissue was fixed with 4% formalin overnight, paraffin-embedded and 3-μm sections were cut and dried. Heat-induced antigen retrieval was performed in Target Retrieval solution (S1699, DAKO) in a boiling water bath for 20 min for hCD46 and in citrate buffer, pH 6.0, in a steamer for 45 min for hTM, respectively. Immunohistochemistry was performed using the following primary antibodies: mouse anti-human CD46 monoclonal antibody (HM2103, Hycult Biotech) and mouse anti-human thrombomodulin monoclonal antibody (sc-13164, Santa Cruz). The secondary antibody was a biotinylated AffiniPure goat anti-mouse IgG (115-065-146, Jackson ImmunoResearch). Immunoreactivity was visualized using 3,3-diaminobenzidine tetrahydrochloride dihydrate (DAB) (brown colour). Nuclear counterstaining was done with haemalum (blue colour). Statistical analysis For survival data, Kaplan–Meier curves were plotted and the Mantel–Cox log-rank test was used to determine significant differences between groups. For haemodynamic data, statistical significance was determined using unpaired and paired two-sided Student’s t -tests as indicated; data presented as single measurements with bars as group means ± s.d. For histochemical analysis, a one-way ANOVA with Holm–Sidak’s multiple comparisons was used to determine statistical significance; data are mean ± s.d.; P < 0.05 was considered significant. No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. Change history 03 April 2019 In this Letter, Mayuko Kurome and Valeri Zakhartchenko have been added to the author list (affiliated with Institute of Molecular Animal Breeding and Biotechnology, Gene Center, LMU Munich, Munich, Germany). The author list and ‘Author contributions’ section have been corrected online; see accompanying Amendment.
A large team of researchers from several institutions in Germany, Sweden, Switzerland and the U.S. has transplanted pig hearts into baboons and kept them alive for an extended period of time. In their paper published in the journal Nature, the group describes changes they made to heart transplant procedures and how well they worked. Christoph Knosalla with the German Heart Center Berlin has written a News & Views piece on the work done by the team in the same journal issue. As populations age in advanced countries, medical providers confront a growing number of health issues, one of which is how to deal with cardiac damage from heart disease. Currently, the only treatment for such patients is a transplant. But every year, the list grows longer, and many people die waiting for a donor. One possible solution to this problem is to breed animals that are anatomically similar to humans and use their hearts. But thus far, research in this area has run into a wall—baboons that receive pig hearts, for example, rarely live for more than a month. In this new effort, the researchers claim to have overcome the problems associated with transplanting pig hearts into baboons, and in so doing have brought some optimism to the idea of doing the same for humans. Prior research has shown that there are two main reasons that pig hearts fail to thrive in baboons: damage that occurs during the procedure, and size differences between pigs and baboons. The normal procedure for preserving a heart after removal from an animal before transplantation has been immersing it in a cold preservation solution. But this method has been found to damage heart tissue. To prevent that damage, the researchers periodically pumped an oxygenated solution of blood mixed with nutrients and hormones through the heart. Doing so appeared to prevent heart damage. Pigs are bigger than baboons and thus have bigger hearts—if a pig heart grows to its normal size inside of a baboon, it runs out of room and fails. To prevent this from happening, the researchers gave the baboons medicine to bring their blood pressure down to the level found in pigs. They also gave the baboons a drug that prevents heart growth. And finally, they modified the immunosuppressive hormone treatment given to the baboons: they tapered the cortisone treatments to prevent heart overgrowth. By combining the techniques, the researchers found they were able to extend the lives of the baboon recipients. Two lived for three months—the length of the study—and another two lived for six months before they were euthanized.
10.1038/s41586-018-0765-z
Medicine
Estimation of R0 for the spread of SARS-CoV-2, a difficult key factor
Juan Pablo Prada et al. Estimation of R0 for the spread of SARS-CoV-2 in Germany from excess mortality, Scientific Reports (2022). DOI: 10.1038/s41598-022-22101-7 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-22101-7
https://medicalxpress.com/news/2022-10-r0-sars-cov-difficult-key-factor.html
Abstract For SARS-CoV-2, R0 calculations in the range of 2–3 dominate the literature, but much higher estimates have also been published. Because capacity for RT-PCR testing increased greatly in the early phase of the Covid-19 pandemic, R0 determinations based on these incidence values are subject to strong bias. We propose to use Covid-19-induced excess mortality to determine R0 regardless of RT-PCR testing capacity. We used data from the Robert Koch Institute (RKI) on the incidence of Covid cases, Covid-related deaths, number of RT-PCR tests performed, and excess mortality calculated from data from the Federal Statistical Office in Germany. We determined R0 using exponential growth estimates with a serial interval of 4.7 days. We used only datasets that were not yet under the influence of policy measures (e.g., lockdowns or school closures). The uncorrected R0 value for the spread of SARS-CoV-2 based on RT-PCR incidence data was 2.56 (95% CI 2.52–2.60) for Covid-19 cases and 2.03 (95% CI 1.96–2.10) for Covid-19-related deaths. However, because the number of RT-PCR tests increased by a growth factor of 1.381 during the same period, these R0 values must be corrected accordingly (R0corrected = R0uncorrected/1.381), yielding 1.86 for Covid-19 cases and 1.47 for Covid-19 deaths. The R0 value based on excess deaths was calculated to be 1.34 (95% CI 1.32–1.37). A sine-function-based adjustment for seasonal effects of 40% corresponds to a maximum value of R0 January = 1.68 and a minimum value of R0 July = 1.01. Our calculations show an R0 that is much lower than previously thought. This relatively low range of R0 fits very well with the observed seasonal pattern of infection across Europe in 2020 and 2021, including the emergence of more contagious escape variants such as delta or omicron. In general, our study shows that excess mortality can be used as a reliable surrogate to determine the R0 in pandemic situations. Introduction The basic replication number (R0) of a virus describes the average number of secondary infections caused by an infected individual in an immunologically still naive population 1 . R0 is a key factor in predicting the spread of a virus in a population. It is also used to estimate the proportion of individuals required in a population to achieve herd immunity 2 . In addition, the magnitude of R0 can also be used to predict whether a respiratory virus in temperate climates will develop a seasonal pattern of infection (as observed with influenza viruses and endemic coronaviruses) rather than continuous transmission throughout the year 3 . R0 is influenced not only by intrinsic characteristics of the pathogen, such as its infectivity and mode of transmission, but also by characteristics of the population under study such as the population density 4 . For respiratory viruses, there are several such extrinsic characteristics that have a significant impact on the probability of transmission and thus on R0: The density of a population, the number of persons living in a household and their average vulnerability to infections, other social factors that affect the number of close contacts between infected and uninfected persons (e.g., use of public transportation, work laws when ill, etc.), and also the climate of the area in which the population is located 5 . Based on data from 425 confirmed cases in Wuhan, R0 of SARS-CoV-2 was estimated to be 2.2 6 . Another report estimating R0 based on case reports in Wuhan yielded a higher R0 of 5.7 7 . 
This wide range of values is also reflected in a number of other analyses in which R0 was determined to be between 1.95 (WHO estimate) and 6.49 (all reviewed in Ref. 8 ). The German Robert-Koch-Institute (RKI) assumes an R0 in the range of 2.8–3.8 9 based on systematic reviews 10 , 11 , 12 . All these estimations of R0 have in common that they are based on incidences of SARS-CoV-2 infections detected by RT-PCR. These estimations are therefore not only dependent on the characteristics of the population under study, but also on testing strategies (e.g. representative sampling, symptom-based testing, contact-based testing of index patients, etc.) as well as rapidly increasing numbers of available and performed tests during the early weeks of the pandemic (at least, if no mathematical corrections for this increase were performed). Because SARS-CoV-2 infections have led to excess mortality in many countries worldwide 13 , the increase in excess mortality can be used as a surrogate for SARS-CoV-2 infections in order to calculate R0 independent of testing strategies and testing capacity. Here, we determined R0 for SARS-CoV-2 infections in Germany during the early phase of the pandemic in February and March 2020 based on Covid-19-associated excess mortality. For comparison, we also calculated R0 from incidence data of SARS-CoV-2 infections corrected by the increase in test capacities, as well as R0 from incidence data of RT-PCR-confirmed Covid-19-related deaths. Methods Databases The number of Covid-19 cases, Covid-19-related deaths and SARS-CoV-2 RT-PCR tests was accessed from the Robert-Koch-Institute (RKI) website 14 . The definition of “Covid-19 case” used here is that of the RKI, which does not use the date of receipt of a positive PCR sample, but rather the date of illness, which in some cases is several days earlier 13 . In accordance with Section 11 (1) of the IfSG (Infektionsschutzgesetz), the public health authorities only report cases of illness or death and evidence of pathogens that meet the case definition in accordance with Section 11 (2) IfSG. Excess mortality was calculated from data of the Federal Statistical Office 15 . All used datasets can be downloaded as an Excel file from the Supplementary Material S1 . Mobility data was taken from the Apple website 16 , which provided the movement data of Apple cell phones from different countries to be used for scientific evaluation in the context of the Covid pandemic. As of April 14, 2022, Apple is no longer providing COVID-19 mobility trends reports. The datafile used for this study can be accessed in the Supplement Sect. S5 . All methods were performed in accordance with the Declaration of Helsinki. Calculation of excess mortality To calculate excess mortality per calendar week, the mean of weekly deaths in 2016–2019 was subtracted from the number of weekly deaths in 2020, in line with the definition used by the Federal Statistical Office to calculate excess mortality in Germany 15 . For calculation of “adjusted excess deaths”, the excess mortality in calendar week 10 was tared to 0 in all age groups and the values of the following calendar weeks were adjusted accordingly. Calculation of R0 R0 was determined using the R package from Obadia et al. 17 in R version 3.6.0. We selected the exponential growth method of the package for calculation of R0. The mean serial interval (average time between successive infection cases) was simulated following a gamma distribution with mean equal to 4.7 (± SD 2.9) 18 .
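In outline, the analysis therefore has two computational steps: weekly excess mortality (2020 deaths minus the 2016–2019 mean of the same week, later tared to calendar week 10) and an exponential-growth fit converted to R0 through the serial-interval distribution. The Python sketch below re-expresses these steps under the Wallinga–Lipsitch relation R0 = 1/M(−r), where M is the moment generating function of the gamma-distributed serial interval; the original analysis used the R0 package in R, the function names and the simple log-linear fit here are our assumptions, and the gamma-based conversion of weekly to simulated daily incidence described next is not reproduced.

import numpy as np
from scipy.stats import linregress

def weekly_excess_mortality(deaths_2020, deaths_2016_2019):
    # Excess deaths per calendar week: 2020 minus the 2016-2019 mean of the
    # same week; deaths_2016_2019 is an array of shape (4, n_weeks).
    return np.asarray(deaths_2020, dtype=float) - np.mean(deaths_2016_2019, axis=0)

def adjust_to_reference_week(excess, ref_index):
    # "Adjusted excess deaths": tare the reference week (calendar week 10
    # in the paper) to zero so the subsequent Covid-19-related rise stands out.
    return excess - excess[ref_index]

def r0_exponential_growth(daily_incidence, si_mean=4.7, si_sd=2.9):
    # Fit the exponential growth rate r from log-linear incidence (assumes
    # strictly positive counts), then apply R0 = 1/M(-r) for a gamma serial
    # interval (Wallinga & Lipsitch).
    t = np.arange(len(daily_incidence))
    r = linregress(t, np.log(daily_incidence)).slope  # growth rate per day
    shape = (si_mean / si_sd) ** 2                    # gamma shape from mean/SD
    scale = si_sd ** 2 / si_mean                      # gamma scale
    # Gamma MGF: M(u) = (1 - scale*u)**(-shape), so 1/M(-r) = (1 + r*scale)**shape
    return (1.0 + r * scale) ** shape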
Weekly incidence values of excess mortality were converted to simulated daily incidence values using a gamma distribution. The R-script can be downloaded from the Supplementary Material S2 . Results Determination of the time period that can be used for the calculation of R0 The governments of Germany and its states have taken several measures to contain the SARS-CoV-2 epidemic in Germany in early 2020, including canceling mass events (implemented March 9), closing schools (implemented March 16), closing stores (except grocery stores and pharmacies) and implementing social distancing rules prohibiting personal contact outside the family (implemented March 23) (Fig. 1 A). All of these measures, as well as widespread media coverage of the SARS-CoV-2 epidemic in Germany, likely had an impact on the spread of SARS-CoV-2. Therefore, to estimate the value of R0 in Germany, it is imperative to include only data from time points that either predate the implementation of these measures or at which these measures could not yet have had an impact on the observed parameter used to calculate R0. As shown in Fig. 1 A, people in Germany started to reduce their mobility from March 12, i.e., even a few days earlier than social distancing was officially introduced. Because the incubation period between SARS-CoV-2 infection and the onset of Covid-19 symptoms is on average 5–6 days 9 , behavioral changes can lead to an impact on the number of disease cases no earlier than 5–6 days later (i.e. March 17–18). However, because the number of confirmed SARS-CoV-2 infections already peaked on March 14 (see Fig. 2 B) and because we wanted to avoid underestimating R0 in our calculations by possibly including values from an already flattening curve, we added an additional safety margin of 2 days to the first measurable behavioral changes and included incidence data of Covid-19 disease cases up to and including March 15 (calendar week 12) in our R0 calculations, without risking that behavioral changes had an impact on the R value determined (Fig. 1 C). Figure 1 Identifying SARS-CoV-2 datasets unaffected by policies or behavioral changes for estimating R0. ( A ) Mobility data (driving) for Germany, provided by Apple 16 (driving: red; transit: blue; walking: green). The first change in mobility trends is observed for March 13. ( B ) Data provided by RKI for Covid-cases (black) and Covid-19-related deaths (red) were fitted by gamma distribution. The maxima of the two curves are 25 days apart. ( C ) Graphical representation of the date up to which data from Covid-19 disease cases or Covid-19 death cases can be used to determine R0 without affecting the outcome through policy actions or societal responses. Full size image Figure 2 Calculation of R0 from Covid-19 disease incidence numbers and Covid-19 related deaths. ( A ) Data reported by the RKI for the number of performed SARS-CoV-2 RT-PCR tests. ( B ) Data reported by the RKI for Covid-19-cases (blue symbols, left y-axis) and Covid-19 related deaths (red symbols, right y-axis). ( A,B ) Data were fitted to an exponential growth curve with a serial interval of 4.7 (± SD 2.9) to calculate R0. Dotted lines in B represent the dates for political interventions (03/09/2020 cancellation of mass events, 03/16/2020 closing of schools, 03/23/2020 closing of shops and social distancing).
Dark blue and dark red symbols represent data points that were considered for the determination of R0, light blue and light red symbols represent later data points that were not considered for the calculation. Full size image The RKI provides different epidemiological datasets that can be used for calculations of R0 of SARS-CoV-2, such as daily numbers of detected cases and daily numbers of Covid-19-related deaths. We fitted the data to a gamma distribution and determined the difference between the peaks of the curves. The mean time between the occurrence of Covid-19 disease cases and Covid-19-related deaths was 25 days (Fig. 1 B). Covid-19-related deaths can therefore be used for the determination of R0 at significantly later time points than the occurrence of Covid-19 disease cases without the risk of compromising the R0 value by behavioral changes. Therefore, to determine R0 from reported Covid-19 deaths (as well as from Covid-19-related excess mortality), we used records up to and including April 11 (calendar week 15) (Fig. 1 C). Calculation of R0 from incidence data of Covid-19 disease cases and Covid-19 deaths In the initial phase of the pandemic, testing capacities were significantly smaller than the actual number of infections. The steep rise in the number of reported cases during this period might therefore also be due in significant part to the sharp increase in the number of SARS-CoV-2 RT-PCR tests performed. For our calculations of R0, we used incidence data up to and including calendar week 12 for Covid-19 disease cases and up to and including calendar week 15 for Covid-19 deaths. While data for the number of tests performed are not available for the period before calendar week 11, the RKI provides at least the number of tests performed from week 11 onwards 14 . As depicted in Fig. 2 A, a significant increase in the number of tests performed can be observed in calendar weeks 11 to 13. To determine what impact this increase in testing numbers had on reported Covid-19 incidences, we determined the growth rate of testing during this period. The growth rate of testing yields an “R0 of tests” of 1.38 (Fig. 2 A), meaning that even if the number of infections remained constant during this period of time, an apparent increase of 1.38 in R0 would be observed. It follows that R0 values from incidence figures must be corrected by this factor. From the raw incidence data, we obtain an R0 of 2.56 for Covid-19 disease cases and an R0 of 2.03 for Covid-19 death cases (Fig. 2 B). However, these values must still be corrected for the growth rate of testing (R0 corrected = R0 uncorrected /“R0 of tests”), resulting in a corrected R0 of 1.86 for Covid-19 disease cases and an R0 of 1.47 for Covid-19 death cases. The R0 value derived from deaths is slightly lower than the R0 value determined from Covid-19 incidence values. This may be due to the fact that in the initial phase of the pandemic, severely ill cases (and thus individuals at higher risk of death) were preferentially tested, while milder and asymptomatic cases were increasingly included in testing as the testing capacity expanded. Such a change in testing strategy inevitably introduces a bias toward higher R0 values when calculated from Covid-19 incidence data compared with Covid-19 death data.
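The correction step itself is plain arithmetic. A minimal sketch with the numbers quoted above (the variable names are ours; the small difference to the reported 1.86 reflects rounding of the inputs):

# Growth of RT-PCR testing over the same window acts like an "R0 of tests".
r0_of_tests = 1.381

r0_cases_uncorrected = 2.56    # exponential-growth estimate from case incidence
r0_deaths_uncorrected = 2.03   # exponential-growth estimate from reported deaths

# R0_corrected = R0_uncorrected / "R0 of tests"
r0_cases_corrected = r0_cases_uncorrected / r0_of_tests    # ~1.85 (paper: 1.86)
r0_deaths_corrected = r0_deaths_uncorrected / r0_of_tests  # ~1.47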
However, even if we correct the incidence values for test capacity dynamics, this way of determining R0 still remains subject to many uncertainties: First, the exact numbers of tests performed in the first weeks of the pandemic were not collected for Germany, so that an accurate estimate of the dynamics of testing capacity is not possible. Second, the incidence data do not come from representative samples in the general population, but mainly from symptomatic patients and persons with whom they came into contact. Therefore, this dataset contains a disproportionate number of infections from nursing homes and hospitals, where symptomatic infections are overrepresented and where transmission probabilities are most likely different from what would be expected in the general population. As a result, the R0 values calculated above are unlikely to be representative of the spread of the virus in the general population. Calculation of R0 from excess mortality To address this problem, we also determined R0 based on excess mortality data in Germany during early 2020. The Federal Statistical Office of Germany lists all deaths that occur in Germany, regardless of their cause 15 . Because SARS-CoV-2 infection has led to increased excess mortality in many countries, these data can be used as surrogate markers for the spread of SARS-CoV-2 infections. And because excess mortality is independent of the number or strategy of SARS-CoV-2 testing, it provides a representative picture for the spread of infections in the general population. Figure 3 A shows the incidence of deaths with confirmed SARS-CoV-2 infection. The data set shown here is the same as that in Fig. 2 B, but this time as weekly incidence and subdivided into different age groups. The (uncorrected) R0 value here is 1.95, similar to 2.03 from Fig. 2 B. From this figure, it can be seen that the peak of Covid-19 related mortality is between calendar week 10 and 20. Figure 3 B shows excess mortality (in relation to average weekly deaths in 2016–2019) in the different age groups, and one can see a parallel trend to the confirmed Covid-related deaths between calendar weeks 10 and 20. Based on the respective values of calendar week 10, from which an increase in excess mortality is observed in all Covid-19 relevant age groups, we plotted the change in all values in Fig. 3 C. From this adjusted excess mortality, we obtained an R0 of 1.34 (95% CI 1.32–1.37) for the sum of all age groups for the spread of SARS-CoV-2 in the general population in Germany. For the individual age groups, the R0 values were very similar: age 90+: 1.38 (1.34–1.42), age 80–89: 1.37 (1.35–1.40), age 70–79: 1.31 (1.26–1.36), age 60–69: 1.22 (1.16–1.29), age 50–59: 1.59 (1.45–1.74), age 30–49: 1.16 (1.06–1.26), age 0–29: 1.89 (95% CI 0.47–5.56). Figure 3 Calculation of R0 from excess mortality. ( A ) Covid-19 related deaths as weekly incidence in different age groups. ( B ) Excess deaths in 2020 in different age groups based on comparison with average weekly deaths in 2016–2019. ( C ) Excess deaths presented in B, but adjusted to 0 for week 10 in each age group, so that relative changes related to Covid-19 become better visible. Full size image Influence of influenza-related excess deaths on Covid-19 related excess deaths Due to the lack of representative measurements, Covid-19 related excess mortality is the only infection parameter that is free from bias due to changes in testing strategy or testing numbers.
However, excess mortality data are subject to other confounding factors that may have an impact on the calculation of R0: The Covid-19 pandemic reached Germany at a time when seasonal influenza activity in Germany was already subsiding (see Fig. 4 C). Influenza-related excess mortality and Covid-19 related excess mortality are therefore superimposed in the total excess mortality data sets. If influenza mortality had been significantly elevated in 2020 compared with previous years (2016–2019), this could mask Covid-19 related effects, particularly if influenza mortality rates that were already falling again coincided with an incipient increase in Covid-19 mortality rates. However, a look at mortality in previous years shows that influenza-related mortality in Germany in 2020 was significantly lower compared with previous years because of the two exceptionally strong influenza years 2017 and 2018 (Fig. 4 A). As a result, under-mortality was observed in Germany in calendar weeks 10–14 compared with previous years, rather than excess mortality (Fig. 4 B). Thus, if there was an effect of influenza-related deaths on the calculation of R0 for Covid-19 infections, it was one that resulted in an overestimate of R0 rather than an underestimate. Consequently, the R0 calculated here of 1.34 (95% CI 1.32–1.37) should be regarded as a maximum value, whereas the actual R0 of SARS-CoV-2 infections in Germany is likely to be even lower. Figure 4 Comparison of Influenza-related and Covid-19 related mortality in Germany. ( A ) Daily deaths in Germany from 2016 to 2021 (data taken from the Federal Statistical Office of Germany). ( B ) Comparison of daily number of deaths in 2020 (red line) with average number of daily deaths in 2016–2019 (blue line). ( C ) Positive rate for influenza infections in Germany for calendar weeks 1–20 from 2016 to 2020 (data taken from the RKI influenza survey). Full size image Seasonal effects on R0 The seasonal effect on R0 can be approximated as a sinusoidal pattern with a maximum in January as the coldest month in Germany and a minimum in July as the warmest month, with an approximate 40% reduction in July compared to January 3 (Fig. 5 , dotted line). As our calculation of R0 was based on the infection situation in March 2020, it can be expected that the R0 value determined in this way is about 20% lower than the maximum value that would be reached in January had the pandemic reached Germany earlier. According to this model, R0 determined in March with a value of R0 March = 1.34 would reach its maximum in January with a value of R0 January = 1.68 and fall to a minimum of R0 July = 1.01 in July (Fig. 5 A). Figure 5 Seasonal influence on R0 and herd immunity. ( A ) The seasonal effect on R0 can be assumed as a sine function with a maximum in January and a 40% lower minimum in July (dashed line, right y-axis). This translates into an oscillating R0 with a maximum in January (of R0 January = 1.68) and a minimum in July (R0 July = 1.01) (blue line, left y-axis). R0 calculations are based on the values calculated for March (R0 March = 1.34) (blue dot) ( B ) Herd immunity, similarly to R0, oscillates in a seasonal pattern. Full size image Herd immunity is dependent on R0 (herd immunity = (1 − 1/R0)), and therefore also fluctuates seasonally. Thus, herd immunity against the original SARS-CoV-2 strain would oscillate between 40% in January and below 1% in July (Fig. 5 B).
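Written out explicitly, the seasonal model is a cosine with its maximum in January and a 40% lower minimum in July, anchored to the January value implied by the stated ~20% offset of the March estimate. The sketch below is our parameterisation of this model, not the authors' code; note that we anchor directly on the 20% figure given in the text rather than deriving it from the cosine's own March value.

import numpy as np

R0_MARCH = 1.34        # excess-mortality estimate for March 2020
JULY_REDUCTION = 0.40  # July minimum is 40% below the January maximum

r0_jan = R0_MARCH / (1.0 - 0.20)  # ~1.68, per the stated ~20% offset
months = np.arange(1, 13)         # 1 = January ... 12 = December
season = 1.0 - (JULY_REDUCTION / 2.0) * (1.0 - np.cos(2.0 * np.pi * (months - 1) / 12.0))
r0_by_month = r0_jan * season     # ~1.68 in January, ~1.01 in July

# Herd-immunity threshold 1 - 1/R0 oscillates accordingly (floored at 0).
threshold = np.clip(1.0 - 1.0 / r0_by_month, 0.0, None)
print(f"January: R0 = {r0_by_month[0]:.2f}, threshold = {threshold[0]:.1%}")  # ~40%
print(f"July:    R0 = {r0_by_month[6]:.2f}, threshold = {threshold[6]:.1%}")  # <1%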
This explains why the number of SARS-CoV-2 infections in summer 2020 not only declined again in “lockdown” countries such as Germany (and remained low in these countries even after the suspension of policy measures during the summer), but also why the same seasonal pattern was observed in countries with little or no countermeasures against SARS-CoV-2 13 . With a seasonal increase in the threshold of herd immunity, the following winter (2020/2021) fueled again the spread of SARS-CoV-2 in Germany, until infection numbers dropped again in spring 2021. At this time, about 3.8 million SARS-CoV-2 infections had been reported to the RKI, corresponding to 4.5% of the population (May 2021) 19 , and a serological survey of blood donors revealed a seropositivity rate of 14% in April 2021 20 , showing a substantial underestimation of SARS-CoV-2 infections from RT-PCR data alone. Together with the rollout of the Covid-vaccine campaign, immunity in the German population had reached up to 40% by the end of May 2021, and that coincided with the emergence of the delta variant, which can now be interpreted as an escape variant that overcame the 40% herd immunity restrictions of the original SARS-CoV-2 strains by higher contagiousness (and therefore also a higher R0). Since the population in Germany was no longer naïve towards SARS-CoV-2 during summer of 2021 when the delta variant began its expansion in Germany, it is not possible to determine an R0 for this variant. However, the RKI calculates daily R e values based on a 7-day period for Germany, and the values for the delta variant reached 1.3 during July/August 2021 21 . With a seasonal variation of 40%, this value corresponds to a theoretical maximum in December with an R e = 2.2, translating into a 55% threshold for winter herd immunity. Discussion In its early phase, SARS-CoV-2 spread in Germany with an R0 of 1.34 (95% CI 1.32–1.37). This value is much lower than what had been expected based on R0 determinations from the literature, where values between 2 and 3 became consensus 8 , 9 . Although the German RKI has not published an R0 estimation for Germany, it provided daily estimations of R based on a four-day period. These daily R values during the first two weeks of March 2020 were in the range of 2.2–3.2 22 . Based on the reporting data for positive RT-PCR results from the “our world in data” database of Oxford University, an R0 of 3.37 was determined for Germany 23 , but these calculations used RT-PCR reporting data rather than data for Covid-19 disease cases and therefore are not fully comparable with the calculations from the RKI. The discrepancies between these high values and the rather low R0 estimates in our manuscript are primarily due to the fact that in these earlier estimations the R0 values were not corrected by a factor accounting for the substantial increase in test capacity during this period. If we use our uncorrected R0 estimate based on Covid-19 case numbers for comparison (R0 = 2.56, Fig. 2 B), it is in the same order of magnitude as the values calculated by the RKI. A high R0 of the order of 3 would likely have resulted in a lack of seasonal progression, as a seasonal effect was estimated to reduce R0 by only 40% based on observations in endemic coronaviruses 3 . Accordingly, health authorities expected an unrestrained spread of the virus for Germany, whereupon a series of policy measures were adopted aiming to actively reduce the incidence of infection.
In retrospect, however, a clearly seasonal occurrence is evident not only for Germany, but also for all other countries in temperate climates, in particular also for Sweden, where hardly any measures have been taken to contain the spread of SARS-CoV-2 in the general population 13 . The core of this study is the calculation of R0 from excess mortality data. Although this type of calculation is free of sources of bias that affect conventional calculations of R0 from infection incidence data (e.g., increase in number of tests, changes in testing strategy, increase in reporting awareness), we would like to point out some limitations of the present study as well: Excess mortality is dependent on multiple factors, and pandemic-related medical shortages could lead to an increase in excess mortality that is independent of a direct effect of SARS-CoV-2. Such a bias would lead to an overestimation of R0. However, we believe that such an effect, if present at all, is likely to have played only a minor role: First, there was no significant reduction in medical care in Germany during the Covid pandemic, so no significant secondary effects on mortality rates would be expected. Moreover, our R0 calculations are significantly lower than most other calculations, so that empirically such an effect is unlikely to have played a major role. Another point is that the calculation of excess mortalities takes into account not only the death figures of the respective period under consideration (in our case spring 2020), but also the mortality figures of a reference period before that (in our case the past 4 years 2016–2019). An increase or decrease in excess mortality in the period under consideration is therefore always dependent on the development of death rates in previous years. We have tried to consider this bias at least qualitatively (see Fig. 4 ), showing that this effect might have led to an overestimation of R0 in our calculation. The use of excess mortality as a surrogate for the spread of infection in a population requires the assumption that the proportion of particularly vulnerable groups (the elderly and patients with preexisting conditions) in the total incidence of infection does not change significantly during the analysis period. Data from the RKI on the age distribution of Covid infections show a constant age distribution of infections in the early phase (weeks 10 to 12) of the pandemic in Germany 24 , so we can assume that the number of Covid-related deaths is indeed a reliable surrogate marker for infection incidence. Factors such as changing seroprevalence or changing variants of SARS-CoV-2 were not included in the calculation of R0. This simplification seems appropriate to us because in the early phase of the pandemic, seroprevalence was still very low and thus could have only marginal influence on the spread of infection. In addition, at the time of analysis (until week 12 in 2020), in Germany and Europe there were almost exclusively the very closely related SARS-CoV-2 clades 20A, 20B, 20C and 20D, for which a similar transmissibility can be assumed 25 . Finally, the calculation of R0 depends on the size of the serial interval: the larger the serial interval, the larger the calculated value for R0. A large number of articles have now appeared in the literature determining the serial interval of SARS-CoV-2 infection at the beginning of the pandemic. In a meta-analysis of 56 articles, a range of 1–9.99 was determined 18 .
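To make this serial-interval sensitivity concrete, the same exponential-growth relation used above (R0 = (1 + r·scale)^shape for a gamma-distributed serial interval) can be evaluated for several SI means at a fixed growth rate. In the sketch below, the growth rate is a hypothetical value chosen so that the paper's SI of 4.7 days reproduces R0 ≈ 1.34, and the SD is held at 2.9 days purely for illustration.

def r0_from_growth_rate(r, si_mean, si_sd=2.9):
    # Wallinga-Lipsitch: R0 = 1/M(-r) for a gamma-distributed serial interval.
    shape = (si_mean / si_sd) ** 2
    scale = si_sd ** 2 / si_mean
    return (1.0 + r * scale) ** shape

r = 0.066  # per day; illustrative, tuned so SI = 4.7 d gives R0 ~ 1.34
for si_mean in (4.0, 4.7, 6.0, 8.0):
    print(f"SI mean {si_mean:.1f} d -> R0 = {r0_from_growth_rate(r, si_mean):.2f}")
# Longer serial intervals yield larger R0 for the same growth rate
# (~1.28, 1.34, 1.46, 1.67 here).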
The German RKI assumes a serial interval of 4 days in its estimates for calculating Re. The SI of 4.7 days chosen in this paper thus leads to a slight overestimation of R0 compared to the parameters used by the RKI. In Supplement 4 we have shown a calculation of R0 for different serial intervals. The concept of herd immunity in respiratory pathogens such as coronaviruses does not imply permanent protection of the population against seasonal reemergence of these pathogens, since the immunity achieved may decrease over time, especially in asymptomatically infected patients 26 . Instead, the achievement of herd immunity in respiratory viruses leads to a strong selection pressure for escape mutations (classical immune escape or increased contagiousness), which can then give rise to new waves of infection 27 . For this reason, respiratory viruses such as influenza viruses or coronaviruses remain endemic, despite broad immunity, which will probably also be the case for SARS-CoV-2. Conclusion Our study shows that the R0 value of SARS-CoV-2 can be calculated from excess mortality data. We also introduce here the concept of a seasonally adjusted R0 value, which should be reported as a range (R0 January –R0 July ) rather than a static value. We determined an R0 value of 1.34 for infections in March 2020 (R0 March = 1.34), corresponding to a seasonal range of R0 January = 1.68 and a minimum in July (R0 July = 1.01). This rather low range of R0 values is much more consistent with observations of pandemic progression than many earlier and much higher estimates of the R0 value. The massive expansion of testing capacity in the early phase of the pandemic, combined with changes in testing strategy, was a major cause of the overestimation of the R0 value. Excess mortality can be determined independently of SARS-CoV-2 testing capacity in many countries, and therefore can be a valuable tool in future pandemics to provide reliable values for the rate of spread of an emerging pathogen in a population when representative samples of pathogen spread are not available. Data availability All data generated or analysed during this study are included in this published article and its Supplementary Information files. Change history 21 December 2022 A Correction to this paper has been published.
In 2020, at the beginning of the pandemic, the whole of Germany was fascinated by the so-called R-value, which was published daily in the media. If it was above 1, it was clear that the pandemic would continue to spread. A value below 1, on the other hand, promised a decline in the number of infections. Values of more than 2, as calculated by the Robert Koch Institute during this time, did not bode well: they stood for an exponential spread of SARS-CoV-2. However, a study now published in the journal Scientific Reports concludes that in reality, the R-value was significantly lower than previously assumed. Scientists from the Institute of Virology and Immunobiology and the Chair of Bioinformatics at the Julius-Maximilians-Universität Würzburg (JMU) are responsible for these calculations. Lead authors are virologist Carsten Scheller and bioinformatician Thomas Dandekar. R0 value: Basis of many predictions "The so-called basic replication number R0 of a virus describes the average number of people that an infected person infects in a population that has not yet had any contact with the virus," Carsten Scheller says. Accordingly, R0 is a key factor when it comes to making predictions about the spread of a virus or to estimate how many people need to become infected to achieve the goal of herd immunity. "In addition, the R0 value can be used to predict whether a respiratory virus in temperate climates will develop more of a seasonal pattern of infection, such as influenza viruses, or whether there will be continuous transmission throughout the year," the virologist said. Estimates vary widely In the case of SARS-CoV-2, the R0 values determined varied significantly. An initial estimate based on data from 425 confirmed cases in Wuhan yielded a value of 2.2, while later calculations yielded 5.7. Estimates reviewed in the literature ranged from 1.95 (a WHO estimate) to 6.49, and the German Robert Koch Institute assumes an R value in the range of 2.8 to 3.8 based on systematic reviews. "What all these estimates and calculations have in common is that they are based on the incidence of SARS-CoV-2 infections detected with a PCR test," says Carsten Scheller. The problem here is that these estimates depend not only on the characteristics of the population studied, but also on testing strategies and the increasing number of tests available and performed in the first weeks of the pandemic—at least if no mathematical corrections are made for this increase. Excess mortality as a basis To rid its calculations of such uncertain influencing factors, the team at the University of Würzburg decided to use a different data basis: the increase in mortality compared to pre-pandemic years, known as excess mortality. "Since SARS-CoV-2 infection has led to increased excess mortality in many countries, these data can be used as a surrogate marker for the spread of coronavirus. And because excess mortality is independent of the number or strategy of tests performed, it provides a representative picture of the spread of infection in the general population," the virologist said. However, the team had to consider one important aspect: the question of the right time to collect the data. Germany, for example, had taken early measures to contain the SARS-CoV-2 epidemic—ranging from a ban on mass gatherings (March 9) to school closures (March 16) to a complete lockdown (March 23). All of these restrictions, as well as widespread media coverage, may have had an impact on the spread of the coronavirus, according to the researchers.
"For this reason, we used only deaths up to and including April 11, 2020, for our calculations. Since the infections of these deaths already occurred on average 25 days before death, the measures adopted to contain the infections could not have had any influence on the calculated value of R0," explains bioinformatician Thomas Dandekar. Expansion of test capacities leads to overestimation On this basis, the team concludes, "We determined an R0 value of 1.34 for infections in March 2020, which corresponds to a seasonal range of 1.68 in January and a minimum of 1.01 in July." This rather low range of R0 values is much more consistent with observations of pandemic progression than many earlier and much higher estimates, according to those involved in the study. The main reasons for this overestimation, they believe, is the massive expansion of testing capacity in the early stages of the pandemic, combined with changes in testing strategy. This relatively low R0 value contributed significantly to a decline in infection numbers in the spring of 2020, according to the research group. The effects of the lockdown on the spread of the virus, therefore, may not have been as high as it might seem. Given a similar trend, therefore, any cost-benefit consideration of lockdown measures would have to be different than in the past. New virus variants may have different R-values What significance do these findings have now, given that other SARS-CoV-2 variants have long been sweeping the country? "The new, usually more contagious variants should certainly have a different R0 value in a human population unaffected by the virus than the original variant. However, this can never be calculated again, because the population has built up a great deal of immunity through infections it has already undergone and also through vaccinations," explains Carsten Scheller. However, the lower R0 value of the original variant now provides a better view of the individual waves of infection. Finally, a low R value also means that herd immunity is achieved early on. Then the virus has no way to spread further, with the consequence that new variants are constantly being formed. "So the individual waves, caused by so-called immune-escape variants, are something quite normal," says the virologist. Once about a third of the population is infected, these waves of the original variant break off on their own, which is important to know for planning countermeasures, he says. Limitations must be taken into account However, the authors of the study now published are aware that their model is also subject to certain limitations. For example, pandemic-related medical shortages could lead to an increase in excess mortality that is independent of a direct effect of viral infections. Such a bias would lead to an overestimation of the R value. In addition, statements about excess mortality always refer to a specific reference period, such as the previous year or, as in the case of this study, the preceding four years. Accordingly, an increase or decrease is always dependent on the development of mortality rates in the preceding years. And finally, it is a prerequisite for the determination of correct values that the share of particularly vulnerable groups in the total infection incidence does not change significantly during the analysis period. Moreover, over time, the more infectious viral variants are preferentially passed on and eventually dominate the incidence of infection. 
If all of these factors are taken into account, the authors of the study are convinced, excess mortality would be a valuable tool in future pandemics for determining reliable values for the rate of spread of an emerging pathogen in a population. On this basis, policy decisions could be better adapted to reality.
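The approach described in the article can be summarized in a short computational sketch. The following R code is illustrative only: all numbers are invented, and the steps are an assumption based on the description above, not the authors' actual code.

```r
# Illustrative sketch (invented numbers, not the authors' code):
# 1) excess deaths = observed weekly deaths minus a pre-pandemic baseline,
# 2) date the underlying infections ~25 days before death,
# 3) fit exponential growth to the excess-mortality curve,
# 4) convert the growth rate r to R0 via a fixed serial interval.

deaths   <- c(2600, 2750, 2950, 3250, 3700, 4300, 5100)  # weekly deaths, hypothetical
baseline <- 2500                                          # mean of the four pre-pandemic years
excess   <- deaths - baseline

week_of_death  <- 10:16                  # calendar week of death, hypothetical
week_infection <- week_of_death - 25/7   # shift back by the mean infection-to-death lag

fit <- lm(log(excess) ~ week_infection)  # log-linear fit of exponential growth
r   <- coef(fit)[2] / 7                  # growth rate per day

SI <- 4.7                                # serial interval in days, as chosen in the paper
exp(r * SI)                              # R0 under the fixed-SI exponential-growth relation
```

Because the fit uses only deaths whose inferred infection dates precede the containment measures, the resulting R0 reflects the unmitigated spread, which is the point of the April 11 cut-off described above.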
10.1038/s41598-022-22101-7
Medicine
Tumor-like spheres help scientists discover smarter cancer drugs
Smitha Kota et al, A novel three-dimensional high-throughput screening approach identifies inducers of a mutant KRAS selective lethal phenotype, Oncogene (2018). DOI: 10.1038/s41388-018-0257-5 Journal information: Oncogene
http://dx.doi.org/10.1038/s41388-018-0257-5
https://medicalxpress.com/news/2018-05-tumor-like-spheres-scientists-smarter-cancer.html
Abstract The RAS proteins are the most frequently mutated oncogenes in cancer, with the highest frequency found in pancreatic, lung, and colon tumors. Moreover, the activity of RAS is required for the proliferation and/or survival of these tumor cells and thus represents a high-value target for therapeutic development. Direct targeting of RAS has proven challenging for multiple reasons stemming from the biology of the protein, the complexity of downstream effector pathways and upstream regulatory networks. Thus, significant efforts have been directed at identifying downstream targets on which RAS is dependent. These efforts have proven challenging, in part due to confounding factors such as reliance on two-dimensional adherent monolayer cell cultures that inadequately recapitulate the physiologic context to which cells are exposed in vivo. To overcome these issues, we implemented a high-throughput screening (HTS) approach using a spheroid-based 3-dimensional culture format, thought to more closely reflect conditions experienced by cells in vivo. Using isogenic cell pairs, differing in the status of KRAS , we identified Proscillaridin A as a selective inhibitor of cells harboring the oncogenic KRAS G12V allele. Significantly, the identification of Proscillaridin A was facilitated by the 3D screening platform; it would not have been discovered employing standard 2D culturing methods. Introduction The RAS proteins are a family of small GTP-binding proteins comprising HRAS, NRAS and two splice forms of KRAS (KRAS4A and KRAS4B), which function as a switch cycling between “ON” (GTP-bound) and “OFF” (GDP-bound) conformations. They mediate signals from the extracellular environment into intracellular signaling pathways and function as master regulators in almost every aspect of cellular behavior, including cell proliferation, differentiation and cell death. Given these functions, the involvement of RAS proteins in pathological conditions, such as cancer, is no surprise. Indeed, the RAS genes are the most frequently mutated oncogenes, with oncogenic mutations found in approximately 30% of all cancers [ 1 , 2 ]. Examples include mutations of KRAS found in pancreatic carcinomas (>90%), lung adenocarcinomas (>30%) and colorectal tumors (>40%). Mutations in HRAS are found mostly in bladder (>15%) and head and neck squamous cell carcinoma (>10%). NRAS mutations are found mostly in melanoma (>30%) and multiple myeloma (18%) [ 3 ]. Mutations in RAS genes cluster most frequently at codons 12, 13 and 61, all within the G-domain of the protein that is involved in nucleotide binding and hydrolysis [ 4 ]. This results in an oncogenic version of the protein that is preferentially in the “ON” state. Given the oncogenic role of RAS proteins and the prevalence of mutations, they have been of great interest as therapeutic targets for decades. However, efforts to target RAS proteins directly have proven challenging [ 4 ]. The majority of efforts have focused on attacking the catalytic G-domain, as well as interference with functionally required post-translational modifications (PTMs). In the case of attacking the G-domain, the use of nucleotide analogs is problematic given the affinities of RAS proteins for GTP, the intracellular concentration of GTP and the kinetics of GTP hydrolysis. Similarly, the identification of small molecules that can specifically bind to RAS proteins has proven extremely difficult.
In the case of PTMs, processing of the RAS C-terminus involves farnesylation, palmitoylation, methylation and proteolysis, all processes that could potentially be targeted. Several approaches to target these processes have been attempted, again with little success [ 4 ]. Other efforts have focused on identifying indirect targets through which RAS proteins drive tumorigenesis or on which RAS proteins are dependent. Efforts to develop inhibitors of downstream effector pathways have provided a number of targets that are currently in various stages of development. Examples include the mitogen-activated protein kinase (MAPK) and phosphoinositide 3-kinase (PI3K) pathways, which are well documented as required for RAS-driven transformation and tumorigenesis. Attempts to target these pathways have focused on development of kinase inhibitors against effectors of these pathways including RAF, mitogen-activated protein kinase kinase (MEK) and PI3K. However, inhibition of either of these pathways alone seems to be insufficient for multiple reasons, including powerful feedback mechanisms, the activation of alternate signaling pathways and the co-opting of new effectors [ 4 ]. As an alternative to targeting downstream effectors, synthetic lethality (SL) approaches have been utilized to identify targets that are non-essential when inhibited in normal cells, but are required for the viability of tumor cells expressing an oncogenic allele of KRAS [ 5 , 6 ]. Initially, the approaches taken were candidate-based, relying on the known functions of select effectors. Examples include the small G-protein Rac1, the cyclin-dependent kinase CDK4, nuclear factor-κB and cyclin D1 (reviewed in Gysin et al. [ 4 ]). More recently, efforts to identify SL interactions have relied on unbiased loss-of-function approaches using RNA interference libraries to knockdown specific mRNAs in cells harboring an oncogenic RAS mutation [ 4 ]. Such efforts typically rely on lentiviral or retroviral libraries to introduce short hairpin RNA into RAS mutant cell lines. Examples include the identification of STK33—a calcium/calmodulin-dependent serine/threonine kinase [ 7 ], TBK1—a non-canonical IκB kinase [ 8 ], Polo-like kinase 1 [ 9 ], Wilms’ tumor 1 (WT1) and Snail2 [ 10 , 11 ]. These efforts have proven extremely challenging and in some cases the SL of hits could not be reproduced [ 12 , 13 , 14 ]. Another approach taken toward identification of oncogenic RAS-specific dependencies involves high-throughput screening (HTS) of small molecule libraries. The scope of these efforts has been much more limited when compared with the genetic screens described above, although a number of compounds have been identified, including sulfinyl cytidine, triphenyltetrazolium, erastin and tolperisone [ 15 , 16 , 17 , 18 , 19 ]. Further development of many of these leads is awaiting identification and confirmation of their respective targets in relevant tumor models. One potential limitation of previous screening efforts was the reliance on two-dimensional (2D) cell culture conditions, in which cells were directly plated onto plastic. These conditions vastly oversimplify the conditions to which cancer cells are exposed in vivo and are likely to be a major confounding factor. Indeed, cells grown on polystyrene in 2D lose many of the characteristics they possess under physiological conditions.
Moreover, there are extensive data showing that cells behave differently when grown in 2D versus 3D conditions, mainly due to different cell–cell and cell–matrix interactions [ 20 , 21 , 22 ]. To overcome many of the challenges presented by first-generation screening efforts, we developed a 3D screening approach that is amenable to HTS small molecule screening using assay conditions that more closely reflect the conditions experienced by cells in vivo. Using this approach, we identified a number of cardiac glycosides (CGs) that exhibit preferential inhibition of pancreatic ductal adenocarcinoma cells carrying oncogenic KRAS mutations. Results Development of the 3D spheroid-based screening assay To identify small molecules that are SL to oncogenic KRAS and overcome many of the challenges faced in previous screening efforts, we sought to conduct a screening campaign using a primary screening platform that more closely reflects conditions experienced by tumor cells in vivo. Toward this goal, we developed a 3D spheroid-based primary screening assay that could be applied in an HTS small molecule screening campaign. In addition, to validate the specificity of hits against mutant KRAS, we developed an isogenic pair of cell lines that differ in the status of KRAS. Specifically, we employed BxPC-3 pancreatic epithelial tumor cells, which are wild-type for KRAS, and generated stable clones expressing KRAS wild-type (BxPC-3-KRAS WT ) or mutant (BxPC-3-KRAS G12V ) alleles. From several stable clones isolated, we confirmed expression of wild-type or mutant alleles by DNA sequencing (data not shown) and selected clones expressing similar levels of KRAS protein compared with the BxPC-3 parental cell line (Fig. 1a, b). Fig. 1 Characterization of the BxPC-3 isogenic cell pair. a Analysis of KRAS expression levels in BxPC-3-KRAS G12V and BxPC-3-KRAS WT stable cell lines. Individual clones were isolated and evaluated for the expression of KRAS by western blotting analysis using anti-KRAS or anti-Vinculin (loading control) antibodies. b Analysis of KRAS expression levels in selected BxPC-3-KRAS G12V and BxPC-3-KRAS WT stable cell lines and the BxPC-3 parental cell line. Confirmation of spheroidicity of c BxPC-3-KRAS WT or d BxPC-3-KRAS G12V cells by confocal imaging. Z-stack images were taken at 10 μm increments from the equator of Hoechst-stained spheroids of BxPC-3-KRAS G12V and BxPC-3-KRAS WT on a GE IN Cell 6000 Analyzer (10 × objective, f = 1.18 AU). Maximum intensity projections along the z axis of the 12 individual planes, aligned in ImageJ to generate an intensity projection biased by color scale, are shown in the left panel. e , f Determination of cell viability assay conditions using CellTiter-Glo 3D (CTG3D). BxPC-3 cells were seeded at increasing numbers in a 384-well spheroid plate, grown for 24 h and treated with CTG3D to assess viability. Relative luminescence of cells was determined at 48 h post-seeding, using a ViewLux microplate imager (PerkinElmer). Error bars = S.D. The data shown represent the mean of three independent replicates with triplicate data points Cells harvested from 2D monolayer culture were subsequently tested for their ability to form multi-cellular spheres over a period of 24 h. Both BxPC-3-KRAS WT and BxPC-3-KRAS G12V cells formed spheroids as determined using light microscopy and confirmed by confocal microscopy of Hoechst-stained spheroids.
Multiple Z-stack images were collected at 10 μm increments and aligned in ImageJ to generate a composite intensity projection biased by color scale (Fig. 1c, d ). We next determined the linearity of spheroid growth using 3D CellTiter-Glo reagent, which allows for determination of cell number based on ATP levels. Linearity of detection was confirmed in the range of 1000–10,000 cells seeded (Fig. 1e, f ). Execution of a 3D screen to identify selective inhibitors of mutated KRAS We first determined the HTS readiness of the assay in 384-well format by examining the results from two separate experiments performed on two separate days ( N = 4 plates) using dimethyl sulfoxide (DMSO) transfers for test wells. We determined averages for the sample-field percent coefficient of variation of 6.6% ± 1.6%, an S:B of 89.14 ± 2.75, a Z score of 0.80 ± 0.05 and a Z’ score of 0.87 ± 0.07 (Fig. 2a, b ). We then screened the BxPC-3-KRAS G12V spheroids in the primary assay against the Spectrum Collection (MicroSource) at a final assay concentration of 12.4 μM. The assay statistics yielded an average Z’ = 0.82 ± 0.07, S:B = 247.8 ± 7.8. Utilizing a hit cut-off of 3 standard deviations plus the average of all samples tested, we found 55 hits that had a percent response >46.81%, equating to a 2.3% hit rate (Fig. 2c ). In order to identify selective inhibitors of mutant KRAS spheroids and eliminate nonspecific hits, we employed a counter screen against BxPC-3-KRAS WT spheroids. For the pilot screen using the BxPC-3-KRAS WT cells, the overall assay statistics yielded an average Z’ = 0.72 ± 0.04, S:B = 258.4 ± 8.3. Utilizing a standard activity hit cut-off of 3 standard deviations plus the average of all samples tested, we found 63 hits that had a percent response >50.69%, equating to a 2.6% hit rate (Fig. 2d ). Of the hits identified, 51 hits inhibited both cell types to a similar extent, whereas 4 hits showed selectivity toward BxPC-3-KRAS G12V spheroids and 12 toward the BxPC-3-KRAS WT spheroids (Fig. 3a and Appendix 1 ). Fig. 2 Spectrum Library Screen of BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells in 3D and 2D formats. a , b Initially, 1280 compounds from the Spectrum Library were screened in duplicate in 3D against BxPC-3-KRAS G12V to validate the 3D assay. a The activity of the compounds was plotted (duplicate data but showing single-point percent response), with high control, low control and hit cut-off shown. b Correlation plot for the two replicate screening datasets. c – f Activity of 2400 compounds on BxPC-3-KRAS WT and BxPC-3-KRAS G12V cells in 3D and 2D formats (singlicate, showing single-point percent response with high control, low control and hit cut-off shown): c 3D format BxPC-3-KRAS G12V , d 3D format BxPC-3-KRAS WT , e 2D format BxPC-3-KRAS G12V , f 2D format BxPC-3-KRAS WT Fig. 3 Comparison of performance of compounds in BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells in 3D and 2D formats. Primary screening results of the Spectrum library against BxPC-3-KRAS WT and BxPC-3-KRAS G12V in 3D and 2D formats. a Four-way Venn diagram of active compounds identified from the four screens. A hit was identified as any compound with % inhibition > the corresponding screen hit cut-off. The numbers in parentheses are the numbers of hits specific for that cell line. The numbers in the boxes represent the number of compounds found to be active in those overlapping assays. b – e Correlation plots of the % inhibition values of compounds in each of the screens: b BxPC-3-KRAS WT , 2D vs. 3D.
c BxPC-3-KRAS G12V , 2D vs. 3D. d BxPC-3-KRAS G12V vs. BxPC-3-KRAS WT , 2D. e BxPC-3-KRAS G12V vs. BxPC-3-KRAS WT , 3D Comparison of 2D and 3D format assays To compare the 3D screening format with a traditional 2D monolayer assay, we carried out a comprehensive analysis and compared the performance of the BxPC-3-KRAS G12V and BxPC-3-KRAS WT cell-based assays under 2D conditions. Again, we used the Spectrum library at a final assay concentration of 12.4 μM. The assay statistics gave an average Z’ = 0.82 ± 0.05, S:B = 130.5 ± 5.70. Utilizing a standard activity hit cut-off of 3 standard deviations plus the average of all samples tested, we found 70 hits that had average inhibition rates >55.22%, equating to a 2.9% hit rate (Fig. 2e ). For the BxPC-3-KRAS WT cells, the overall assay statistics gave an average Z’ = 0.87 ± 0.03, S:B = 128.2 ± 3.3. Utilizing a standard activity hit cut-off of 3 standard deviations plus the average of all samples tested, we found 76 hits that had average inhibition rates >55.52%, equating to a 3.2% hit rate (Fig. 2f ). Of the hits identified, 66 hits inhibited both cell types to a similar extent, whereas 4 hits showed selectivity toward BxPC-3-KRAS G12V cells and 10 toward the BxPC-3-KRAS WT cells (Fig. 3a and Appendix 1 ). Comparing the performance of the cells between 2D and 3D formats suggests that, overall, cells in the 3D assay format are generally more resistant to cytotoxicity (Fig. 3b, c ). These results are in agreement with the IC 50 values determined for a number of well-characterized anti-neoplastic agents, which, when tested in 2D and 3D assay formats, indicate that in the 3D format the IC 50 values of these agents tend to be higher (Supplemental Fig. 1A-D ). Importantly, when comparing the responses of the BxPC-3-KRAS G12V cells with BxPC-3-KRAS WT cells in 2D format, compounds that showed significant inhibition (>3 × S.D.) did not show a preference toward WT over mutant or vice versa (Fig. 3d ). In contrast, when comparing the responses in 3D formatted assays, several compounds showed preferential inhibition toward one of the cell types (Fig. 3e ). Finally, as shown in Fig. 3a , a comparison of the 2D vs 3D responses of the BxPC-3-KRAS G12V cells initially identified two compounds as hits in the 3D format but not in the 2D format. We pursued these two hits further, testing for their IC 50 using 10-point, threefold serial dilutions done in triplicate in both the BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells. Unfortunately, only one compound reproduced activity at the original test concentration and neither compound elicited a meaningful concentration-response curve, indicating that these hits, identified from a single data point, were false positives (Supplemental Fig. 2 ). Characterization of top hits At the completion of the pilot assays, we chose to pursue 15 analogs that appeared to be most active against the KRAS mutant in 3D format. To confirm the activity of these and to determine specificity toward mutant KRAS, we retrieved hit compounds from the original source plates and assessed their activity against the BxPC-3-KRAS G12V or BxPC-3-KRAS WT cells, respectively. Each hit was tested at a single dose (~12.4 μM), in triplicate. Of the top 15 hits, 14 were confirmed as having average inhibition values of >48% (Fig. 4a ). Assessing the activity of the 15 initial hits in the counter screen (BxPC-3-KRAS WT cells) confirmed that several of the compounds were selective toward the BxPC-3-KRAS G12V mutant cells.
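As a concrete illustration of the concentration-response step described above, here is a minimal R sketch of fitting a four-parameter logistic (4PL) model to a 10-point, threefold dilution series. The response values are invented for illustration; this is not the study's analysis code.

```r
# Minimal 4PL fit (invented data) for a 10-point, threefold dilution series,
# mirroring the IC50 determinations described in the text.

conc <- 12.4 / 3^(0:9)                           # µM; threefold dilution from the 12.4 µM screening dose
resp <- c(95, 92, 85, 65, 42, 20, 10, 7, 5, 5)   # % inhibition, hypothetical values

# 4PL model: response rises from 'bottom' to 'top' around the midpoint 'ic50'
fit <- nls(resp ~ bottom + (top - bottom) / (1 + (ic50 / conc)^hill),
           start = list(bottom = 5, top = 95, ic50 = 0.3, hill = 1))

coef(fit)["ic50"]                                # fitted IC50 in µM (~0.2 µM for these invented values)
```

A curve like this only yields a meaningful IC50 when the data span both plateaus, which is why the two single-point 3D-only hits that failed to produce a concentration-response curve were discarded as false positives.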
In particular, two cardiotonic glycosides, compounds SR-838893 and SR-841251 (Proscillaridin A), displayed the largest difference in response rates between the isogenic cell pair (Fig. 4a and Appendix 1 ). As Proscillaridin A displayed the greatest selectivity against the BxPC-3-KRAS G12V cells, we focused on this hit for further characterization. First, we determined the IC 50 of Proscillaridin A against the BxPC-3-KRAS G12V or BxPC-3-KRAS WT spheroids. Although BxPC-3-KRAS G12V cells displayed an IC 50 = 240 nM, treatment of the BxPC-3-KRAS WT cells with Proscillaridin A did not yield a dose-response and the response remained at 30–40% inhibition at all tested doses (Fig. 4b ). We next assessed the activity of Proscillaridin A against a panel of pancreatic tumor cell lines (AsPC-1, HPAF-II and PANC-1, all carrying KRAS G12V mutations) and immortalized pancreatic ductal cells (hTERT-HPNE E6/E7, wild-type for KRAS ). All cell lines were confirmed to form spheroids under the same conditions used for the BxPC-3 cells (data not shown). Proscillaridin A displayed inhibitory activity against all the pancreatic tumor cell lines, with IC 50 values similar to BxPC-3-KRAS G12V cells, in the mid-nanomolar range (Fig. 4c ). However, hTERT-HPNE E6/E7 cells did not respond to Proscillaridin A treatment, similar to the BxPC-3-KRAS WT and parental BxPC-3 cells (Fig. 4d ). Fig. 4 Validation of specificity of select inhibitors toward KRAS mutant cells. a The top 15 compounds from the Spectrum library screen were analyzed at 12.4 μM in triplicate against BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells in 3D format. Statistical significance was determined by unpaired t -test. NS = nonsignificant, * p < 0.001, ** p < 0.0001. b – d Concentration-response curves of Proscillaridin A (SR-841251) on different 3D cell models: b BxPC-3-KRAS G12V and BxPC-3-KRAS WT ; c Pancreatic ductal adenocarcinoma cell lines AsPC-1, HPAF-II and PANC-1. d Parental BxPC-3 or hTERT-HPNE-E6/E7 immortalized pancreatic ductal cells. The data shown represent the mean of three independent experiments with triplicate data points in each. Error bars = S.D. To compare the activity profile of Proscillaridin A in standard 2D culture conditions versus the 3D spheroid assay, we compared the responses of BxPC-3-KRAS G12V or BxPC-3-KRAS WT cells grown in 2D or 3D conditions. Proscillaridin A displayed strong selectivity toward BxPC-3-KRAS G12V spheroids in 3D, resulting in >90% inhibition compared with <10% inhibition of the BxPC-3-KRAS WT spheroids (Fig. 5a ). This selectivity was lost under 2D conditions, where treatment of either BxPC-3-KRAS G12V or BxPC-3-KRAS WT cells with Proscillaridin A at a single dose resulted in about 50% inhibition with no apparent selectivity (Fig. 5b ). As the difference in response to drug treatment could be a reflection of different cellular proliferation rates, we compared the proliferation of BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells grown in 2D or 3D. The BxPC-3-KRAS G12V cells displayed a slightly elevated growth rate, compared with BxPC-3-KRAS WT cells, both in 2D and 3D growth conditions (Fig. 5c, d ). As this difference in growth rates is consistent between the 2D and 3D culture conditions, the selectivity of Proscillaridin A under 3D conditions is unlikely to result from different cell growth rates. Fig. 5 Characterization of select inhibitors in 2D and 3D formats. a , b Validation and specificity of select hits toward KRAS mutant cells in 3D a or 2D b formats.
The top seven compounds from the Spectrum library screen were analyzed at ~12.4 μM in triplicate on BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells in 3D and 2D formats. c , d Evaluation of BxPC-3-KRAS G12V and BxPC-3-KRAS WT cell growth rates in 3D c and 2D d formats. Cells were plated at 2500 cells/well in 3D and 2D formats and the growth rate was evaluated at 24-, 48- and 72-h time points using CTG3D or CTG, respectively. Statistical significance was determined by unpaired t -test. NS = nonsignificant, * p < 0.001, ** p < 0.0001. The data shown represent the mean of three independent experiments with triplicate data points in each. Error bars = S.D. e , f Effects of Proscillaridin A on apoptosis of BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells. Proscillaridin A (PA) was tested at 12.4 µM against BxPC-3-KRAS G12V and BxPC-3-KRAS WT in 3D e and 2D f formats, and monitored at different treatment time points by RT-Glo Annexin V. Statistical significance was determined by unpaired t -test. All points represent the mean of eight independent replicates. Error bars = S.D., * p < 0.05 To determine whether Proscillaridin A has an impact on cell viability, we employed the RT-Glo Annexin V apoptosis assay, which measures the exposure of phosphatidylserine (PS) on the outer leaflet of the cell membrane during the apoptotic process through annexin V binding detected with a luminescence signal. To ascertain the effect of Proscillaridin A in this assay, we tested drug-treated versus vehicle-treated BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells, in both 3D and 2D formats. In the 3D format, Proscillaridin A induced apoptosis at earlier time points and at higher rates in BxPC-3-KRAS G12V cells compared with the BxPC-3-KRAS WT cells. The rate of apoptosis induced by Proscillaridin A became significant in comparison with the vehicle group (* p < 0.05) at ~4 h for BxPC-3-KRAS G12V cells, and ~8 h for BxPC-3-KRAS WT cells. The higher apoptotic rate of BxPC-3-KRAS G12V cells was sustained throughout the experiment (Fig. 5e ). In contrast, the effects of Proscillaridin A treatment on BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells grown in the 2D format did not become apparent until ~24 h, and similar rates of apoptosis were observed for both cell types (Fig. 5f ). To determine whether the activity of Proscillaridin A, a well-characterized inhibitor of the Na + /K + -ATPase pump, could be explained by on-target activity, we examined the consequences of interfering with pump activity by knocking down ATP1A1, the alpha subunit of the transporter. Control or ATP1A1 small interfering RNAs (siRNAs) were transfected into both BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells, and after seeding into spheroid plates, ATP1A1 knockdown and viability were assessed after 48 h. The knockdown of ATP1A1 by siRNA was confirmed by western blotting (Fig. 6a ) and significantly reduced the viability of BxPC-3-KRAS G12V spheroids, but did not have a significant impact on the viability of BxPC-3-KRAS WT spheroids (Fig. 6c ). Interestingly, this selective effect was not replicated in 2D, where ATP1A1 knockdown (Fig. 6b ) did not reduce the viability of either KRAS WT or KRAS G12V cells (Fig. 6d ). These findings suggest the activity of Proscillaridin A is mediated through the inhibition of the Na + /K + -ATPase transporter. Fig. 6 Assessing the Na + /K + -ATPase as the potential target of Proscillaridin A.
a – d BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells were transfected with siRNA targeting ATP1A1 or control siRNA, and knockdown of the ATP1A1 subunit was confirmed by western blotting in cells grown in a 3D format or b 2D format. Viability of BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells transfected with siRNA targeting ATP1A1 or control siRNA and grown in c 3D format or d 2D format was determined at 48 h post-transfection using CTG3D or CTG, respectively. Statistical significance was determined by unpaired t -test. NS = nonsignificant, ** p < 0.0001. The data shown represent the mean of three independent experiments with triplicate data points in each. Error bars = S.D. Levels of e pAKT, AKT and f pERK1/2, ERK1/2 were determined in BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells grown in 3D format. Vinculin was used as a loading control Finally, we examined whether the effects of Proscillaridin A on two main effectors of survival and proliferation, AKT and extracellular signal-regulated kinases 1 and 2 (ERK1/2), could further explain the 3D-selective effects against mutant KRAS spheroids. Interestingly, Proscillaridin A treatment significantly decreased total AKT and ERK1/2 in both KRAS WT and KRAS G12V spheres relative to DMSO-treated controls, but there does not appear to be a selective reduction in the KRAS G12V cells (Fig. 6e, f ). Discussion In this report, we describe a new spheroid-based 3D screening platform optimized for HTS applications. We validated this screening platform and executed a proof-of-principle screen to identify small molecules that preferentially inhibit the viability of pancreatic adenocarcinoma cells harboring an oncogenic KRAS mutation. These efforts led to the identification of a cardiotonic glycoside, Proscillaridin A, as a potent and selective inhibitor of KRAS mutant cells. Importantly, assessment of Proscillaridin A in traditional 2D screening formats suggests that this molecule would not have been identified as a selective hit in a 2D assay, illustrating the utility of the spheroid-based 3D platform to uncover new biology. A major challenge for modern drug screening is the compromise among numerous factors, including cost, efficiency and accuracy. Indeed, the majority of cell-based screening platforms in current use rely on 2D monolayers, which are easy to implement and cost effective [ 23 ]. However, these traditional monolayer models have proven of limited value in predicting clinical response to novel agents [ 24 ]. In this respect, 3D-based cell culture offers a model that is thought to more closely reflect the environment experienced by cells in vivo [ 25 ]. In particular, 3D spheroid models are thought to better recapitulate features experienced by tumor cells in vivo such as cell–cell interactions, cell–matrix interactions, hypoxia, heterogeneity of tumors, drug penetration and drug resistance [ 26 , 27 , 28 , 29 , 30 ]. Several examples illustrate this point [ 31 ]. For example, previous studies demonstrate that treatment with fluorouracil (5-FU) has different outcomes depending on the assay format. Although tumor cells grown as a monolayer on polystyrene are highly sensitive to 5-FU, cells grown as 3D spheroids are more resistant. This is thought to be a reflection of the higher and more uniform proliferation of cells grown in 2D [ 32 ].
Another example includes the upregulation in expression of drug-metabolizing proteins in liver cells grown in 3D, which more closely reflects the expression of these proteins in vivo, compared with 2D [ 33 ]. Whether or not the BxPC-3 isogenic cell spheroid-based approach we have developed is indeed a more accurate predictor of an in vivo response remains to be established, and future studies aimed at validating the selectivity of Proscillaridin A toward KRAS mutant tumors in vivo are required. Regardless, the fact remains that this approach is useful in uncovering new biology that would have otherwise not been identified in traditional 2D formats, namely the selective activity of Proscillaridin A against BxPC-3-KRAS G12V spheroids. Proscillaridin A is a CG that inhibits the Na + /K + -ATPase, which functions as a transporter of K + and Na + into and out of the cell, respectively. This function is required for the maintenance of osmotic and ionic balance in all mammalian cells [ 34 ]. Several CGs (also known as cardiotonic steroids) have been identified as potent anti-neoplastic agents [ 35 ]. Indeed, this class of molecules is highly represented in the top hits of our screens, with members demonstrating variable levels of potency and selectivity. Proscillaridin A itself has been previously identified in screens for anti-neoplastic compounds against glioblastoma, osteosarcoma and colon cancer cell lines [ 36 , 37 , 38 ]. The reason(s) underlying Proscillaridin A’s selectivity toward KRAS mutated tumor cells in the 3D spheroid format are unclear. The finding that Proscillaridin A induces apoptosis earlier and at a higher rate in the BxPC-3-KRAS G12V spheroids compared with BxPC-3-KRAS WT spheroids suggests that ATP1A1 is required for cell survival, although the exact molecular mechanisms remain unknown. It is also possible that due to an increased proliferation rate, the BxPC-3-KRAS G12V cells are more dependent on the function of the Na + /K + -ATPase pump for proliferation. However, although the BxPC-3-KRAS G12V cells show a slightly increased proliferation rate compared with the BxPC-3-KRAS WT cells, this difference in proliferation is also observed when the cells are grown in 2D format, and Proscillaridin A is not selective in this format. Of note, although selectivity toward an oncogenic mutation has not been previously reported for Proscillaridin A, other CGs were reported to display selectivity in other contexts. In particular, digoxin, digitoxin and ouabain were identified in a screen for SL interaction with STK11 mutant cancer cell lines [ 39 ]. This study suggests the selectivity of the CGs could be attributed to increased cellular stress in the STK11 mutant cells. Specifically, the authors propose that CG treatment leads to increased levels of reactive oxygen species (ROS) and that cells deficient in STK11 are impaired in their stress responses. Thus, it is possible that the stress responses of the BxPC-3-KRAS G12V cells are impaired compared with BxPC-3-KRAS WT cells. Finally, although BxPC-3 cells are wild-type for KRAS , they harbor several other mutations, including an in-frame deletion in BRAF that results in activation of the kinase [ 40 ]. This suggests that other effector pathways downstream of KRAS might underlie the sensitivity to Proscillaridin A. Clearly, further studies will be required to determine the basis for Proscillaridin A’s selectivity against BxPC-3-KRAS G12V cells.
Materials and methods KRAS cell lines Human pancreatic epithelial carcinoma cells were purchased from the ATCC (American Type Culture Collection, Manassas, VA). These include BxPC-3 (ATCC#CRL-1687: human pancreatic epithelial carcinoma), AsPC1 (ATCC#CRL1682: human pancreatic adenocarcinoma), E6/E7 (ATCC#CRL4036: human pancreatic ductal cells–hTERT-HPNE-E6/E7 transformed), HPAFII (ATCC#CRL1997: human pancreatic epithelial adenocarcinoma), PANC1 (ATCC#1469: human pancreatic duct epithelioid carcinoma). Cell lines were authenticated by short tandem repeat DNA profiling (DDC Medical) and were tested every 3 months for mycoplasma contamination and confirmed free of contamination. To create an isogenic pair, the BxPC-3 pancreatic ductal adenocarcinoma cell line, which is wild-type for KRAS [ 29 ], was transfected with an expression plasmid for wild-type KRAS (BxPC-3-KRAS WT ) or KRAS G12V (BxPC-3-KRAS G12V ) and selected in hygromycin to generate stable clones expressing these alleles. Expression of the introduced alleles was confirmed by isolation of mRNA from the cells, reverse transcription and DNA sequencing. 3D cell culture and 3D luminescent proliferation assay Cells were originally grown and passaged using a 1:3 or 1:6 subcultivation ratio 2 or 3 times per week in standard tissue culture flasks using ATCC guidelines for culture methods. Upon harvest for adaptation to 3D spheroids, flasks were decanted, washed with 1 × phosphate-buffered saline (PBS) (part#14190, Thermo Fisher, Waltham, MA) and subsequently lifted using TrypLE (part#12604, Thermo Fisher). Cells were then suspended to the appropriate concentration for dispensing into Corning 384-well format 3D spheroid culture plates (part#3830, Corning Inc., NY). Cells were dispensed utilizing a Matrix Wellmate plate dispenser (Thermo Fisher, Waltham, MA) at 2500 cells per well in 20 μL. Plates were centrifuged (1250 RPM, 5 min) and incubated for 1 day at 37 °C, 95% relative humidity, 5% CO 2 . This allowed for spheroid formation, which was verified using a bright field microscope (Thermo Fisher). Upon verification of spheroids, test compounds or controls were transferred into the spheroid test plates using an automated BioMEK NXP Pintool. Plates were incubated for an additional 24 h (for a total of 48 h) under the same atmosphere and then treated with 20 µL per well of CellTiter-Glo 3D (part#G9683, Promega Corp., Madison, WI). Following a 30-min incubation at room temperature, luminescence was quantified on an EnVision plate reader (PerkinElmer Life Sciences, Waltham, MA). Luminescent apoptosis assay BxPC-3-KRAS G12V and BxPC-3-KRAS WT cells were seeded at a density of 2500 cells in 20 µL media per well into Corning 384-well spheroid plates for 3D analysis (part#3830, Corning Inc., NY) or white TC-treated 384-well plates for 2D analysis (part#789163-T, Greiner Bio-One, Monroe, NC) and incubated for 24 h at 37 °C, 95% relative humidity, 5% CO 2 . Test compounds or vehicle (final 0.2% DMSO) were added, followed by immediate addition of RealTime-Glo Annexin V Apoptosis and Necrosis reagent (part# JA1011, Promega Corp., Madison, WI). Luminescence signal was monitored over time for up to 24 h using a ViewLux plate reader (PerkinElmer Life Sciences, Waltham, MA). Confocal microscopy BxPC-3-KRAS G12V or BxPC-3-KRAS WT cells were grown as described above. Forty-eight hours post-seeding, spheroids were stained with Hoechst stain and incubated overnight.
The stained spheroids were transferred to a flat, clear-bottom plate, and cells were imaged on a GE IN Cell Analyzer 6000. To confirm the spheroidicity, multiple Z-stack images were taken at 10 μm increments and aligned in ImageJ to generate a composite intensity projection biased by color scale. Screening libraries To aid in target validation of the 3D KRAS assays, sub-libraries of pharmacologically active compounds, such as the Scripps-curated Spectrum Collection (2400 compounds from MicroSource Discovery Systems, Inc., Gaylordsville, CT), a collection of small molecules with pharmacologic activity against a broad range of targets, were implemented, as they are ideally suited to early-stage testing. Screening data acquisition, normalization, representation and analysis All data files were uploaded into the Scripps institutional HTS database (Symyx Technologies, Santa Clara, CA) for plate QC and hit identification. Activity for each well was normalized on a per-plate basis using the following equation: $$\%\,\mathrm{inhibition} = 100 \times \frac{\mathrm{Test\ well} - \mathrm{Median\ Low\ Control}}{\mathrm{Median\ High\ Control} - \mathrm{Median\ Low\ Control}}$$ (1) where “High Control” represents wells containing cells with media only, “Low Control” represents wells containing cells and DMSO, and “Test Wells” contain cells with test compounds. The Z’ and S:B (signal:background) were calculated using the High Control and Low Control wells. In each case, a Z’ value >0.5 was required for a plate to be considered acceptable [ 29 ]. ATP1A1 knockdown BxPC-3-KRAS G12V or BxPC-3-KRAS WT cells were transfected with either non-targeting control siRNA (Qiagen, SI03650318) or human pooled ATP1A1 siRNAs (GE Dharmacon, M-006111-02). After 24 h, cells were trypsinized, counted and seeded into (a) 96-well Corning Spheroid Microplates (10,000 cells/well, 100 μL) and (b) 384-well Corning Spheroid Microplates (2500 cells/well, 20 μL) for the spheroid experiments. Spheroid formation and culture conditions proceeded as described above. Forty-eight hours after seeding, (a) spheroids were collected from 96-well spheroid microplates and prepared for immunoblotting, and (b) CellTiter-Glo 3D was added 1:1 to spheroids in 384-well format and luminescence was measured on a PerkinElmer EnVision plate reader. For 2D experiments, cells were seeded into (a) 10 cm² dishes for immunoblots and (b) 384-well Corning CulturPlates (2500 cells/well, 20 μL). Forty-eight hours after seeding, (a) cells were collected and prepared for immunoblotting, and (b) CellTiter-Glo was added 1:1 to cells in 384-well format and luminescence measured as stated above. Percent inhibition was calculated using Eq. ( 1 ). Immunoblotting Spheroids were collected, washed twice with ice-cold PBS, and lysed on ice in RIPA buffer with dissolved protease inhibitors. Lysates were centrifuged at 14,000 RPM (4 °C) and supernatant protein concentration was quantified by modified Bradford assay (Bio-Rad Laboratories, Hercules, CA). Blots were probed with antibodies for ATP1A1 (D4Y7E, Cell Signaling Technology (CST), Danvers, MA), ATP1B1 (D6U8Q, CST), AKT (11E7, CST), phospho-AKT S473 (D9E, CST), ERK1/2 (137F5, CST), phospho-ERK1/2 (D13.14.4E, CST), tubulin (T5168, Sigma Aldrich, St. Louis, MO) or vinculin (V4505, Sigma Aldrich) and appropriate secondary antibodies (CST).
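Referring back to Eq. (1) and the Z' acceptance criterion above, the following R sketch shows how these plate-level statistics are typically computed. The well values are invented, and the Z'-factor shown is the standard definition of Zhang et al. (1999), which we assume is the one behind the Z' > 0.5 criterion; this is not the study's pipeline code.

```r
# Minimal sketch (invented RLU values): per-plate normalization as in Eq. (1)
# and the standard Z'-factor used for plate QC. With this normalization, the
# High Control defines the 100% inhibition signal and the Low Control the 0% signal.

high_ctrl <- c(1200, 1150, 1300, 1250)        # High Control wells (hypothetical)
low_ctrl  <- c(98000, 101000, 99500, 100500)  # Low Control wells (hypothetical)
test      <- c(55000, 92000, 20000, 99000)    # test wells (hypothetical)

pct_inhibition <- 100 * (test - median(low_ctrl)) /
                        (median(high_ctrl) - median(low_ctrl))

# Z'-factor (Zhang et al., 1999), computed from the control wells only:
z_prime <- 1 - 3 * (sd(high_ctrl) + sd(low_ctrl)) /
               abs(mean(high_ctrl) - mean(low_ctrl))

# Hit cut-off as described in the Results: the average of all samples tested
# plus 3 standard deviations (here computed over these few wells for illustration).
hit_cutoff <- mean(pct_inhibition) + 3 * sd(pct_inhibition)

z_prime > 0.5   # plate acceptance criterion
```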
Blots were developed using Amersham ECL and ECL Prime chemiluminescent developers (GE Healthcare Life Sciences, Boston, MA) and were exposed to X-ray film. Statistical analysis Statistical tests were performed using GraphPad Prism. The test used, number of samples and significance are indicated in the respective figure legends.
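For completeness, the unpaired t-tests cited throughout the figure legends can be reproduced outside GraphPad Prism; a minimal R equivalent on invented triplicates might look like the following (note that Prism's unpaired t-test assumes equal variances by default, hence var.equal = TRUE):

```r
# Unpaired t-test on invented triplicate % inhibition values,
# mirroring the comparisons reported in the figure legends (not the study's data).

inhib_g12v <- c(91, 94, 89)   # BxPC-3-KRAS G12V spheroids (hypothetical)
inhib_wt   <- c(8, 12, 10)    # BxPC-3-KRAS WT spheroids (hypothetical)

t.test(inhib_g12v, inhib_wt, var.equal = TRUE)  # classic unpaired t-test
```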
May 14, 2018—Cancer is a disease often driven by mutations in genes. As researchers learn more about these genes, and the proteins they code for, they are seeking smarter drugs to target them. The ultimate goal is to find ways to stop cancer cells from multiplying out of control, thereby blocking the growth and spread of tumors. Now researchers from The Scripps Research Institute are reporting an innovative new method to screen for potential cancer drugs. The technique makes use of tiny, three-dimensional ball-like aggregates of cells called spheroids. These structures can be used to interrogate hundreds or even thousands of compounds rapidly using a technique called high-throughput screening. In fact, by using this approach, the team has already identified one potential drug for an important cancer gene. The results were reported in the journal Oncogene. "What's important about this research is that we're able to do studies using a form of cancer cells that is more physiologically relevant and better recapitulates how these cells appear in the body," says Timothy Spicer, director of Lead Identification Discovery Biology and High Throughput Screening on Scripps Research's Florida campus and one of the study's corresponding authors. "Until now, most of the research to screen for cancer drugs has used cells that are growing flat on a plate," adds Louis Scampavia, director of HTS Chemistry and Technologies at Scripps Research and one of the study's co-authors. "With these 3-D spheroids, we emulate much more closely what's found in living tissues." The spheroids are 100 to 600 microns in diameter, equivalent to the thickness of a few sheets of paper. In contrast to single layers of cells normally used to screen for drugs, which tend to all grow at the same rate because they get the same exposure to oxygen and nutrients, the spheroids mimic what might happen in a tumor: Some cells are on the outside and some are on the inside. The researchers used a technique called confocal microscopy to confirm that the cell lines were forming spheres; the accompanying image shows the BxPC-3-KRAS G12V cell line. Credit: Kota et al./The Scripps Research Institute In the new paper, the researchers focused on a cancer-driving protein called KRAS. The KRAS gene and other members of the related RAS gene family are found to be mutated in nearly one-third of all cancers. They are common in lung cancer, colorectal cancer, and especially pancreatic cancer. In fact, up to 90 percent of pancreatic cancers are driven by KRAS mutations, and the investigators used pancreatic cancer cell lines for the current study. "In the past, KRAS has been a very tricky protein to target. People have spent several decades trying, but so far there has been little success," says Joseph Kissil, Ph.D., professor at Scripps Research Medicine and the other co-corresponding author. "The KRAS protein is relatively small, and that's made it hard to attack it directly. But the method of screening that we used in this study allowed us to come at the question in a different way." The investigators performed what is called a phenotypic screen, which means they were looking for drugs that had an effect on cell growth but didn't have a preconceived idea about how they might work. "We came at this in an unbiased way," Kissil explains. "We were not trying to design something to attack a specific part of the KRAS protein. We were just looking for something that acted on some part of the pathway that's driving cell growth."
The investigators report in the new paper that they have already identified one compound that was previously not known to affect KRAS, called Proscillaridin A. The compound is similar to a class of drugs used to treat some heart conditions. Although the team says this particular drug is unlikely to be developed as a cancer treatment, it validates the approach of conducting drug screenings using spheroids. "It's unlikely we would have discovered this connection using standard 2-D methods," Scampavia says. "From our perspective, this is a proof-of-principle study," Kissil adds. "It shows you can look at libraries of drugs that have already been approved for other diseases, and find drugs that may also work for cancer. In theory, you could use this screening method for any line of cancer cells, and any mutation you want." "We would love to use this research to create a pipeline for new oncology drugs," Spicer concludes. "Many of the most promising compounds may be overlooked with 2-D screening. This study provides direct evidence that screening for drugs using 3-D structures of cancer cells may be more appropriate."
10.1038/s41388-018-0257-5
Computer
New study assesses the impact of automation on long-haul trucking
Aniruddh Mohan et al, Impact of automation on long haul trucking operator-hours in the United States, Humanities and Social Sciences Communications (2022). DOI: 10.1057/s41599-022-01103-w
http://dx.doi.org/10.1057/s41599-022-01103-w
https://techxplore.com/news/2022-03-impact-automation-long-haul-trucking.html
Abstract Automated long haul trucking is being developed for commercial deployment in the United States. One possible mode of deployment for this technology is a “transfer-hub” model where the operationally less complex highway driving is automated, while human drivers drive the more complex urban segment of the route. We study the possible net impacts on tractor-trailer operator-hours from this mode of deployment. Using data from the 2017 Commodity Flow Survey, we gather information on trucking shipments and the operator-hours required to fulfill those shipments. We find that up to 94% of long haul trucking operator-hours may be impacted as the technology improves to operate in all weather conditions. If, however, the technology is restricted to the southern states where the majority of companies are currently testing automated trucking, we find that only 10% of operator-hours are impacted. We conduct interviews with industry stakeholders, including tractor-trailer operators, on the feasibility of such a system of deployment. We find that an increase in short haul operation is unlikely to compensate for the loss in long haul operator-hours, despite public claims to this effect by the developers of the technology. Policymakers should consider the impact of different scenarios of deployment on the long haul trucking workforce. Introduction Automated driving technology is currently being tested on public roads in the United States in both the light vehicle and heavy duty segments. Given the likely reduced operational complexity involved in highway driving, several companies are currently working on developing automation for long haul trucking, which is designed to work as per a “transfer-hub” model (Hickman et al., 2018 ; Transport Topics, 2018 ). This would involve an automated truck (AT) completing the highway leg of the route and human drivers undertaking the more complex suburban-urban segments at both the starting and end points of the journey. Truck ports near highways would be used to switch out the trailer from the prime mover and enable this switch at both ends. For a schematic of the possible transfer-hub model see Fig. 1 . Fig. 1: Schematic showing the possible operation of a transfer-hub model where a human-driven truck drives the load from the origin to the first truck port, where the load is switched to an automated prime mover shown in black. The automated truck completes the highway leg of the journey to another truck port, where a human-driven truck takes the load from the port to the final destination. The promise of ATs has led to widespread concern about job losses in long haul trucking, which is a common profession in the United States, particularly for men with high-school educations (Wertheim, 2020 ). On the other hand, it has also been noted, often by the companies developing this technology, that long haul trucking currently faces a labor shortage and that automation will create new short haul jobs, which will more than make up for the long haul jobs lost. As a result of these conflicting claims, as well as the uncertainty over the technology itself and its limitations, there is little clarity on how automated trucking will be deployed and its economic and political ramifications, such as the impact on the long haul trucking labor market. We use data from the Commodity Flow Survey (CFS) 2017 (United States Census Bureau, 2020 ), which is a dataset jointly produced by the U.S. Census Bureau, U.S. Bureau of Transportation Statistics, and the U.S. Department of Commerce.
CFS produces a sample of shipments in the United States, including data on the type, origin and destination, value, weight, modes of transportation, and distance shipped. We estimate the operator-hours required for different routes using origin and destination information for trucking shipments from the CFS dataset, which enables us to estimate highway and (sub)urban splits for each shipment. We consider the technological constraints of automated trucking to make more refined estimates of the possible near-term impacts of automation on long haul trucking operator-hours in the United States by assessing different scenarios of deployment. We find that, contrary to strong claims by companies developing this technology (Gilroy, 2019 ), the loss in long haul operator-hours is unlikely to be compensated for by an increase in demand for short haul drivers. We find large labor impacts from the deployment of the transfer-hub model, contingent on the pace of progress of automation technology’s ability to function in different operating conditions. We compare the capabilities of automation with the tasks truck drivers are required to perform through discussions with industry stakeholders, including long haul operators. This allows us to have a realistic assessment of the tasks the technology will be required to perform over the highway legs and to explore the feasibility of such deployment scenarios. In this framework, where jobs are considered as a bundle of tasks, our work builds on an extensive literature in the social sciences (Acemoglu and Autor, 2011 ; Acemoglu and Restrepo, 2019 ; Arntz et al., 2016 ). Through our limited sample of interviews, we discover reluctance among operators to shift to the new modes of operation, such as short haul driving, that would result from any application of the transfer-hub mode of deployment. The rest of the paper is structured as follows. Section “Literature review” reviews the existing literature on job impacts of ATs. Section “Methods” describes our methods for both the analysis of operator-hour impacts and the semi-structured interviews. Section “Results” presents the results and section “Conclusion and policy implications” concludes. Literature review Given the relatively recent advances in automated driving technology, there are a limited number of studies that have been conducted on the job impacts of automation on truck driver jobs. Viscelli ( 2018 ) has analyzed the impact of automation based on a similar “transfer-hub” operational model assumption shown in Fig. 1 , where highway driving is automated while human drivers perform the urban segments of the route. Using revenue data from major trucking carriers and estimates of average per-driver revenue, Viscelli ( 2018 ) estimates nearly 300,000 long haul jobs to be at risk. As per the analysis, these at-risk jobs are primarily concentrated in dry van and refrigerated trucking, which are characterized by high turnover and low wages (Viscelli, 2018 ). Similarly, Gittleman and Monaco ( 2020 ) have undertaken a study analyzing the potential job losses from automation. They also find that, contrary to several estimates in the media, the upper bound on job losses from automation is likely to be 400,000 jobs, far below popular estimates of 1–2 million jobs at risk. Gittleman and Monaco ( 2020 ) perform this analysis using 2002 Vehicle Inventory and Use Survey (VIUS) data to gather estimates of the operational use of heavy trucks.
They then apportion the number of long haul truck drivers using Occupational Employment Statistics data from the Bureau of Labor Statistics. Their study also notes that only a specific aspect of trucking—long haul highway driving—is ripe for automation and that operators who perform other tasks or are involved in customer-facing roles, such as in package delivery services, are unlikely to face job losses (Gittleman and Monaco, 2020 ). Groshen et al. ( 2019 ) consider different scenarios of adoption of automation and analyze the job impacts in different sectors of the economy using simulations and consultations with industry experts. They estimate 60–65% of heavy truck and tractor-trailer truck driving jobs to be eliminated with full implementation of automation (Groshen et al., 2019 ). Waschik et al. ( 2021 ) leverage a dynamic model of the U.S. economy to simulate the macroeconomic impacts of automated trucking. They consider three different speeds of adoption (slow, medium and fast) and find that employment in for-hire trucking falls by 20–25 percent and in private trucking by 4–5 percent across the scenarios considered (Waschik et al., 2021 ). Broader benefits in terms of productivity and employment in the economy are found to outweigh the employment impacts on long haul operators, and the study assumes that long haul operators will switch to short haul jobs, minimizing the overall effect on the workforce (Waschik et al., 2021 ). Finally, future-of-work studies around the deployment of automated vehicles have also delved into the issue of job losses in trucking from automation. For instance, Leonard et al. ( 2020 ) suggest that automation will create new roles such as remote management of trucks, dispatching, and field support while disrupting day driving jobs. However, they do not estimate the number or share of jobs at risk. We found no examples in the literature of analyses that used trucking routes and shipment data or examined the specific capabilities of the technology (e.g., only operating in favorable weather conditions) to estimate the labor impacts of automated trucking. We also found no studies that undertook interviews with stakeholders, including long haul operators, to understand their perspectives on the tasks required over the highway legs. Outside of the specific case of automated trucking, we also find a broad literature in economics on the impacts of automation on employment. Frey and Osborne ( 2013 ) model the probability of computerization for different occupations and find that workers in transportation and logistics are among those at high risk of automation, partly due to their low wages and low levels of educational qualifications. However, their methods, including the assumption that entire occupations rather than single tasks within jobs can be automated, have faced criticism. Arntz et al. ( 2016 ) have shown that jobs are a bundle of tasks and, while tasks may be easy to automate, jobs often are not. As such, forecasts of a large number of job losses from automation, such as the study by Frey and Osborne ( 2013 ), can overestimate the impact of automation. One of the questions that therefore emerges here is the composition of tasks involved in long haul trucking. We attempt to unpack this through our semi-structured interviews with truck operators.
Borland and Coelli (2017) study the impact of automation on employment in Australia and find that the total work available did not decrease following the introduction of computerization and that job turnover in the labor market has not increased due to computer-based technologies. Mokyr et al. (2015) have analyzed the history of automation and job losses, explaining that there has always been anxiety that technological progress will substitute machines for human workers, leading to unemployment. However, such scenarios tend not to come to pass, because the long-run effects of technology are beneficial in terms of net job creation (Autor, 2015); technology instead changes the type of jobs available and what they might pay. Technology can also complement labor, resulting in increased productivity, earnings, and demand for labor (Autor, 2015; Bessen, 2019). Nevertheless, perceptions of large job losses from automation are present in the social and political discourse. In terms of the perception of automation in trucking, Dodel and Mesch (2020) have shown how workers in occupations involving a greater number of manual or physical tasks, as in long haul trucking, can have more negative perceptions regarding the impact of automation on their livelihood. Orii et al. (2021) analyzed discussions related to automation among members of the r/Truckers subreddit and found that less than 1% of comments had positive views on automation.

Methods

We contribute to the literature by drawing on the CFS data (United States Census Bureau, 2020) to obtain a reliable estimate of the density of long haul trucking in different regions and of the operator-hours required for different routes. Our primary analysis centers on the use of freight data along with routing and operator-hour algorithms to estimate the share of operator-hours that may be lost to automation. We complement this quantitative analysis with a limited number of interviews with long haul trucking stakeholders to understand the feasibility of a transfer-hub mode of deployment. Overall, our mixture of quantitative and qualitative methods is based on the triangulation method (Jick, 1979), which is useful for analyzing socio-technical transitions and emerging technologies. We elaborate on our methods below.

Data

The Commodity Flow Survey is a well-known dataset for transportation planning and research, produced every five years by the U.S. Bureau of Transportation Statistics, U.S. Census Bureau, and the U.S. Department of Commerce. The latest iteration, CFS 2017, is a sample of 5.9 million shipments from approximately 60,000 responding establishments (United States Census Bureau, 2020). We disregard inter-modal shipments and focus on shipments delivered by for-hire trucks and private trucks. We only consider shipments routed over more than 150 miles, as those are commonly classified as long haul (FMCSA, 2020; Viscelli, 2018). This subset contains nearly 1.5 million trucking shipments detailing origin and destination states, shipment distance, weight, and financial quarter. The data also contain a weighting factor, which can be used to estimate the total number of shipments of that type in the population.

Routing

We draw on the Google Maps API and the GGMAP package (Kahle and Wickham, 2013) in R to estimate highway and (sub)urban splits for each shipment. For the purposes of routing, we categorize shipments in the dataset into two types: intrastate and interstate.
Intrastate shipments are those that do not cross state borders; interstate shipments are those where the state of origin differs from the state of destination. We apply a differentiated methodology to calculate the highway and (sub)urban splits for each shipment, depending on whether the shipment is within a state or across states and on whether the shipment has listed origin and destination Metropolitan Statistical Areas (MSAs). MSAs are listed for many but not all shipments in the dataset. If MSAs are not provided, we use the closest approximation of the origin or destination location. The different types of shipments and the methods used to calculate the highway and (sub)urban splits and the highway and urban average speeds are shown in Table 1.

Table 1 Truck routing calculation methods.

For interstate shipments we proceed as follows. Where possible, we use the origin and destination MSAs from the CFS dataset in Google Maps and estimate the highway and urban distance ratios (details provided in the next subsection) for those routes, which we then apply to the actual shipment distance from the dataset. To do this, we assume that the precise origin or destination is the centroid of the MSA. For shipments that specify either the origin or destination MSA (but not both), or that specify neither, we use the rest-of-state centroid, defined as the centroid of all areas of the state that are not listed MSAs. Note that this approximation affects the estimate of the highway and (sub)urban split but does not affect the distance of the shipment, which is provided in the CFS dataset. For intrastate journeys, we apply the same method for shipments that have specified origin and destination MSAs. For those that do not, we apply the highway and (sub)urban split derived from the Freight Analysis Framework dataset (Bureau of Transportation Statistics, 2012) by splitting the roads into those with average speeds below and above 50 mph.

Let the place of origin be designated $p_{o,i}$ and the place of destination $p_{d,i}$, where $i$ is a shipment. Then, consider a shipment from $p_{o,i}$ to $p_{d,i}$, where $p_{o,i}$ and $p_{d,i}$ are set as per the cases listed in Table 1. Where applicable, the Google Maps API provides us with detailed route directions, which list the amount of time driven for any stretch of road before the next turn, and so on. This allows us to calculate speeds for each section and then split the drive into segments with speeds greater than or equal to 50 mph (classified as highway) and below 50 mph (classified as urban or suburban). Note that the route suggested by Google Maps may differ depending on the time of day that the API request is sent. We therefore ran several iterations of the routing algorithm at different times of day and found no discernible difference in our results. Let the highway segment of this journey be $h_i$, the urban sections $u_i$, and the origin-destination distance $d_i$. The highway-to-total ratio $r_i$ is then defined as:

$$r_{i}=\frac{h_{i}}{d_{i}}$$ (1)

We then use these calculated ratios for each origin-destination combination and apply them to the actual shipment distance from the CFS dataset. This allows us to calculate the highway and urban leg lengths $D_{S,H,i}$ and $D_{S,U,i}$ for the shipments in the dataset. Let the shipment distance be $D_{S,i}$. Then,

$$D_{S,H,i}=D_{S,i}\,r_{i}$$ (2)

and

$$D_{S,U,i}=D_{S,i}\,(1-r_{i})$$ (3)
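To make the segment classification concrete, the following is a minimal Python sketch of how a highway-to-total ratio can be computed from Directions-style route steps. The authors used the Google Maps API through R's GGMAP package; the Python client, the per-step 50 mph threshold, and the example places below are illustrative assumptions, not the authors' code.

```python
# Sketch: estimate r_i from Google Maps Directions steps by classifying
# each step as highway (>= 50 mph) or (sub)urban (< 50 mph).
import googlemaps

HIGHWAY_MPH = 50.0
METERS_PER_MILE = 1609.34

def highway_ratio(gmaps: "googlemaps.Client", origin: str, destination: str) -> float:
    """Return r_i = highway distance / total distance for the suggested route."""
    route = gmaps.directions(origin, destination)[0]  # first suggested route
    highway_m = total_m = 0.0
    for leg in route["legs"]:
        for step in leg["steps"]:
            dist_m = step["distance"]["value"]        # step length in meters
            dur_s = step["duration"]["value"]         # step time in seconds
            if dur_s == 0:
                continue
            mph = (dist_m / METERS_PER_MILE) / (dur_s / 3600.0)
            total_m += dist_m
            if mph >= HIGHWAY_MPH:
                highway_m += dist_m
    return highway_m / total_m if total_m else 0.0

# Applying the ratio to the CFS shipment distance D_S,i (eqs. 2 and 3):
# gmaps = googlemaps.Client(key="YOUR_API_KEY")      # hypothetical key
# r = highway_ratio(gmaps, "Phoenix, AZ", "Dallas, TX")
# d_highway, d_urban = d_shipment * r, d_shipment * (1.0 - r)
```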
Operator-hours calculation

The final step involves the calculation of urban and highway operator-hours. We assume the urban legs are equally split between the two ends of the journey, with the highway leg in between. We apply a constraint of 11 h of daily driving as per hours-of-service (HOS) regulations (FMCSA, 2020) and then calculate the operator-hours required for the highway and urban legs of the journey. Using this information and the aforementioned weighting factor, we are then able to calculate the total operator-hours as well as the shares of highway and urban operator-hours. Let $day_{1,i}$ be the number of hours that can still be driven on day 1 of the trip after completing the initial urban leg. Let $O$ denote operator-hours, for both the highway leg $O_{H,i}$ and the urban leg $O_{U,i}$. Let highway and urban driving times be $T_{H,i}$ and $T_{U,i}$, respectively, which can be calculated from the average velocities $V_{H,i}$ and $V_{U,i}$ for the respective segments, also derived from the Google Maps API where applicable. Then for shipment $i$:

$$T_{U,i}=\frac{D_{S,U,i}}{V_{U,i}}$$ (4)

and similarly

$$T_{H,i}=\frac{D_{S,H,i}}{V_{H,i}}$$ (5)

Our algorithm to estimate the operator-hours is described below. Note that $\lceil x \rceil$ denotes the ceiling of $x$ and $x \,\%\, y$ denotes the remainder of $x$ when divided by $y$.

Algorithm 1 (for each shipment $i$ from 1 to $I$):

$day_{1,i} = 11 - T_{U,i}/2$
if $day_{1,i} \ge T_{H,i}$ then
  $O_{H,i} = T_{H,i}$
else
  $O_{H,i} = T_{H,i} + \lceil (T_{H,i} - day_{1,i})/11 \rceil \times 10$
  if $(T_{H,i} - day_{1,i}) \,\%\, 11 + T_{U,i}/2 > 11$ then
    $O_{U,i} = T_{U,i} + 10$
  else
    $O_{U,i} = T_{U,i}$
  end if
end if

The algorithm can be explained as follows. If the highway driving time is less than the number of driving hours remaining on day 1, then the shipment is simply completed that day and the highway operator-hours equal the highway driving time. If the highway driving time exceeds this, the driver completes the journey over the following days, with 10 h of rest following each 11 h of driving as mandated by law; these rest hours are counted in the highway operator-hours. The urban operator-hours are simply the urban driving time if the second and final urban segment can be completed within the HOS requirements; otherwise it is completed after a further rest period.
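A minimal Python rendering of Algorithm 1 follows. It fills in one detail the pseudocode leaves implicit (the urban operator-hours in the single-day branch) and reads the modulo term as the remainder of the post-day-1 highway time divided by 11; both are our assumptions about intent, so treat this as a sketch rather than the authors' implementation.

```python
import math

DAILY_DRIVE_H = 11.0   # FMCSA hours-of-service daily driving limit
DAILY_REST_H = 10.0    # mandated rest between driving days

def operator_hours(t_highway: float, t_urban: float) -> tuple[float, float]:
    """Highway and urban operator-hours for one shipment, with the
    urban leg split evenly between the two ends of the trip."""
    day1 = DAILY_DRIVE_H - t_urban / 2.0        # driving hours left on day 1
    if day1 >= t_highway:
        return t_highway, t_urban               # trip completed on day 1
    extra_days = math.ceil((t_highway - day1) / DAILY_DRIVE_H)
    o_highway = t_highway + extra_days * DAILY_REST_H    # rest time counted
    last_day_drive = (t_highway - day1) % DAILY_DRIVE_H  # hours on final day
    if last_day_drive + t_urban / 2.0 > DAILY_DRIVE_H:
        return o_highway, t_urban + DAILY_REST_H         # extra rest needed
    return o_highway, t_urban

# Example: 2 h of urban driving (1 h at each end) and 25 h on the highway
# print(operator_hours(25.0, 2.0))   # -> (45.0, 2.0)
```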
With the calculated urban and highway operator-hours for each trip, we can then estimate the total operator-hours across both highways and urban areas using the trip weighting factor provided by the CFS dataset. The weighting factor is the estimate of the true number of trips of that type in the actual population and is available for each shipment in the CFS dataset. Let the weighting factor be $\Pi_i$ and the shipment weight $W_i$. Then for the total operator-hours $O_{\mathrm{Total}}$ we have:

$$O_{\mathrm{Total}}=\sum_{i=1}^{I}\left(O_{H,i}+O_{U,i}\right)\Pi_{i}\,\frac{W_{i}}{\mathrm{TL}}$$ (6)

where TL is the truckload, i.e., the total weight that can be carried on one fully loaded semi truck. The highway and urban shares of the total operator-hours, HS and US, are then simply:

$$\mathrm{HS}=\frac{\sum_{i=1}^{I}O_{H,i}\,\Pi_{i}\,W_{i}/\mathrm{TL}}{O_{\mathrm{Total}}}$$ (7)

$$\mathrm{US}=\frac{\sum_{i=1}^{I}O_{U,i}\,\Pi_{i}\,W_{i}/\mathrm{TL}}{O_{\mathrm{Total}}}$$ (8)

Note that the highway share (HS), across both interstate and intrastate trucking, is the share of operator-hours at risk from automated highway trucking, while US represents the share of hours that must still be driven by a human driver. Notice that if the truckload is constant, as for fully loaded Class 8 semi trucks, it cancels in both the numerator and denominator of equations (7) and (8) and is therefore irrelevant to our results. More information on our methods, including the limitations of our approach, is provided in Supplemental Information (SI) Section 1.
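In code, the shares in equations (6)-(8) reduce to weighted sums; a short NumPy sketch follows (the array names and the illustrative truckload value are ours):

```python
import numpy as np

def hour_shares(o_hwy, o_urb, pi, w, truckload=45000.0):
    """Highway and urban shares of total operator-hours (eqs. 6-8).
    pi is the CFS weighting factor and w the shipment weight; the
    truckload TL (an illustrative 45,000 lb here) cancels out of the
    shares whenever it is constant across shipments."""
    trips = np.asarray(pi) * np.asarray(w) / truckload
    o_hwy, o_urb = np.asarray(o_hwy), np.asarray(o_urb)
    total = np.sum((o_hwy + o_urb) * trips)     # O_Total, eq. (6)
    return np.sum(o_hwy * trips) / total, np.sum(o_urb * trips) / total
```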
Interviews

To obtain some assurance about the validity of the assumptions underlying the transfer-hub model, we undertook semi-structured interviews with stakeholders in the trucking industry using a purposeful sampling methodology (Robinson, 2014), formulated through the authors' prior work in the automated vehicle domain (Mohan et al., 2020) as well as prior informal conversations with companies and researchers in this area, which helped us identify relevant questions and stakeholders. For our conversations with drivers in particular, we used snowball sampling (Naderifar et al., 2017): we identified an initial set of drivers who have a public profile (e.g., podcasts or YouTube channels about trucking) and asked them to introduce us to their colleagues. We stopped when we achieved data saturation (Guest et al., 2006), that is, when conversations with new drivers did not introduce us to new concepts or phenomena (Hennink et al., 2017). We spoke with stakeholders across automated trucking startups (2), truck drivers (5), trucking logistics operators (1), and labor union representatives (1). In selecting interviewees, we deliberately sought to elevate the voices of truck drivers in our sample relative to other actors such as automated trucking startup CEOs or logistics operators, for two reasons. First, operators were best placed to describe the operational challenges and opportunities of the transfer-hub model of automation, given that they are currently in charge of the major task that automation may replace (driving). Second, much of the narrative and coverage around automated trucking in the popular media has focused on the claims made by private operators, without much consideration of whether long haul drivers themselves believe that a switch to automation is feasible. Note that our sample size was not designed to enable generalization to all stakeholders in long haul trucking. Our sample size and qualitative method (semi-structured interviews) were instead selected with an idiographic approach (Robinson, 2014), focused on gathering detailed insights into the tasks truck drivers perform on a journey. The interviews also highlight interesting areas for future research. Most importantly, as part of the triangulation method (Jick, 1979) we use to analyze automated trucking, the interviews complement our quantitative analysis of the CFS data and the routing and operator-hour algorithms by providing a feasibility check on the deployment modes assumed in this paper and promoted by technology companies. The full list of interviewees is provided in Table 2.

Table 2 List of interviewees.

Results

Our analysis finds that up to 94% of operator-hours for truck drivers are impacted if the technology is deployed across the continental U.S. in all conditions. However, if deployment is restricted to the states where testing is currently taking place, only 10% of operator-hours are impacted. The capabilities of the technology and decisions around where and how AT should be deployed will therefore determine the extent of impacts on the long haul operator labor market. Below, we first discuss the findings from our exploration of possible scenarios of deployment and their associated impacts on operator-hours, if realized. We analyze the extent of the possible increase in short haul jobs if AT delivers cost and time savings in freight delivery, and show that this is unlikely to outweigh the hours lost to automation. Then, we present the takeaways from our semi-structured interviews with stakeholders in long haul trucking. Finally, we end with a brief discussion of the labor impacts in sectors associated with long haul trucking.

Scenarios

We consider different scenarios of deployment, which correspond to constraints the technology may face in the near to medium term (see the filtering sketch after this list). They are as follows:

1. Deployment in southern, sunny states only. To our knowledge, significant AT testing currently takes place in Florida, Texas, and Arizona. We hypothesize that initial highway deployment will be restricted to states in the southern sun-belt (see Fig. 2) to minimize the risk of exposure to snow or hail, which may be outside the limits of safe operation of the technology, and we therefore only consider routes between or within these states.

2. Deployment across all states during financial quarters Q2 and Q3, which encompass the spring and summer months from April 1 to September 30. The CFS dataset lists the financial quarter of each shipment, allowing us to assess a scenario where all journeys that would be performed in more favorable weather are automated.

3. Deployment for journeys above 500 miles only. While AT may be financially optimal at even short distances of around 100 miles (see SI Section 2), the time required for switching trailers and separating urban and highway legs could mean that journeys that currently cannot be performed in a single day under HOS requirements will be automated first. Long haul operators can comfortably cover 500 miles without stopping, so we set this as the threshold.

4. Widespread deployment: transfer-hub-based AT across the United States and automation of all highway driving. While this is likely to take several years, it can be thought of as the most extreme scenario in terms of job losses. Note that most studies in the literature (Gittleman and Monaco, 2020; Viscelli, 2018) only consider this scenario.
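The scenarios above map onto simple filters over the shipment records. The sketch below illustrates the cumulative logic with pandas; the column names and the exact membership of the sun-belt set are hypothetical stand-ins for the CFS fields and the states shaded in Fig. 2.

```python
import pandas as pd

SUN_BELT = {"FL", "TX", "AZ"}   # placeholder; Fig. 2 shades the full set

def automatable(df: pd.DataFrame, scenario: int) -> pd.Series:
    """Boolean mask of shipments whose highway legs are automatable
    under each cumulative deployment scenario (1-4)."""
    mask = df["orig_state"].isin(SUN_BELT) & df["dest_state"].isin(SUN_BELT)
    if scenario >= 2:
        mask |= df["quarter"].isin([2, 3])        # spring/summer shipments
    if scenario >= 3:
        mask |= df["distance_mi"] > 500           # long journeys first
    if scenario >= 4:
        mask |= True                              # widespread deployment
    return mask
```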
Fig. 2: Highlighted southern sun-belt states where a transfer-hub model may first be deployed.

Operator-hours

Figure 3 shows the impact on operator-hours across the different scenarios considered, with each scenario treated as cumulative with the previous one.

Fig. 3: The impact on operator-hours as each deployment scenario becomes feasible, cumulatively building on the previous deployment. Starting with only a 10% impact on operator-hours, automation can eventually put 94% of current long haul operator-hours at risk.

In scenario 4, with widespread deployment across the continental United States, 94% of current operator-hours may be automated. On the other hand, we find that deployment of automation only in places where companies are currently testing the technology, i.e., the southern states, limits the impact to just 10% of operator-hours (scenario 1). These estimates could represent a potential trajectory, over time, for impacts on employment as the technology improves. Near-term deployment, if any, is likely to occur in favorable weather conditions and in states with favorable regulation, precisely those states where testing is currently taking place. If deployment is then further expanded to all parts of the country, but only in the favorable weather months, i.e., financial quarters Q2 and Q3, then we find that over half of operator-hours could be impacted. If automation is then extended to all shipments over 500 miles, a further 33% of operator-hours are impacted. Other possible deployment scenarios and their impacts are shown in Table S1.

Changes in demand and price elasticity of freight

Automation is likely to dramatically reduce the need for labor, which constitutes roughly 40% of the cost of trucking (Williams and Murray, 2020). It is also likely to eliminate HOS requirements, which currently mean that, unless two drivers work in tandem, a truck remains idle more than 60% of the time in order to allow the driver to rest (FMCSA, 2020). As a consequence, automation will reduce the amortized capital cost of the truck, which constitutes another 16% of the cost of trucking (Williams and Murray, 2020). Finally, the elimination of HOS requirements will make trucking faster than it is today, since trucks will no longer have to stop for driver breaks. Cost and delivery-time reductions in the delivery of freight through automated trucking may result in increased demand for trucking shipments. This may happen in two ways. First, freight that was previously routed through air, train, or inter-modal services may be shifted to long haul trucking; the modal choice of freight could therefore tilt in favor of trucking. There is considerable variation in modal elasticity estimates in the literature (De Jong et al., 2010). For example, Abdelwahab (1998) estimated elasticities of −1.44 and −0.99 for truck modal choice with respect to increases in trucking shipment time and cost, respectively. More recently, Christidis et al. (2009) reviewed a number of studies and found cross-elasticities of rail and road ranging from 0.3 to 2. Dahl (2012) found limited elasticity of trucking demand with respect to fuel costs. Second, cost and time reductions in freight delivered through trucking may increase demand for freight services in the economy overall. This increase in demand would increase the overall operator-hours required to fulfill long haul trucking shipments.
While elasticities vary widely in the literature for these factors, as described above, we find that any resultant increase in demand for freight from some or all of these factors is likely to have a small impact on operator-hours. Further, this increase is unlikely to offset the overall operator-hours at risk due to automation. For example, even a 50% increase in demand for trucking services, which translates to an overall elasticity of 5, would only offset 5% of the at-risk hours due to automation, dropping the overall share from 94% to 89% of operator-hours at risk. This is simply because, even though such an increase would lead to greater demand for trucking services, the large majority of all operator-hours would still be needed on the highway (Fig. 3). A large increase in demand of the magnitude of 50% or more for trucking services is in fact unlikely. Trucking already dominates the freight market, with 70% of tonnage in the United States shipped through trucks (Association, 2020a). Freight that is today delivered through competing modes such as rail often moves that way for reasons other than cost or shipment time, for example, gross weight requirements or a close coupling between the location of rail tracks and the siting of various industrial facilities. Overall, we find limited evidence in support of the claim that increases in demand for trucking due to the economic and productivity gains from automation will create short haul jobs that offset the highway operator-hours lost to automation.

Jobs

What do these impacts on operator-hours imply for jobs? It is important to note that we do not directly characterize job losses from automation in trucking, focusing instead on the share of operator-hours that will be impacted. This is because data on the number of long haul truck drivers in the United States are not available to a high level of accuracy, owing to the large number of owner-operators. Estimates have a wide range, from a few hundred thousand to millions of jobs. Studies that attempt to put a number on the total jobs lost therefore run the risk of large errors and are often not comparable, due to differing assumptions about the baseline number of jobs. Nevertheless, translating the share of operator-hours impacted to different estimates of the number of long haul trucking operators in the United States can provide some insight. Previous analyses have estimated the number of long haul operators to be between 300,000 and 400,000 (Gittleman and Monaco, 2020; Viscelli, 2018), while Waschik et al. (2021) estimate roughly 550,000 long haul operators in their modeling of the macroeconomic impacts of automated trucking. Our results on the share of operator-hours at risk would therefore mean that anywhere from 30,000 to more than 500,000 jobs may be impacted, depending on the scenario: for example, 10% of 300,000 operators is 30,000 jobs, while 94% of 550,000 is roughly 517,000.

Impacts on workers in associated sectors

Finally, automation of highway trucking will have impacts on more than just tractor-trailer operators. While outside the focus of this paper, we offer a brief discussion here. First, automation of highway trucking could obviate the need for truck stops on the highway, or at least make them far less frequented than before. Employment associated with operating truck stops will therefore be impacted; these truck stops currently employ about 70,000 people (see SI Table S2). On the other hand, the creation of truck ports to facilitate the transfer-hub model of AT will likely create new jobs.
These will possibly involve new tasks such as switching trailers between the human-driven and automated prime movers, offering services to human operators that were previously offered at truck stops, and maintenance and safety checks of sensors and other equipment on board automated trucks before their deployment on a route. To a first approximation, our analysis indicates that it is possible that the labor-hours lost at truck stops and other locations on highways could be compensated by new employment opportunities at transfer-hub ports (see SI Section 4 for more details). However, it is unclear whether workers in existing jobs at truck stops will be interested in or qualified for the new jobs that may arise from the deployment of AT.

Discussion with stakeholders

Scholars note that, typically, only some of the tasks that constitute a job are amenable to automation (Arntz et al., 2016; Autor, 2015). As such, tasks are easy to automate; jobs often are not. The existing literature has noted how truck driving jobs have increasingly been reduced to solely the task of driving (Viscelli, 2016, 2018). While this supports automation, one of the major challenges we envisaged for AT deployment on highways is maintenance and repairs. However, upon speaking with actors in the trucking industry (interviews 3–9), we found that employed truck drivers do not perform any significant maintenance and repairs on their trucks. Instead, drivers simply call for assistance, and trucking companies send out repair teams or arrange for a repair appointment at the nearest service station. ATs will therefore need to be able to send out distress signals and get assistance when needed. While some drivers indicated that they do perform some maintenance on their trucks (interviews 7, 9), this was restricted to minor fixes such as a broken headlight. Drivers often indicated that their companies did not want them to try to repair problems with the truck but to rely on expert help instead. Some trucking jobs are union jobs, and in those cases the drivers often have clauses in their contracts that restrict them to driving only, without having to perform work such as maintenance and unloading (interview 2). The benefits of automation in terms of shorter trip times and lower costs could quite easily and quickly be incorporated into trucking operations and logistics, according to our interview with a senior manager at a logistics firm (interview 4). AT startup executives acknowledged that both weather and lighter regulation motivated their decision to test in states such as Florida and Texas (interviews 1, 3). This raises questions regarding the widespread application of this technology to all parts of the continental United States. Interestingly, every operator we spoke to (interviews 5–10) said that they could not see any major barriers to ATs performing highway journeys. Several of them (interviews 5, 9, 10) highlighted the difficulties posed by inclement weather, places where lane markings are absent, and routing if the global positioning system (GPS) signal is lost, which could mean that some routes continue to be driven by human drivers. All drivers we spoke to try to use their maximum allowed driving time of 11 h in a day. They also indicated that there are often several weeks when they do not have any direct contact with their employers and simply perform their tasks as assigned. All drivers also indicated that their trucks are tracked with GPS, so companies and customers are continuously aware of their location.
Our discussions indicate that the job of long haul trucking has indeed effectively been reduced to a single task, which, given conducive external conditions, makes it amenable to automation. We further offer some propositions for future research to explore. Owing to the limitations of our sample size, the lack of segmentation between different types of drivers, and no consideration of potential confounders, these are not intended as conclusions.

Proposition 1: A larger volume of shorter trips may not compensate for the loss of work associated with automated long haul trips.

A transfer-hub model will require drivers to shift to short haul jobs. The most common pay structure currently for "truckload" drivers, who haul full truckloads of generic containerized freight for a trucking company (and not for a shipper like Walmart or Target), is payment per mile of haulage. They are often not paid for the waiting and paperwork that occur at the beginning and end of each trip, so truckload drivers seek to maximize the time they keep the cargo moving. A shift to shorter trips would increase the ratio of stationary (unpaid) to driving (paid) time, reducing their wages per hour worked (Viscelli, 2016). In our limited sample, there was near consensus among interviewees that the shorter trips and lower pay that may come with urban driving jobs will be unattractive (interviews 5–9). Therefore, one of the interesting questions for future qualitative research based on a larger sample would be to understand operators' views on shifting to short haul jobs and whether these might hinder or accelerate a shift to the transfer-hub model.

Proposition 2: Transfer-hub deployment could create short haul jobs in locations that are different from where long haul truckers currently live.

A shift to only urban driving will likely require operators to live in (sub)urban areas. At least one operator we spoke with expressed reluctance to shift to short haul trucking for this reason alone (interview 9). Around 40% of older truckers come from rural areas, so it is possible that the geographical shift will prove a barrier to transitioning current operators to short haul jobs; this also speaks to the cultural significance long haul trucking jobs have carried in the U.S. (Levy, 2015). However, these new jobs may prove attractive to new truckers joining the workforce, who are increasingly from urban areas (Cheeseman Day and Hait, 2019). Again, this is an important question for future work and will require careful study. A shift of trucking employment from rural to urban areas will naturally have political implications, particularly given the existing rural–urban divide in the American political landscape (Thiede et al., 2017).

Proposition 3: Partial automation is viewed negatively by heavy truck operators, as previous studies have suggested (Slowik and Sharpe, 2018).

Partial automation systems have also been criticized in the broader safety literature, as they may lead to disengaged and distracted operators who are too quick to trust the technology and will be unable to react in a timely and safe manner should something go wrong (Endsley and Kiris, 1995). Many companies involved in self-driving technology are in fact skipping partial automation systems and focusing solely on full automation (Ayre, 2017; Naughton, 2017; Volvo, 2017).
Economically, if companies still have to pay drivers for their labor and also pay for the automation system, it is difficult to see how this would be attractive compared to the current system of only paying drivers, unless the cost savings from increased safety were significant. All drivers in our limited sample (interviews 5, 7–10) expressed dislike for partial automation systems that they have used or experienced, such as lane assist and emergency braking. How might views on such technologies differ depending on the age and experience of truck drivers? We believe that future work on the transition from human-driven to automated trucks, and on the potential role of partial automation systems as a bridge technology, must take into account the views of drivers, as they will be the primary users of such technologies.

Conclusion and policy implications

Automation of the major part of the job, in this case highway driving, will naturally put downward pressure on wages in the long haul trucking industry. It is unclear that the labor supply will easily adjust to the new prevailing wage and operating requirements (short haul jobs) in the market. Our limited number of interviews certainly highlights the challenges employers may face in transitioning long haul operators to different jobs such as short haul driving. It also suggests that the deployment of AT is being driven by techno-economic considerations alone, with limited understanding of the social consequences, consistent with the broader narrative around automated mobility technologies (Bissell et al., 2020). Although companies have claimed that such technologies will benefit truck drivers, our evidence does not suggest that the motivations of truck drivers are part of the designed operation of this technology. Moving away from industry-led visions of AT futures will require a greater understanding of the motivations and interests of long haul operators, and a participatory approach to shaping AT deployment (Mladenović, 2019). In the currently envisioned transfer-hub model, short-term adjustment costs are likely and potentially notable. As we show, a significant share of operator-hours will be affected if the technology is deployed in all conditions and locations. Further, we argue that this result is robust to increases in demand for freight delivered through trucking if, as assumed, the cost of long haul trucking falls due to automation. We caution, however, that the potential loss of a significant share of operator-hours to automation need not be viewed as permanent unemployment or as a permanent welfare loss. Long haul trucking has been characterized by turnover rates of nearly 100% in recent years (Association, 2020b). The profession is increasingly unattractive to potential new entrants, with most new operators lasting less than a year in the job. This has occurred in substantial part through a concerted effort to make trucking cheaper by paying drivers less, for example, by encouraging many drivers to operate as independent contractors (Viscelli, 2016). Wages may need to increase as these arms-length employment arrangements are challenged in court and as it becomes increasingly difficult to find new drivers. This dynamic may strengthen the economic case for automation.
Historically, technological change has resulted in short-term employment shocks, but realignment in the labor market means that these shocks have limited impact on the broader economy in the long run, as new industries grow and workers transition to new jobs with new skill requirements (Bessen, 2019). Long haul operators may therefore move to different sectors after a period of unemployment, some may transition to lower-paying short haul jobs created by the transfer-hub model, and others may retire prematurely (Waschik et al., 2021). The sharp reduction in labor cost makes the economics of ATs compelling but will disrupt livelihoods and, by potentially shifting demand from rail to trucks, will likely also increase emissions of greenhouse gases and other air pollutants (Kaack et al., 2018). The threat of jobs lost to automation in trucking may also have profound political impacts; the existing literature has found increased support for radical right-wing parties as the risk of automation increases (Im et al., 2019). Policymakers could demand that, in exchange for permission to deploy ATs on public roads, truck operators re-invest some of the monetary benefits of reduced labor costs in ameliorating the disruption to employment and in reducing the environmental footprint of the trucking industry (Viscelli, 2020). Ultimately, societal and political choices can determine the mode of deployment of AT capabilities and, accordingly, the winners and losers of any shift to automation of long haul trucking.

Data availability

The Commodity Flow Survey (2017) dataset used in this paper is publicly available from the U.S. Census Bureau.
As automated truck technology continues to be developed in the United States, there are still many questions about how the technology will be deployed and what its potential impacts will be on the long-haul trucking market. A new study by researchers at the University of Michigan and Carnegie Mellon University assessed how and where automation might replace operator hours in long-haul trucking. They found that up to 94% of operator hours may be impacted if automated trucking technology improves to operate in all weather conditions across the continental United States. Currently, automated trucking is being tested mainly in the Sun Belt. "Our results suggest that the impacts of automation may not happen all at once. If automation is restricted to Sun Belt states (including Florida, Texas and Arizona)—because the technology may not initially work well in rough weather—about 10% of the operator hours will be affected," said study co-author Parth Vaishnav, assistant professor of sustainable systems at the U-M School for Environment and Sustainability. Using transportation data from the 2017 Commodity Flow Survey, which is produced by the U.S. Bureau of Transportation Statistics, U.S. Census Bureau and U.S. Department of Commerce, the study authors gathered information on trucking shipments and the operator hours used to fulfill those shipments. In addition, they explored different automated trucking deployment scenarios, including deployment in southern, sunny states; deployment in spring and summer months (April 1 to Sept. 30); deployment for journeys more than 500 miles; and deployment across the United States. "Our study is the first to combine a geospatial analysis based on shipment data with an explicit consideration of the specific capabilities of automation and how those might evolve over time," said co-author Aniruddh Mohan, a doctoral candidate in engineering and public policy at Carnegie Mellon. The study was published online March 15 in the journal Humanities and Social Sciences Communications. Long-haul trucking is generally defined as transport that covers more than 150 miles. Several companies are currently working on developing automation for long-haul trucking that is designed to work as a "transfer hub" model. It would involve an automated truck completing the highway leg of the route and human drivers undertaking the more complex suburban-urban segments at both the starting and end points of the journey. Truck ports near highways would be used to switch out the trailer from the prime mover and enable this switch at both ends. Labor accounts for about two-fifths of the cost of trucking, so deploying automated technology will be seen as an attractive option for trucking companies to save money, said Vaishnav. However, there are concerns about the potential job losses for workers. "Because trucking is viewed as one of the few jobs that give folks with a high school education the chance to make a decent living, there is a concern that automation will eliminate these jobs," he said. "Some people worry that all or most of the million or more trucking jobs might be lost. "In terms of numbers, our analysis showed that automation could eliminate a few hundred thousand jobs (as opposed to a million or more), but there is plenty of evidence to suggest that for most people these are fleeting, poorly paid and unpleasant jobs. 
We think that it is possible that the number of operator hours lost at truck stops, because automated trucks will have no drivers who need to be served at truck stops, could be compensated by new employment opportunities at transfer hub ports." The researchers also analyzed whether automated trucking could lead to an increase in short-haul driving jobs, which involve transporting shipments within a 150-mile radius, and determined that the operator hours lost to the automation of long-haul trucking would not be made up, in either quantity or quality, by short-haul driving work. Short-haul jobs typically pay less than long-haul jobs, the study noted, creating the potential for a reduced livelihood for workers. "We found that an increase in short-haul operation is unlikely to compensate for the loss in long-haul operator-hours, despite public claims to this effect by the developers of the technology," Vaishnav said. "As a result of these conflicting claims, as well as the uncertainty over the technology itself and its limitations, there is little clarity on how automated trucking will be deployed and its economic and political ramifications, such as the impact on the long-haul trucking labor market. We hope to help resolve these controversies." As part of their study, the researchers conducted interviews with trucking industry stakeholders, including tractor-trailer operators, to determine the feasibility of automated trucking deployment. "A key finding was just how economically attractive this technology would be and the fact that everyone, including truckers, agreed that the interstate part of the job could be automated," Vaishnav said. "Ultimately, societal and political choices can determine the mode of deployment of automated trucking capabilities, as well as the winners and losers of any shift to automation of long-haul trucking."
10.1057/s41599-022-01103-w
Medicine
Lateral inhibition keeps similar memories apart
Claudia Espinoza et al., Parvalbumin+ interneurons obey unique connectivity rules and establish a powerful lateral-inhibition microcircuit in dentate gyrus, Nature Communications (2018). DOI: 10.1038/s41467-018-06899-3 S. J. Guzman et al., Synaptic mechanisms of pattern completion in the hippocampal CA3 network, Science (2016). DOI: 10.1126/science.aaf1836 Journal information: Nature Communications, Science
http://dx.doi.org/10.1038/s41467-018-06899-3
https://medicalxpress.com/news/2018-11-lateral-inhibition-similar-memories.html
Abstract Parvalbumin-positive (PV + ) GABAergic interneurons in hippocampal microcircuits are thought to play a key role in several higher network functions, such as feedforward and feedback inhibition, network oscillations, and pattern separation. Fast lateral inhibition mediated by GABAergic interneurons may implement a winner-takes-all mechanism in the hippocampal input layer. However, it is not clear whether the functional connectivity rules of granule cells (GCs) and interneurons in the dentate gyrus are consistent with such a mechanism. Using simultaneous patch-clamp recordings from up to seven GCs and up to four PV + interneurons in the dentate gyrus, we find that connectivity is structured in space, synapse-specific, and enriched in specific disynaptic motifs. In contrast to the neocortex, lateral inhibition in the dentate gyrus (in which a GC inhibits neighboring GCs via a PV + interneuron) is ~ 10-times more abundant than recurrent inhibition (in which a GC inhibits itself). Thus, unique connectivity rules may enable the dentate gyrus to perform specific higher-order computations. Introduction Throughout the brain, fast-spiking, parvalbumin-expressing (PV + ) GABAergic interneurons play a key role in several higher functions, such as feedforward and feedback inhibition, high-frequency network oscillations, and pattern separation 1 . Understanding how PV + interneurons contribute to these complex computations requires a detailed and quantitative analysis of their synaptic connectivity. While early studies suggested that connectivity of PV + interneurons is random 2 , more recent work highlighted several specific connectivity rules 3 , 4 , 5 , 6 , 7 (Supplementary Table 1 ). Analysis of principal neuron (PN)–interneuron (IN) connectivity in the neocortex revealed that reciprocally connected pairs occurred much more frequently than expected in a random network 3 , 4 , 5 , 6 , 7 . Moreover, synaptic strength appeared to be higher in these reciprocally connected motifs 4 , 6 . Whether these connectivity rules also apply in other microcircuits, such as the hippocampus, has not been determined yet. Pattern separation is a fundamental network computation in which PV + interneurons are likely to be involved. Pattern separation is thought to be particularly important in the dentate gyrus, where conversion of overlapping synaptic input patterns into non-overlapping action potential (AP) output patterns 8 , 9 , 10 , 11 , 12 may facilitate reliable storage of information in the downstream CA3 network 9 , 13 , 14 . Previous studies suggested a model of pattern separation based on a winner-takes-all mechanism mediated by feedback inhibition 15 , 16 , 17 , 18 , 19 . Such a model has received experimental support in the olfactory system 20 , 21 , 22 . While some studies suggested that similar mechanisms may operate in the dentate gyrus 23 , 24 , it is not clear whether the rules of PN–IN connectivity are adequate to support such a model. Specifically, two forms of feedback inhibition need to be distinguished: recurrent inhibition, in which an active PN inhibits itself via reciprocal PN–IN connections, and lateral inhibition, in which an active PN inhibits neighboring PNs but not itself 25 , 26 . A winner-takes-all mechanism likely requires lateral inhibition; recurrent inhibition may be counter-productive, because it could suppress potential winners 17 , 26 , 27 . 
However, in both neocortex and brain areas directly connected to the hippocampus, recurrent inhibition and lateral inhibition are equally abundant 3 , 4 , 5 , 6 , 7 (Supplementary Table 1 ). Such a circuit design would seem incompatible with efficient pattern separation. To resolve this apparent contradiction, we examined the functional connectivity rules in PN–IN networks in the dentate gyrus, using simultaneous recordings from up to seven granule cells (GCs) and up to four GABAergic interneurons. Our experiments reveal a uniquely high abundance of lateral inhibition mediated by PV + interneurons. Results Octuple recordings from neurons in the dentate gyrus To determine the functional connectivity rules between PNs and INs in the dentate gyrus, we performed simultaneous whole-cell recordings from up to eight neurons (up to seven GCs and up to four INs) in vitro (Fig. 1a, b ). PV + interneurons, somatostatin-positive (SST + ), and cholecystokinin-positive (CCK + ) interneurons were identified in genetically modified mice, obtained by crossing Cre or Flp recombinase-expressing lines with tdTomato or EGFP reporter lines. PV + interneurons showed the characteristic fast-spiking AP phenotype during sustained current injection, whereas both SST + and CCK + interneurons generated APs with lower frequency, corroborating the reliability of the genetic labeling (Supplementary Figure 1 ). Fig. 1 Octuple recording from GCs and PV + interneurons in the dentate gyrus. a Octuple recording from five GCs and two PV + interneurons (seven cells successfully recorded). Infrared differential interference contrast video micrograph of the dentate gyrus in a 300-µm slice preparation, with eight recording pipettes. Shaded areas represent the 2D projections of cell bodies (blue, GCs; red and yellow, PV + interneurons). Blue dashed lines, boundaries of GC layer. b Partial reconstruction of one GC and two PV + interneurons in the same recording as shown in ( a ). Cells were filled with biocytin during recording and visualized using 3,3′-diaminobenzidine as chromogen. For clarity, only the somatodendritic domains were drawn for the PV + interneurons. Insets, biocytin-labeled putative synaptic contacts, corresponding to boxes in main figure. c Connectivity matrix of an octuple recording (all eight cells successfully recorded). Subpanels on the diagonal (AP traces) represent the presynaptic cells, subpanels outside the diagonal (EPSC or IPSC traces) indicate postsynaptic cells. In this example, 56 connections were tested; 7 excitatory GC–PV + interneuron connections, 7 inhibitory PV + interneuron–GC connections, and 42 connections between GCs. Brief transients in a subset of traces represent capacitive coupling artifacts, as shown in previous publications 5 , 14 . d Expanded view of presynaptic APs and postsynaptic currents, corresponding to the boxed areas in ( c ). In this octuple recording, an inhibitory synaptic connection was identified between the PV + interneuron (red) and GC 5 (blue) and an excitatory synaptic connection was found between GC 1 (blue) and the PV + interneuron (red). The presence of a unidirectional excitatory GC–PV + interneuron connection and a unidirectional inhibitory PV + interneuron–GC connection documents the existence of lateral inhibition in this recording. e Coexistence of different synapses in an octuple recording. 
In this recording, an excitatory GC–PV + interneuron connection, an inhibitory PV + interneuron–GC connection, a chemical inhibitory connection between the PV + interneurons, and an electrical connection between the PV + interneurons were found (from left to right). Same recording as in ( a ) and ( b ). To probe synaptic connectivity, we stimulated presynaptic neurons under current-clamp conditions, and recorded excitatory postsynaptic currents (EPSCs) or inhibitory postsynaptic currents (IPSCs) in postsynaptic neurons in the voltage-clamp configuration (Fig. 1c–e , Fig. 2 ). In total, we tested 9098 possible connections in 50 octuples, 72 septuples, 68 sextuples, 48 quintuples, 17 quadruples, 10 triples, and 5 pairs in 270 slices. Interestingly, PV + interneurons showed a much higher connectivity than both SST + and CCK + interneurons. For GC–PV + interneuron pairs with intersomatic distance ≤ 100 µm, the mean connection probability was 11.0% for excitatory GC–PV + interneuron and 28.8% for inhibitory PV + interneuron–GC connectivity (Fig. 2g ). In contrast, for both SST + interneurons and CCK + interneurons, the mean connection probability was substantially lower (1.4 and 2.8% for SST + interneurons, 1.2 and 12.1% for CCK + interneurons; Fig. 2g ). Excitatory interactions between GCs were completely absent, and disynaptic inhibitory interactions between GCs 28 , 29 were extremely sparse (0.124%). These results indicate that in the dentate gyrus PV + interneurons show a markedly higher connectivity than SST + and CCK + interneurons, extending previous observations in the neocortex 30 . Fig. 2 Differential connectivity of PV + , CCK + , and SST + interneurons in the dentate gyrus. a Light micrograph of a SST + interneuron filled with biocytin during recording, and visualized using 3,3′-diaminobenzidine as chromogen. Cells were identified by genetic labeling in SST-Cre mice. Axon branches in the molecular layer (red arrows) suggest that the cell was a HIPP or TML interneuron 68 , 69 . GCL, granule cell layer. b Light micrograph of a CCK + interneuron filled with biocytin. Cells were identified by genetic labeling in CCK-Cre;DLX 5/6-Flp mice. Axon branches in the inner molecular layer (red arrows) suggest that the cell was a HICAP interneuron 68 , 69 , 70 . c , d Excitatory and inhibitory connectivity of SST + interneurons. GC–SST + interneuron unitary EPSCs are shown in ( c ), SST + interneuron–GC IPSCs are illustrated in ( d ). Individual synaptic responses (gray) and average trace (magenta or blue, 15 traces) are shown overlaid. Note the facilitation of EPSCs during train stimulation in ( c ). e , f Excitatory and inhibitory connectivity of CCK + interneurons. GC–CCK + interneuron EPSCs are shown in ( e ), CCK + interneuron–GC IPSCs are illustrated in ( f ). Note the asynchronous release during and after train stimulation in ( f ), which is highly characteristic of CCK + interneuron output synapses 70 . g Comparison of average connection probability for pairs with an intersomatic distance of ≤ 100 µm. Whereas PV + interneurons were highly connected, SST + and CCK + interneurons showed a markedly lower excitatory and inhibitory connectivity (number of tested connections 767, 71, and 165).
Error bars represent 95%-confidence intervals estimated from a binomial distribution. Connectivity rules for excitatory input of PV + interneurons As PV + interneurons showed the highest input and output connectivity, we focused our functional connectivity analysis on this interneuron subtype. We first examined the rules of excitatory synaptic connectivity between GCs and PV + interneurons by measuring EPSCs (Fig. 3a–c ). We found that PV + interneurons were highly and locally connected to GCs. The connection probability showed a peak of 11.3%, and steeply declined as a function of intersomatic distance, with a space constant of 144 µm (Fig. 3b ). In contrast, the EPSC peak amplitude showed no significant distance dependence (Fig. 3c ). To determine the efficacy of unitary GC–PV + interneuron connections, we measured unitary excitatory postsynaptic potentials (EPSPs). Unitary EPSPs had a mean peak amplitude of 1.79 ± 0.36 mV (range: 0.30–7.16 mV; Supplementary Figure 2a, b ) 28 , 31 , 32 . To assess the efficacy of these events in triggering spikes in the presence of ongoing synaptic activity from multiple sources, we performed in vivo whole-cell recordings from fast-spiking interneurons in the dentate gyrus in awake mice running on a linear treadmill (Supplementary Figure 2c–g ). Under in vivo conditions, the difference between baseline membrane potential and threshold was 10.3 ± 1.8 mV (three in vivo recordings from fast-spiking interneurons in dentate gyrus). Thus, although the largest unitary EPSPs were close to the threshold of AP initiation, they were insufficient to trigger a spike. However, the high focal GC–PV + interneuron connectivity (Fig. 3b ) may enable activation of PV + interneurons by spatial summation. Fig. 3 Rules of excitatory and inhibitory connectivity in GC–PV + interneuron networks. a Unitary EPSCs, with individual synaptic responses (gray) and average trace (red, 15 traces) in a representative GC–PV + interneuron pair. b GC–PV + interneuron connection probability plotted versus intersomatic distance. Connection probability was determined as the ratio of the number of found connections over that of all possible connections in a given distance range. Error bars represent 95%-confidence intervals estimated from a binomial distribution. Data points were fit with a sigmoidal function; shaded area indicates the distance range in which connection probability decayed to half-maximal value (space constant). Red dashed line, maximal connection probability. Maximal connection probability ( c max ) was 11.3%, and space constant ( d half ) was 144 µm. c Peak amplitude of unitary EPSCs at GC–PV + interneuron synapses, plotted against intersomatic distance. Data points were fit by linear regression; dashed lines indicate 95%-confidence intervals. d – f Similar plots as shown in ( a – c ), but for IPSCs generated at inhibitory PV + interneuron–GC synapses. Maximal connection probability was 28.9%, and space constant was 215 µm. g Bootstrap analysis of maximal connection probability and space constant. Histograms indicate distributions of c max (left) and d half (right) for 10,000 bootstrap replications of the inhibitory PV + interneuron–GC connections. Red arrows indicate experimental mean values for GC–PV + interneuron synapses. h Number of reciprocally coupled GC–PV + interneuron pairs (excitatory and inhibitory synapse; “recurrent inhibition motif”) and unidirectionally coupled PV + interneuron–GC pairs (inhibitory synapse only; “lateral inhibition motif”).
Note that the number of lateral inhibition motifs was almost 10-times higher than that of recurrent inhibition motifs, demonstrating the high abundance of lateral inhibition in the dentate gyrus microcircuit. Connectivity rules for inhibitory output of PV + interneurons Next, we examined the rules of inhibitory synaptic connectivity between GCs and PV + interneurons by measuring IPSCs (Fig. 3d–f ). Similar to excitatory GC–PV + interneuron connectivity, inhibitory PV + interneuron–GC connectivity was distance-dependent (Fig. 3e ). However, maximal connection probability was higher (28.9%) and the range of connectivity was wider (215 µm) than that of excitation. Bootstrap analysis revealed that both maximal connectivity and space constant were significantly shorter for excitatory GC–PV + interneuron synapses than for inhibitory PV + interneuron–GC synapses ( P < 0.0001 and P = 0.0042, respectively; Fig. 3g ). Thus, different connectivity rules apply for excitatory and inhibitory GC–PV + interneuron connections (focal excitation versus broad inhibition). To compare the connectivity rules in the dentate gyrus with those in other brain regions, we quantified the ratio of excitatory to inhibitory connection probability. We found that inhibition was much more abundant than excitation, with a connection probability ratio of 3.83, substantially higher than in other brain areas (Supplementary Table 1 ). Furthermore, we quantified the abundance of lateral and recurrent motifs in pairs of neurons. In our total sample of 1301 GC–PV + interneuron pairs, we found 296 unidirectional inhibitory connections, but only 32 bidirectional connections (Fig. 3h ). Thus, the ratio of lateral inhibition to recurrent inhibition was 9.25, substantially higher than in other circuits (Supplementary Table 1 ). These results indicate that connectivity rules of PV + interneurons in the dentate gyrus are unique in comparison to other previously examined circuits. Connectivity rules for mutual inhibition of PV + interneurons Finally, we analyzed the functional connectivity rules for synapses between interneurons (Fig. 4 ). Chemical inhibitory synapses between PV + interneurons showed a connectivity pattern that was more focal than that of inhibitory PV + interneuron–GC synapses (Fig. 4a, b ). Likewise, electrical synapses between PV + interneurons 33 , 34 , 35 showed a focal connectivity pattern (Fig. 4c, d ). Bootstrap analysis revealed that the maximal connectivity was significantly higher, while the space constant was significantly shorter for inhibitory PV + –PV + interneuron synapses than for PV + interneuron–GC synapses ( P = 0.0001 and P = 0.0036, respectively). Furthermore, recordings from GCs and multiple PV + interneurons provided direct evidence for the suggestion 33 that EPSPs propagate through gap junctions, although the peak amplitude is markedly attenuated (Supplementary Figure 3 ). Taken together, these results indicate that connectivity rules in PN–IN microcircuits are synapse-specific. Different connectivity rules apply to excitatory and inhibitory synapses between PNs and INs (GC–PV + versus PV + –GC), and to inhibitory synapses terminating on different postsynaptic target cells (PV + –GC versus PV + –PV + synapses).
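For illustration, the following Python sketch reproduces the logic of the distance-dependence fit and the bootstrap on synthetic pairs. The logistic functional form, the distance binning, and all parameter values are our assumptions for demonstration; the paper reports only that connection probabilities were fit with a sigmoidal function and bootstrapped over 10,000 replications.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def sigmoid(d, c_max, d_half, k):
    """Connection probability versus intersomatic distance d (um)."""
    return c_max / (1.0 + np.exp((d - d_half) / k))

# Synthetic sample of tested pairs: distances and 0/1 connection outcomes,
# drawn with PV+ interneuron-GC-like parameters (c_max ~ 0.289, d_half ~ 215 um).
dist = rng.uniform(10.0, 400.0, size=1301)
conn = rng.random(dist.size) < sigmoid(dist, 0.289, 215.0, 40.0)

def fit_c_max_d_half(d, c):
    """Bin outcomes by distance and fit the sigmoid to bin means."""
    idx = np.digitize(d, np.arange(0.0, 425.0, 25.0))
    centers = np.array([d[idx == i].mean() for i in np.unique(idx)])
    probs = np.array([c[idx == i].mean() for i in np.unique(idx)])
    popt, _ = curve_fit(sigmoid, centers, probs, p0=[0.3, 200.0, 50.0],
                        bounds=([0.0, 0.0, 1.0], [1.0, 400.0, 200.0]))
    return popt[0], popt[1]

# Bootstrap: resample pairs with replacement, refit, take percentile CIs.
boots = np.array([fit_c_max_d_half(dist[s], conn[s])
                  for s in (rng.integers(0, dist.size, dist.size)
                            for _ in range(1000))])
c_max_ci, d_half_ci = np.percentile(boots, [2.5, 97.5], axis=0).T
print(f"c_max 95% CI {c_max_ci}, d_half 95% CI {d_half_ci}")
```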
Right, unitary IPSCs, with individual synaptic responses (gray) and average trace (red, 15 traces) in the same pair. GCL, granule cell layer; IML, inner molecular layer. b PV+ interneuron–PV+ interneuron chemical connection probability (left) and IPSC peak amplitude (right) plotted versus intersomatic distance. Connection probability data points were fit with a sigmoidal function; IPSC amplitude data were analyzed by linear regression. Maximal connection probability was 58.1%, and the space constant was 141 μm. c Electrical coupling between two PV+ interneurons. Voltage changes in the pre- and postsynaptic cell caused by the injection of long polarizing current pulses (left, +200 pA; right, −200 pA; 200 ms) in one of the coupled cells. d PV+ interneuron–PV+ interneuron electrical connection probability (left) and coupling coefficient (right) plotted versus intersomatic distance. Maximal connection probability was 77.3%, and the space constant was 146 μm. The coupling coefficient (CC) was calculated as the mean ratio of steady-state voltages (V2/V1, V1/V2) during application of current pulses in one of the cells (cell 1 and cell 2, respectively). Disynaptic connectivity motifs Previous studies demonstrated that recurrent PN–PV+ interneuron connectivity motifs are enriched above the chance level expected for a random network in several cortical microcircuits3,4,5,6,7,36. To test whether this also holds in the dentate gyrus, we analyzed the abundance of all 25 possible disynaptic connectivity motifs in our sample (Fig. 5)37. To probe whether connectivity was random38 or nonrandom14,39,40,41,42, we compared motif numbers in our experimental data to a simulated data set assuming random connectivity with experimentally determined distance-dependent connection probabilities (Fig. 5a, b). Fig. 5 Overabundance of disynaptic connectivity motifs in GC–PV+ interneuron networks and different functional properties of synapses embedded in motifs. a Graph analysis of disynaptic connectivity motifs. In total, there are five possible disynaptic connectivity motifs with two cells and 20 disynaptic motifs involving three cells. Arrows with open triangles indicate excitatory synapses, arrows with filled circles represent inhibitory synapses, and arrows with zigzag lines indicate gap junctions. Numbers indicate motif indices. b Analysis of the number of motifs in 10,000 simulated data sets. Connection probability for the simulated data set was specified according to the experimentally determined spatial rules. Left, absolute motif number in the experimental (black) and simulated data set (red, median; gray, 90%-confidence interval). Center, bar plot of the relative abundance of the various motifs (number of motifs in the experimental data set over the mean number in the simulated data set). Error bars were taken from bootstrap analysis. Right, bar plot of the z score of the different motifs. Light red area indicates z scores in the interval [−1, 1]. Motifs 2, 3, 7, and 9 were significantly enriched above the chance level (P = 0.03145, 0.0085, 0.0272, and 0.0068 after multiple comparison correction). In contrast, motifs 6, 8, 10, 12, and 16 were slightly, but not significantly, underrepresented (P = 0.15 for motif 6). Note that motifs 5, 17, 19–21, and 23–25 were not encountered in the present data set, because of the lack of connectivity between GCs. c Comparison of EPSC peak amplitude (left) and IPSC peak amplitude (right) in bidirectionally versus unidirectionally coupled GC–PV+ interneuron pairs.
Peak amplitudes were not significantly different (P = 0.33 and 0.58, respectively). d Comparison of IPSC peak amplitude in PV+ interneuron–PV+ interneuron pairs connected by different chemical or electrical synapse motifs. IPSC peak amplitude was significantly larger in pairs with bidirectional inhibitory connections than with unidirectional connections (P = 0.016), and slightly higher in connections with than without gap junctions (P = 0.057). Asterisk indicates P < 0.05. Among the 25 possible disynaptic motifs, four types of motifs were significantly enriched above the chance level: (1) gap junction connections between PV+ interneurons, (2) mutual inhibition motifs (PV+ interneuron–PV+ interneuron connections) combined with gap junction connections43, (3) convergence motifs (connections of multiple GCs onto a single PV+ interneuron), and (4) divergence motifs (connections of one PV+ interneuron onto multiple GCs; Fig. 5b; P < 0.05 after correction for multiple comparisons). Surprisingly, reciprocal GC–PV+ interneuron motifs were not significantly enriched. Previous studies further demonstrated that the amplitude of unitary IPSCs is higher in bidirectionally than in unidirectionally connected PN–IN pairs4,6. In contrast, in the dentate gyrus neither the amplitude of EPSCs nor that of IPSCs was significantly different between bidirectionally and unidirectionally connected GC–PV+ interneuron pairs (Fig. 5c). However, the amplitude of IPSCs was significantly larger in PV+ interneuron–PV+ interneuron pairs coupled by reciprocal inhibitory synapses (Fig. 5d). Taken together, these results indicate that in the dentate gyrus, as in other cortical areas, synaptic connectivity of PV+ interneurons is nonrandom. However, both the types of enriched motifs and the rules setting synaptic strength differ from those in other circuits3,4. Discussion Our results demonstrate that the rules of functional connectivity in the PN–IN network of the dentate gyrus fundamentally differ from those in other cortical circuits. In the dentate gyrus, unidirectional inhibitory connections are ~10 times more frequent than reciprocal connections, demonstrating a massive prevalence of lateral inhibition in this circuit (Supplementary Table 1). In contrast, in neocortex, entorhinal cortex, and presubiculum, reciprocal connections are equally or more abundant than unidirectional connections, implying powerful recurrent inhibition3,4,5,6,7 (Supplementary Table 1). Furthermore, in the dentate gyrus mutual inhibition motifs, convergence motifs, and divergence motifs are statistically overrepresented. In contrast, in the neocortex, interneuron connectivity has been suggested to be largely random2. Collectively, these results suggest that the dentate gyrus network obeys unique connectivity rules. The specific connectivity rules of the dentate circuit raise the intriguing possibility that they represent an adaptation to specific network functions implemented in this brain region. A major function of the dentate gyrus is pattern separation8,9,10,11,12, thought to be generated by a “winner-takes-all” mechanism15,16,17,18,19. In an ideal pattern separation circuit, a small population of activated “winner cells” must be able to efficiently and rapidly inhibit a large population of “non-winner cells”. The dentate gyrus connectivity rules are well suited for these functions.
First, powerful lateral inhibition efficiently suppresses non-winners, whereas winners remain unaffected. Second, the combination of local connectivity and rapid axonal signaling mechanisms of PV+ interneurons1,44 implements a high-speed suppression mechanism, as required for efficient pattern separation. Previous modeling work suggested that scale-free network organization and the presence of hub neurons may enhance the robustness of network computations45,46. Our results may support this view, since the high abundance of convergence and divergence motifs is consistent with scale-free architectural properties. Furthermore, the connectivity rules of the PN–IN network may be important for the generation of network oscillations in the dentate gyrus47. In particular, the high chemical and electrical IN–IN connectivity establishes an efficient gamma oscillator circuit. The dense and focal electrical–chemical connectivity may explain the high power and frequency of gamma oscillations in the dentate gyrus47,48,49. Previous modeling work suggested that a small-world interneuron network architecture supports the emergence of coherent gamma oscillations50,51. Our results support this notion, since the high abundance of electrical–chemical IN–IN motifs would be consistent with small-world architectural properties52. The establishment of a robust gamma oscillation circuit may, conversely, be important for the pattern separation process. Proposed models of pattern separation imply that the separation of patterns takes place during the recovery period from a preceding gamma cycle17. Whether and how the pattern separation computation and the generation of gamma oscillations can coexist in the same circuit remains to be determined. Our results suggest the possibility that the uniquely high abundance of lateral inhibition in the dentate gyrus may contribute to pattern separation (Supplementary Table 1). What, then, is the function of recurrent inhibition in other brain areas, such as the neocortical circuits? In the neocortex, PN activity is high, which requires a mechanism to establish excitation–inhibition balance; reciprocal PN–IN connectivity seems well suited for this purpose7,20. In contrast, in the dentate gyrus PN activity is low, and such a balancing function may not be required53,54,55,56,57. Additionally, reciprocal PN–IN connectivity could contribute to the generation of slower network oscillations in these brain regions, for example in the lower gamma or beta frequency range, which are characteristic of the neocortex. Our results are consistent with the idea that local connectivity rules can shape diverse network computations across multiple circuits. In the dentate gyrus, the unique PN–IN connectivity rules may determine the properties of pattern separation, grid-to-place code conversion, or the processing of context information17,58. In the neocortex, PN–IN connectivity may determine network stability and excitation–inhibition balance7,20. In the hippocampal CA3 network, functional PN–PN connectivity rules shape pattern completion14, whereas in the neocortex functional PN–PN connectivity may shape response properties such as orientation selectivity41. Thus, the present results contribute to the emerging view that local connectivity rules are major determinants of higher computations in neuronal networks. Future work will be needed to test this hypothesis in both network models and behavioral experiments.
Methods Hippocampal slice preparation Experiments on genetically modified mice were performed in strict accordance with institutional, national, and European guidelines for animal experimentation and were approved by the Bundesministerium für Wissenschaft, Forschung und Wirtschaft of Austria (A. Haslinger, Vienna; BMWFW-66.018/0007-WF/II/3b/2014; BMWF-66.018/0010-WF/V/3b/2015; BMWFW-66.018/0020-WF/V/3b/2016). To genetically label PV+ interneurons, C57BL/6J PV-Cre knockin mice crossed with Ai14 loxP-flanked red fluorescent protein tdTomato reporter mice were used. To identify SST+ interneurons, somatostatin-ires-Cre mice (C-SSTtm1Npa, kindly provided by H. van der Putten; Novartis Pharma; MTD37295, Basel, Switzerland) were crossed with Ai14 tdTomato reporter mice. Finally, to label CCK+ interneurons, CCK-ires-Cre;DLX 5/6-Flp mice were crossed with dual reporter mice expressing either EGFP or tdTomato (RCE = R26R CAG-boosted EGFP mice; Ai65)59. Mice (20- to 44-day-old; mostly postnatal day 20–25) of either sex were lightly anesthetized with isoflurane (Forane, AbbVie, Vienna). Animals up to postnatal day 30 were sacrificed by decapitation. For animals older than 30 days, transcardial perfusion was performed with ice-cold sucrose-artificial cerebrospinal fluid (sucrose-ACSF) solution. These animals were deeply anesthetized with isoflurane, followed by the intraperitoneal injection of a mixture of xylazine (0.5 ml, 2%), ketamine (1 ml, 10%), acepromazine (0.3 ml, 1.4%), and physiological NaCl solution (1.5 ml, 0.9%). Anesthetics were applied at a dose of 0.033 ml/10 g body weight. The depth of anesthesia was verified by the absence of toe pinch reflexes. For preparing slices, the brain was rapidly removed and immersed in ice-cold sucrose-ACSF solution during dissection. A block of tissue containing the hippocampus was transferred to a vibratome (VT 1200, Leica) and transverse slices of 300-µm thickness were cut with a blade oscillation amplitude of 1.25 mm and a blade forward movement velocity of 0.03 mm s−1 60. Finally, slices were incubated at ~35 °C in standard artificial cerebrospinal fluid (ACSF) for 30 minutes and subsequently maintained at ~22 °C for up to 5 h before transfer into the recording chamber. Solutions and chemicals The ACSF used for in vitro recordings contained 125 mM NaCl, 25 mM NaHCO3, 25 mM glucose, 2.5 mM KCl, 1.25 mM NaH2PO4, 2 mM CaCl2, and 1 mM MgCl2. The sucrose-ACSF used for dissection contained 64 mM NaCl, 25 mM NaHCO3, 10 mM glucose, 120 mM sucrose, 2.5 mM KCl, 1.25 mM NaH2PO4, 0.5 mM CaCl2, and 7 mM MgCl2. The osmolarity of the solutions was 290–315 mOsm and the pH was maintained at ~7.3 when equilibrated with a 95% O2/5% CO2 gas mixture. The intracellular solution for in vitro recordings contained 120 mM K-gluconate, 40 mM KCl, 2 mM MgCl2, 2 mM Na2ATP, 10 mM HEPES, 0.1 mM EGTA, and 0.3% biocytin, pH adjusted to 7.28 with KOH. Chemicals were purchased from Merck or Sigma-Aldrich. Multi-cell recordings Glass micropipettes were fabricated from thick-walled borosilicate tubing (2 mm outer diameter, 1 mm inner diameter) and had open-tip resistances of 3–8 MΩ.
They were manually positioned with eight LN mini 25 micromanipulators (Luigs and Neumann) under visual control14 provided by a modified Olympus BX51 microscope equipped with a 60× water-immersion objective (LUMPlan FI/IR, NA = 0.90, Olympus, 2.05 mm working distance), infrared differential interference contrast video microscopy, and epifluorescence. To preserve connectivity, cell bodies ~30–120 μm below the surface of the slice were targeted for recording. Interneurons were identified on the basis of tdTomato or EGFP fluorescence in epifluorescence illumination and the AP phenotype upon 1-s current pulses (>50 Hz in a series of pulses of 100–1,200 pA for PV+ interneurons). Mature GCs were identified on the basis of morphological appearance in the infrared image and on the basis of passive and active membrane properties. Cells with input resistance > 500 MΩ, potentially representing newborn GCs61, were not included in the analysis. Cells with resting potentials more positive than −55 mV were immediately discarded. In total, the number of successfully recorded cells per recording varied between two and eight. Recording temperature was ~22 °C (range: 20–24 °C, room temperature). Electrical signals were acquired with four two-channel Multiclamp 700B amplifiers (Molecular Devices), low-pass filtered at 6–10 kHz, and digitized at 20 kHz with a Cambridge Electronic Design 1401 mkII AD/DA converter using custom-made stimulation-acquisition scripts running under Signal 6.0 software (CED). For current-clamp recordings, pipette capacitance was ~80% compensated and series resistance was balanced by the bridge circuit of the amplifier; settings were readjusted throughout the experiment when necessary. For voltage-clamp recordings, series resistance was not compensated, but repeatedly monitored using 2-mV hyperpolarizing pulses. To test for synaptic connections, a presynaptic neuron was stimulated with a train of five or ten current pulses (2 ms, 1–2 nA) at frequencies of 20 or 50 Hz, while all other neurons were voltage-clamped at −70 mV (Fig. 1c). A connection was defined as monosynaptic if synaptic currents had latencies ≤ 4.0 ms and peak amplitudes larger than 2.5 times the standard deviation of the baseline of the average trace (computed from 15–30 individual traces). Events with latencies > 4.0 ms were considered polysynaptic. For distal SST+–GC synapses, connectivity may be underestimated because of substantial attenuation of synaptic signals by cable filtering. Data analysis Recordings were analyzed using Stimfit and Python-based scripts62. Synaptic latency was measured from the peak of the presynaptic AP to the onset of the postsynaptic potential or current. Kinetic analysis of EPSCs or IPSCs was performed in pairs with a series resistance of < 15 MΩ. Distance was measured from soma center to soma center. Analysis of the axonal arbor of PV+ interneurons and GCs revealed that the axonal length was 2.21 ± 0.20 and 1.59 ± 0.07 times longer than the corresponding intersomatic distance (Supplementary Figure 4). Connection probability was calculated as the number of connected pairs over the total number of tested pairs in each 50-µm distance interval. 95%-confidence intervals were obtained according to binomial distributions. Distance dependence of connectivity was fit with a sigmoidal function f(x) = A [1 + exp((x − B)/C)]^−1, where x is the absolute distance, and A, B, and C are fitted parameters.
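The binned connection-probability analysis and sigmoidal fit just described can be condensed into a short script. The Python sketch below is illustrative only: the pair data are synthetic, scipy's least-squares curve_fit stands in for whatever fitting routine the authors used, and the helper names are hypothetical. It computes Clopper–Pearson binomial 95%-confidence intervals per 50-µm bin, fits f(x) = A [1 + exp((x − B)/C)]^−1, and extracts the maximal connection probability c_max = f(0) and the space constant d_half (both defined in the following paragraph) in closed form.

```python
# Minimal sketch of the distance-dependent connectivity analysis (assumptions:
# synthetic data, scipy curve_fit instead of the authors' fitting routine).
import numpy as np
from scipy.optimize import curve_fit
from statsmodels.stats.proportion import proportion_confint

def sigmoid(x, A, B, C):
    # f(x) = A [1 + exp((x - B)/C)]^-1, as in the Methods
    return A / (1.0 + np.exp((x - B) / C))

def connectivity_profile(distances_um, connected, bin_width=50.0):
    """Connection probability per distance bin with binomial 95% CIs."""
    edges = np.arange(0.0, distances_um.max() + bin_width, bin_width)
    centers, prob, lo, hi = [], [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (distances_um >= a) & (distances_um < b)
        n = int(mask.sum())
        if n == 0:
            continue
        k = int(connected[mask].sum())
        ci_lo, ci_hi = proportion_confint(k, n, alpha=0.05, method="beta")
        centers.append((a + b) / 2.0)
        prob.append(k / n)
        lo.append(ci_lo)
        hi.append(ci_hi)
    return (np.array(centers), np.array(prob), np.array(lo), np.array(hi))

# hypothetical sample of 1301 tested pairs
rng = np.random.default_rng(1)
d = rng.uniform(10.0, 400.0, 1301)
conn = rng.random(1301) < sigmoid(d, 0.12, 150.0, 40.0)

x, p, lo, hi = connectivity_profile(d, conn)
(A, B, C), _ = curve_fit(sigmoid, x, p, p0=[0.1, 150.0, 50.0])
c_max = sigmoid(0.0, A, B, C)                        # maximal connection probability f(0)
d_half = B + C * np.log(1.0 + 2.0 * np.exp(-B / C))  # solves f(d)/f(0) = 0.5
print(f"c_max = {c_max:.3f}, d_half = {d_half:.0f} um")
```

Because f is monotonically decreasing, d_half has the closed form B + C ln(1 + 2 e^(−B/C)), which avoids a numerical root search.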
Throughout the text, the maximal connection probability (c max) was determined as f(0), and the space constant (d half) was determined as the value x′ satisfying f(x′)/f(0) = 0.5. To test whether connectivity differed between synapses, 10,000 bootstrap replications of the inhibitory PV+ interneuron–GC data set were obtained, and the mean values of the GC–PV+ interneuron and PV+ interneuron–PV+ interneuron experimental data sets were compared against the simulated distribution63. Values are given as mean ± standard error of the mean (S.E.M.). Box plots show the lower quartile (Q1), median (horizontal line), and upper quartile (Q3). The interquartile range (IQR = Q3 − Q1) is represented as the height of the box. Whiskers extend to the most extreme data point that is no more than 1.5 × IQR from the edge of the box (Tukey style). Statistical comparisons were done either with a two-sided non-parametric Mann–Whitney U test or by linear regression, testing whether the slope was significantly different from 0. To test whether disynaptic motifs64 occurred significantly more frequently than expected by chance, we simulated the entire set of recording configurations including PV+ interneurons (41 octuples, 62 septuples, 54 sextuples, 37 quintuples, 14 quadruples, 7 triples, and 3 pairs in 218 slices) 10,000 times, assuming random connectivity14,38,64. The connection probabilities were set to the experimentally determined distance-dependent values. For each simulated data set, we counted the number of all 25 possible disynaptic motifs (Fig. 5a). From the 10,000 bootstrap replications, the mean, median, and confidence intervals for these counts were determined. P values were calculated as the number of replications in which the motif number was equal to or larger than the empirical number, divided by the number of replications. If a motif was never encountered in the 10,000 replications, P was taken as < 0.0001. For assessing statistical significance, correction for multiple testing was performed using the Benjamini–Hochberg method, which controls the false discovery rate65. P values for m comparisons were sorted in increasing order (P1 ≤ P2 ≤ … ≤ Pm), the largest i satisfying the condition Pi ≤ (i/m) × 0.05 was identified (testing in decreasing order, starting with Pm), and the motifs corresponding to Pj values with 1 ≤ j ≤ i were considered significant. For illustration purposes, P values were converted into z scores, using the quantiles of a standard normal distribution. Morphological analysis Neurons that were filled with biocytin (0.3%) for >1 h were processed for morphological analysis. After withdrawal of the pipettes, resulting in the formation of outside-out patches at the pipette tips, slices were fixed for 12–24 h at 4 °C in a 0.1 M phosphate buffer (PB) solution containing 2.5% paraformaldehyde, 1.25% glutaraldehyde, and 15% (v/v) saturated picric acid solution. After fixation, slices were treated with hydrogen peroxide (1%, 10 min) to block endogenous peroxidases, and rinsed in PB several times. Membranes were permeabilized with 1% Triton X-100 in PB for 1 h. Slices were then transferred to a PB solution containing 1% avidin-biotinylated horseradish peroxidase complex (ABC, Vector Laboratories) and 1% Triton X-100 for ~12 h. Excess ABC was removed by several rinses in PB, and the slices were developed with 0.05% 3,3′-diaminobenzidine tetrahydrochloride (DAB) and subsequently hydrogen peroxide. Finally, slices were embedded in Mowiol (Sigma-Aldrich).
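The multiple-comparison step above is compact enough to spell out. The sketch below implements the Benjamini–Hochberg step-up rule and the p-to-z conversion; the p-value array mixes values quoted in the Results with hypothetical fillers, and the one-sided mapping z = Φ⁻¹(1 − p) is an assumption about the exact conversion used.

```python
# Sketch of the Benjamini-Hochberg FDR correction and z-score conversion
# described above (hypothetical inputs; not the authors' code).
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses deemed significant at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                        # P1 <= P2 <= ... <= Pm
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    sig = np.zeros(m, dtype=bool)
    if below.any():
        i_max = np.nonzero(below)[0].max()       # largest i with Pi <= (i/m) q
        sig[order[: i_max + 1]] = True           # all smaller p-values also pass
    return sig

# motif p-values: the four quoted as significant, plus hypothetical fillers
p_motifs = np.array([0.03145, 0.0085, 0.0272, 0.0068, 0.15, 0.62])
print(benjamini_hochberg(p_motifs))              # the four quoted motifs come out True
z = norm.ppf(1.0 - p_motifs)                     # quantiles of a standard normal
print(np.round(z, 2))
```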
In vivo recordings from dentate gyrus PV+ interneurons Whole-cell patch-clamp recordings in vivo were performed in male 35- to 63-day-old mice as described previously53. Animals were in the head-fixed, fully awake configuration, running on a linear belt treadmill66,67. The head-bar implantation and craniotomy were performed under anesthesia induced by intraperitoneal injection of 80 mg/kg ketamine (Intervet) and 8 mg/kg xylazine (Graeub), followed by local anesthesia with lidocaine. A custom-made steel head-bar was attached to the skull using superglue and dental cement. The day before recording, two small (~0.5 mm diameter) craniotomies, one for the patch electrode and one for a local field potential (LFP) electrode, were drilled at the following coordinates: 2.0 mm caudal, 1.2 mm lateral for the whole-cell recording; 2.5 mm caudal, 1.2 mm lateral for the LFP recording. The dura was left intact, and craniotomies were covered with silicone elastomer (Kwik-Cast, World Precision Instruments). Pipettes were fabricated from borosilicate glass capillaries (1.75 mm outer diameter, 1.25 mm inner diameter). Long-taper whole-cell patch electrodes (9–12 MΩ) were filled with a solution containing: 130 mM K-gluconate, 2 mM KCl, 2 mM MgCl2, 2 mM Na2ATP, 0.3 mM NaGTP, 10 mM HEPES, 18 mM sucrose, 10 or 0.1 mM EGTA, and 0.3% biocytin, pH adjusted to 7.28 with KOH. Whole-cell patch electrodes were advanced through the cortex with 500–600 mbar of pressure to prevent the electrode tip from clogging. After passing the hippocampal CA1 subfield, the pressure was reduced to 20 mbar. After the blind whole-cell recording was obtained, series resistance was calculated by applying test pulses (+50 mV and −10 mV) under voltage-clamp conditions. Recordings were immediately discarded if series resistance exceeded 100 MΩ. After the bridge balance was compensated, step currents from −100 pA to 400 pA were injected to calculate the input resistance and maximal firing frequency of the recorded cells. All recordings were performed in current-clamp configuration without holding current injection, using a HEKA EPC double amplifier. Signals were low-pass filtered at 10 kHz (Bessel) and sampled at 25 kHz with HEKA Patchmaster acquisition software. After recording, the patch pipettes were slowly withdrawn to form an outside-out patch, verifying the integrity of the seal. Data included were obtained from three fast-spiking cells in the dentate gyrus, which generated APs during sustained current injection at a frequency of >100 Hz. To determine the relative AP threshold, spontaneous action potentials (sAPs) were detected, using either a single sAP or the first AP in a burst. The membrane potential preceding the sAP was measured in a 10–20 ms time window before the sAP. The absolute sAP threshold was determined from a dV/dt–V phase plot; the rising phase was fit with an exponential function including a shift factor, and the intersection of the fit curve with the baseline was defined as the threshold. Data availability Original data, analysis programs, and computer code will be provided by the corresponding author (P.J.) upon request.
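The phase-plot threshold procedure at the end of the Methods can also be sketched in a few lines. The exponential form g(V) = a·exp(V/b) + c, the baseline level (taken here as dV/dt = 0), and the synthetic rising-phase samples are all illustrative assumptions, not the authors' code.

```python
# Sketch: absolute AP threshold from a dV/dt-V phase plot, as described above.
# The rising phase is fit with an exponential plus a shift factor, and the
# threshold is the intersection of the fit with the baseline (dV/dt = 0).
import numpy as np
from scipy.optimize import curve_fit

def g(V, a, b, c):
    return a * np.exp(V / b) + c   # exponential with shift factor c

# hypothetical rising-phase samples (V in mV, dV/dt in mV/ms), generated from
# the model itself plus noise so that the true threshold is -45 mV
rng = np.random.default_rng(0)
v_rise = np.linspace(-44.5, -35.0, 80)
dvdt_rise = g(v_rise, 8103.0, 5.0, -1.0) + rng.normal(0.0, 0.05, v_rise.size)

(a, b, c), _ = curve_fit(g, v_rise, dvdt_rise, p0=[8000.0, 5.0, -1.0], maxfev=20000)
threshold = b * np.log(-c / a)     # solves g(V) = 0
print(f"estimated threshold ~ {threshold:.1f} mV")   # ~ -45 mV
```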
Digital reconstruction of the two parvalbumin-expressing interneurons (red and yellow) and one granule cell (blue) and visualization of the synaptic connections (black & white photographs). Credit: Espinoza et al When you park in the office car park, you usually have no problem finding your car again at the end of the day. The next day, you might park a few spots further away. However, in the evening, you still find your car, even though the memories of both days are very similar. You can find your car because our brains are able to store memories of very similar events as distinct memories, in a process called pattern separation. Researchers at the Institute of Science and Technology Austria (IST Austria) are deciphering how the brain computes this pattern separation in a brain region called the dentate gyrus. Results of their work are published today in Nature Communications. Peter Jonas and his team, including first author and Ph.D. student Claudia Espinoza, Jose Guzman and Xiaomin Zhang, sought to understand how the connections between neurons in the dentate gyrus, a part of the hippocampus, enable the separation of patterns in mice. In the dentate gyrus, two types of neurons send signals: principal neurons send excitatory signals, while interneurons send inhibitory signals. The researchers sought to decipher the rules of connectivity between them: which neurons send signals to each other, whether connections between neurons are reciprocal, and whether many neurons converge to send signals to one main neuron. They recorded signaling between neurons to understand how the neurons are connected and how the local circuit works to support pattern separation. Espinoza performed octuple whole-cell recordings, in which she stimulated one neuron in a slice of the dentate gyrus and recorded how the other seven neurons responded. By labeling all stimulated neurons, she could then reconstruct the morphology of the circuit. The researchers found that the parvalbumin-expressing interneurons are connected in a specific way only in the dentate gyrus. In the dentate gyrus, parvalbumin-expressing interneurons mainly inhibit the activity of nearby neurons in a process called lateral inhibition. In other brain regions, such as the neocortex, parvalbumin-expressing interneurons are not connected in this manner. "We think that the unique connectivity rules established by parvalbumin-expressing interneurons, such as lateral inhibition, represent a circuit adaptation to specific network functions that occur in this brain region," says Claudia Espinoza. "Our experimental data supports the idea that pattern separation works through a mechanism called 'winner-takes-all,' achieved via lateral inhibition in the dentate gyrus. However, this has not been proven yet. We need behavioral data and computational models, which we are working on." After the dentate gyrus separates similar memories to avoid an overlap between them, the CA3 region of the hippocampus then stores these memories. In a previous article published in Science in 2016, Peter Jonas and Jose Guzman showed that the connectivity in the CA3 region of the hippocampus is designed to recall information of stored memories in a process called pattern completion. "At a biological level, our group found the connectivity rules that support the computational function of a brain region," says Espinoza. "Our work contributes to showing how local circuits are optimized for the specific function of a brain area.
While the input that reaches the dentate gyrus is important, the way in which the dentate gyrus then computes this information to achieve pattern separation is crucial." Claudia Espinoza is a Ph.D. student in the group of Peter Jonas. Before she joined IST Austria for her Ph.D. studies in 2013, she worked with patients with neurological disorders. This experience motivated Espinoza to pursue a Ph.D. in neuroscience: "I realized that my work as a therapist was very limited because the treatment that we could offer to our patients was very scarce, and actually most of the available treatments are palliative and not curative. The main reason is that the information available about how the nervous system works is very limited, more than what most people believe. This fact motivated me the most for changing my career from a therapist to a researcher. I think that creating knowledge is a beautiful way of contributing something to our society and indirectly to helping people."
10.1038/s41467-018-06899-3
Earth
Development may reduce heatwave impact
Simone Russo et al. Half a degree and rapid socioeconomic development matter for heatwave risk, Nature Communications (2019). DOI: 10.1038/s41467-018-08070-4 Simone Russo et al. When will unusual heat waves become normal in a warming Africa?, Environmental Research Letters (2016). DOI: 10.1088/1748-9326/11/5/054016 Journal information: Nature Communications, Environmental Research Letters
http://dx.doi.org/10.1038/s41467-018-08070-4
https://phys.org/news/2019-02-heatwave-impact.html
Abstract While every society can be exposed to heatwaves, some people suffer far less harm and recover more quickly than others from their occurrence. Here we project indicators of global heatwave risk associated with global warming of 1.5 and 2 °C, the levels specified by the Paris agreement, for two future pathways of societal development representing low and high vulnerability conditions. Results suggest that at the 1.5 °C warming level, heatwave exposure in 2075 estimated for the population living in low development countries is expected to be greater than exposure at the warming level of 2 °C for the population living in very high development countries. A similar result holds for an illustrative heatwave risk index. However, the projected difference in heatwave exposure and the illustrative risk index for the low and very high development countries will be significantly reduced if global warming is stabilized below 1.5 °C, and in the presence of rapid social development. Introduction The large socioeconomic costs of heatwaves make them a crucial target for impact assessments of climate change scenarios. Recent studies have focused on changes in the frequency, intensity, and duration of extreme events that affect their risk to human society1,2,3,4,5,6, in some cases differentiating the occurrence of those hazards in low income versus high income countries7,8. According to the Intergovernmental Panel on Climate Change9,10, climate change risks are determined not only by climate extremes (the hazards) but also by the exposure and vulnerability of society to these hazards10,11. Here, we analyze and discuss changes in heatwave hazard, population exposure, and a vulnerability proxy. Subsequently, we derive an illustrative heatwave risk index (IRI) as the product of the probability of its occurrence (hazard) and normalized levels of exposure and a proxy for vulnerability12 (see Eq. (1)). We calculate the IRI at two different levels of warming (1.5 °C, 2 °C) and for two alternative scenarios of societal development based on the Shared Socioeconomic Pathways (SSPs)13, designed to explore a range of exposures, potential vulnerabilities, and potential capabilities to adapt to climate change. In particular, SSP1 corresponds to a society with low population growth and rapid social and economic development (low vulnerability), whereas SSP4 represents a future society with high population growth in currently high fertility countries and a high degree of inequality (high vulnerability)13. As the metric for heatwave hazard we use the decadal probability of experiencing an extreme heatwave. A heatwave is defined using the Heat Wave Magnitude Index daily14 (HWMId), which combines both the duration and the temperature anomalies of a heatwave into a single number. Extreme heatwaves are those that occur on average every five hundred years under present climate conditions (hereafter HW500Y; results for 100-year return heatwaves are shown in the Supplementary Information). The hazard is estimated through extreme value analysis using a block maxima approach1,15,16, based on multi-model ensemble simulations (four models, each providing 1,000 years of ensemble simulation or more) provided by the Half A degree additional warming, Prognosis and Projected Impacts (HAPPI) project for the present climate and at warming levels of 1.5 and 2 °C (see Methods).
Following recent studies8,17,18, we combine the projected heatwave hazard with projections of spatially explicit population density consistent with the SSPs13,19 to calculate exposure. Calculations of risk usually combine exposure to a particular hazard with dose-response relationships relating exposure to an outcome of interest, such as mortality or morbidity due to heatwaves. These relationships reflect the level of vulnerability of the exposed population. Lacking such dose-response relationships for heatwaves that are applicable globally, we instead adopt the Human Development Index20 (HDI) as an indicator of broadly defined vulnerability. The HDI is a composite indicator introduced by the UNDP in 1990 to assess the socioeconomic development of countries. Other studies have used Gross Domestic Product (GDP) to account for vulnerability to climate change8,21,22,23; HDI is a more comprehensive measure than GDP as it takes income, health, and education into account. Low and very high-human-development countries are defined using the fixed cutoff points based on quartiles of HDI values introduced by the 2014 Human Development Report (HDI < 0.55 and HDI > 0.8, respectively; see HDR_technical_notes.pdf and Methods). HDI has been shown to outperform several more recent indices as a generic national-level index of social vulnerability to climate change24. HDI also shows a high, significant correlation with historical measures of country vulnerability to climate change, such as the Notre Dame-Global Adaptation Initiative Country Index (ND-GAIN)25 (see 'HDI versus other vulnerability indices' section). However, it is important to emphasize that HDI can neither serve as a specific (or causal) vulnerability measure to heatwaves or any other climate hazard, nor does it indicate adaptive capacities to specific heatwaves per se. We use recent projections of HDI for all countries through 2075, consistent with the demographic, economic, and education assumptions in the SSPs26, in order to calculate the IRI in a manner that illustrates how vulnerability can affect risk, not to estimate actual heatwave risk outcomes. Here, we derive IRI to illustrate relative composite spatial patterns of hazard, exposure, and vulnerability at the global scale rather than definitive or quantitative risk estimates. We calculate normalized and non-normalized versions of IRI. In the normalized IRI, present and projected HDI and population density variables are transformed to the same range of variability before aggregation by normalizing in (0, 1) using the Johnson Cumulative Distribution Function27 (CDF) fitted to the present period (see Methods). Normalized IRI thus represents the probability of occurrence of an extreme heatwave (HW500y) scaled by normalized population density and level of social development. An IRI of zero indicates low or negligible risk relative to the other locations, for instance due to very low population density and thus low exposure, or very high HDI and thus low vulnerability. In the normalized version, an IRI of 1.0 represents the highest possible level of risk. In the present period, due to the construction of normalized IRI, its values lie between zero and the present-day hazard probability (i.e., a HW500y has a 2% chance of occurring in a given present-day decade). In the future, risk can, in principle, either increase or decrease as its components (hazard, population, and HDI) increase or decrease, and these changes will be reflected in the IRI.
For example, if hazard probability and HDI remain constant, but population density decreases, IRI would decrease. In contrast, a very large increase of IRI in a future period might reflect an increase in the hazard probability, or, rather theoretically, an increase in population by several orders of magnitude. In summary, IRI explores the relative effects of hazard probability, exposure, and vulnerability. IRI values are not based on or calibrated to a dose-response relationship, and hence the normalized IRI does not preserve physical units. Accordingly, due to the lack of a physical relationship, the (normalized) IRI approach implicitly assumes that relative changes in the hazard probabilities, exposure, and vulnerability of the respective normalized distributions are equally important. Consequently, IRI values cannot be interpreted in terms of physical or quantitative risk estimates. Results Heatwave hazard Figure 1b–i depicts the spatial distribution of the HW500Y hazard, expressed in terms of probability of occurrence and the corresponding return period, at the 1.5 and 2 °C warming levels. At 2 °C warming, HW500Y events are projected to occur at least once every 100 years over most of the land surface, with frequencies radically increasing across Africa, the Middle East, and parts of Southeast Asia and Latin America to at least once per decade (Fig. 1f–i). Substantial changes in heatwave frequencies in these regions are related to lower year-to-year temperature variability, and thus a higher warming-to-noise ratio leading to larger relative changes28. Similar changes in frequency are shown for heatwaves occurring every one hundred years in the present period (see Supplementary Fig. 1). Under the 1.5 °C scenario, the frequency of HW500Y events is substantially reduced (relative to 2 °C warming), with maximum frequencies reduced to once every several decades. Fig. 1 Probability of occurrence of extreme heatwaves. a Extreme value analysis GEV fit for decadal-maximum HWMId at a location in the Central African Republic (18.75°E, 4.69°N) as a function of return period (bottom x-axis) and hazard (upper x-axis, expressed as decreasing probability) under the present climate (black curves) and the 1.5 °C (blue curves) and 2 °C (red curves) warming levels. The colored dashed curves give the 95%-confidence interval, based on likelihood estimates (see Methods), and the open colored circles are the simulated decadal-maximum HWMId values. The open squares represent the 500-year return level heatwave (HW500y) in the present climate (black) and at the 1.5 and 2 °C levels of warming (blue and red, respectively). b–i Spatial distribution of the probability and return level of a heatwave with a five-hundred-year return period under 1.5 °C (b–e) and 2.0 °C (f–i) warming for multiple models. Heatwave exposure Population exposure to the heatwave hazard is affected not only by these changes in frequency but also by projected population changes. By the end of the century, the global population is expected to reach approximately 120 and 140% of the present population in SSPs 1 and 4, respectively. In addition, in both SSPs more population growth occurs in countries currently at lower levels of development, and, as we have already noted, increases in the heatwave hazard are larger in those countries as well. As a consequence, exposure (the product of the hazard and the population exposed to it) increases most in countries at lower levels of development.
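To make the exposure calculation concrete, the sketch below pairs each grid cell's decadal hazard probability with its projected population and sums population within hazard bins, in the spirit of Fig. 2. All arrays are hypothetical stand-ins for the gridded HAPPI hazard fields and SSP-consistent population projections.

```python
# Sketch: population exposure to the HW500Y hazard (hypothetical grid data).
import numpy as np

rng = np.random.default_rng(42)
n_cells = 10_000
hazard_p = rng.beta(0.5, 3.0, n_cells)        # decadal probability of a HW500Y event
pop_2075 = rng.lognormal(10.0, 1.5, n_cells)  # projected persons per grid cell
pop_present_total = pop_2075.sum() / 1.3      # pretend 2075 population is 130% of present

bins = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 1.0])
idx = np.digitize(hazard_p, bins) - 1         # hazard-probability bin per cell
exposed = np.array([pop_2075[idx == i].sum() for i in range(len(bins) - 1)])
for (a, b), e in zip(zip(bins[:-1], bins[1:]), exposed):
    print(f"hazard {a:.2f}-{b:.2f}: exposed population = "
          f"{100.0 * e / pop_present_total:.1f}% of present population")
```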
In fact, we find that at the 1.5 °C level the population in the low-human-development countries (defined as HDI < 0.55) will be exposed to equal or greater levels of heatwave hazard than the population in very high-human-development countries (defined as HDI > 0.8) under the 2 °C scenario (see Fig. 2). A list of low and very high-human-development countries, a grouping introduced by the UN Development Program and assigned here according to the 2015 HDI values, is reported in Supplementary Table 1. Exposure is higher not only because of the difference in hazard, but also because the population exposed at the end of the century is larger in low-human-development countries, equivalent to 25 and 39% of the present global population in SSPs 1 and 4, respectively, compared to 20 and 18% in the very high-development countries. Fig. 2 Population exposure to heatwave hazard. a Bar plots show the ensemble model median, with the associated range represented by black lines, of the global population in 2075 exposed to different probabilities of HW500Y events occurring in a given decade at 1.5 °C (gray bars) and 2 °C (red bars) warming and under the SSP1 pathway. Population in 2075 is expressed as a percentage of the current global population. The bar plots are calculated for all grid points of the global domain with population density greater than 0. b, c As for a, but for very high and low-human-development countries with HDI > 0.8 and HDI < 0.55, respectively. d–f As for a–c, respectively, but for the SSP4 scenario. Heatwave risk The IRI goes beyond exposure to illustrate how accounting for vulnerability could potentially change the outlook for future risk. HDI increases over time in all countries, but at different rates, and therefore vulnerability generally decreases, ameliorating changes in future risk at different rates across countries and scenarios. Projections of the spatial distribution of the non-normalized IRI based on one representative climate model (Fig. 3a–d), when compared to projections of the hazard alone using the same model (Fig. 1, panel for the ECHAM model), show that the consideration of population density and an index of vulnerability substantially changes the outlook for potential risk. The IRI in North America, most of Latin America, Australia, and much of Europe is substantially muted, relative to the rest of the world, to a degree that is not evident in the projection of the heatwave hazard. In contrast, the IRI in South and East Asia is on par with the relatively high values in Sub-Saharan Africa, despite the relatively lower heatwave hazard in those areas. Fig. 3 Spatial distribution of the illustrative risk index. a–d Non-normalized IRI for the ECHAM6 model: the values at each grid point are calculated as the product of the probability of occurrence of HW500Y, the value of population density, and one minus the Human Development Index (see Methods). e–h Normalized IRI for the ECHAM6 model: the values at each grid point are calculated as the product of the probability of occurrence of HW500Y and the normalized distributions of exposure (population density) and vulnerability (one minus the Human Development Index; see Methods). Given that population density can range much more widely than the value of HDI, the scale of the non-normalized IRI is influenced mainly by variability in population density. The normalized IRI transforms the three variables into standard uniform units (see Methods).
It produces a spatial pattern of the IRI similar to the non-normalized version (Fig. 3e–h), but with a smaller index value in South and East Asia relative to other locations, due to the more limited effect of population density on IRI after normalization (normalizing only HDI, and not population, does not produce this effect; see Supplementary Fig. 2). Because the probability of HW500y is likely to increase substantially in 1.5 or 2 °C worlds (see example in Fig. 1), while projected changes in exposure or vulnerability are not as large in relative terms, changes in the normalized IRI will be to a large extent driven by changes in the hazard component. Other analyzed HAPPI models show similar patterns in the spatial distribution of normalized IRI (Supplementary Figs. 3 and 4). The value of IRI is highest in the SSP4 scenario with 2 °C warming (Fig. 4). Under these circumstances, a population equivalent to 77% of the current global population will experience an illustrative heat risk value greater than 20% (Fig. 4d). In low-development countries, a population equivalent to 27% of the current global population will experience IRI values greater than 50% (see Fig. 4f). Values of IRI are lowest in the SSP1 scenario with 1.5 °C warming. In that case, IRI nowhere reaches values above 50%, and in low-development countries a population equivalent to only 5% of the present global population experiences IRI values greater than 20% (Fig. 4c). Fig. 4 Population as a function of IRI. a Bar plots show the ensemble model median, with the associated range represented by black lines, of the population in 2075 (measured as a percent of the current global population) that experiences different IRI levels at 1.5 °C (gray bars) and 2 °C (red bars) warming, and under the SSP1 pathway. The bar plots are calculated for all grid points of the global domain with population density greater than 0. b, c As for a, but for populations in countries with HDI > 0.8 and HDI < 0.55, corresponding to very high and low-human-development countries, respectively. d–f As for a–c, respectively, but for the SSP4 pathway. It follows then that the greatest reductions in IRI are achieved by both limiting warming to 1.5 °C and fostering rapid social development (SSP1), particularly across sub-Saharan Africa (Figs. 3a, e and 5f), where most of the present low-human-development countries are located (Supplementary Fig. 5d). Differences between the normalized IRI values across other scenario combinations show that the risk index increases in all inhabited regions if global warming reaches 2 °C rather than being limited to 1.5 °C, and if the degree of exposure and the vulnerability proxy (HDI) of future society follow SSP4 instead of SSP1 (Fig. 5; see Supplementary Figs. 5–7 for other models). The effects of differences in climate and development also interact. For example, the impact of the additional half a degree of warming on the illustrative risk index is substantially amplified under SSP4 compared to SSP1 (see Fig. 5a, d). In addition, the different effects of climate and societal factors on IRI imply that, in this illustrative calculation, the consequences of 2 °C warming in SSP1 are similar to those of 1.5 °C warming in a more vulnerable society (SSP4) (see Fig. 5c). The comparison of Fig. 4b, c, e, f also suggests a prominent contrast between the impact of global warming on the very high and low-human-development countries.
For example, the IRI levels in very high-human-development countries remain low (values less than 20% for almost the entire population) even with 2 °C warming in SSP4 (Fig. 4e). In contrast, in low-human-development countries under the same inequality scenario (SSP4), the IRI level is almost always above 10% even at a warming level of 1.5 °C (Fig. 4f). More generally, the illustrative heatwave risk index for the population living in low-human-development countries at the 1.5 °C warming level is typically larger than the values for the very high-human-development countries, even with 2 °C warming. Amplified patterns in heat extremes, i.e., the hazard component, for countries with low human development had been pointed out earlier28, and thus our results appear consistent with previous literature. The analysis repeated for heatwaves defined with one-hundred-year return levels, i.e., HW100Y, shows similar results (see Supplementary Fig. 8), as does an analysis without using normalized HDI values (see Supplementary Fig. 9). Fig. 5 Differences in normalized IRI. Normalized IRI differences are calculated at each grid point by means of IRI values for the ECHAM6 model. a, d Differences of IRI between 2 and 1.5 °C under the low vulnerability (SSP1) and high vulnerability (SSP4) scenarios, respectively. b, e Differences of IRI between SSP4 and SSP1 for the 1.5 and 2 °C warming levels, respectively. c Differences of IRI between 2 °C under the SSP1 scenario and 1.5 °C under the SSP4 scenario. f Differences of IRI between the most pessimistic (2 °C level of warming under the SSP4 scenario) and most optimistic (1.5 °C level of warming under the SSP1 scenario) scenarios. Discussion In this study, we have quantified heatwave hazard, exposure, and a vulnerability proxy associated with global warming stabilized at the 1.5 and 2 °C levels compared to preindustrial climate conditions. In addition, we have presented and discussed the aggregation of the three dimensions as an illustrative risk index (IRI). The results were also differentiated between two socioeconomic pathways, which represent either rapid social and economic development (SSP1) or high inequality (SSP4) by the end of the century, and which strongly contrast in exposure and vulnerability. The analysis highlights a stark contrast in the aggregated risk metric between low and very high-human-development countries, quantified for different combinations of warming levels and socioeconomic pathways. Even under the 1.5 °C warming level, the low-human-development countries (representing future populations equal to 25 or 39% of the present global population in SSP1 and SSP4, respectively) experience exposure levels equal to or greater than the levels for the very high-human-development countries with 2 °C warming. We also find that, in agreement with a recent study8, holding the temperature below 1.5 °C warming yields a large potential to reduce levels of heatwave exposure. Results for the IRI suggest that the same could be true for heatwave-related risks to society, especially for low-human-development countries. In addition, we show that IRI values can be reduced not only by limiting the global temperature increase to 1.5 °C, but also through rapid socioeconomic development. The role of the latter might be crucial, considering that some studies estimate the likelihood of reaching the Paris agreement targets, i.e., stabilizing warming at 1.5 or 2 °C, to be low (approximately 5% and 10%, respectively)29.
This work represents an initial attempt to quantify differences in heatwave hazard, exposure, and illustrative risk between different warming levels and socioeconomic pathways that is global in scope. An important caveat to the study is that our illustrative risk index does not use a dose-response relationship relating exposure to a specific heatwave-related impact. Rather, it uses a proxy for vulnerability, the HDI, which is a general measure of vulnerability to a wide variety of climate impacts, is not resolved below the level of individual countries, and is not tailored specifically to risks from heatwaves, nor calibrated to specific outcomes such as mortality30, morbidity, or reduced labor productivity. Relative changes in any of its components would contribute equally to changes in IRI. Hence, the IRI cannot be interpreted in terms of a physical risk estimate (such as the probability of a specific harmful consequence). Relative changes in IRI values across countries, or across scenarios, would be different if a different proxy for vulnerability were used, or a different approach taken to the calculation of the index. Furthermore, a very small hazard probability in the present period leads to large relative changes in the hazard component, which likely dominate changes in IRI. The projections presented here of the heatwave hazard and exposure to it can be interpreted more directly as the distribution of population by the likelihood of experiencing the hazard, but have the shortcoming of not accounting for differential levels of vulnerability across populations. The IRI illustrates how the incorporation of vulnerability-related information could change outcomes. Future work could also improve on this analysis by accounting for the large heterogeneity in vulnerability within countries. Nonetheless, the incorporation of projections of population exposure and a proxy of human vulnerability to climate-related hazards, such as heatwaves, provides relevant information for global-scale impact and risk assessments, and points toward ways in which analyses of a wide range of climate risks could be strengthened. Methods Data Heatwave magnitude is estimated by means of the Heat Wave Magnitude Index daily14,31 (HWMId). Results are tested by comparing the HWMId values with the annual maximum of the 5-day average of daily maximum temperature (TX5x). The HWMId and TX5x are calculated for the present climate and at 1.5 and 2 °C warming from daily maximum temperature from the HAPPI (Half A degree additional warming, Prognosis and Projected Impacts) project, based on the atmospheric components of the CMIP5 models forced by prescribed sea surface temperatures (SSTs) and sea ice concentrations32,33. A recent study34 has shown that, particularly over the tropics and Australia, estimates of the changes in the odds of annual temperature extremes can be more than a factor of 5 to 10 larger using prescribed SSTs than when using a fully coupled model configuration. This is because the variability of the distribution of annual maximum temperatures simulated using prescribed SSTs is underestimated with respect to that simulated by a fully coupled model configuration. While this issue can be alleviated to a certain degree by using metrics that are standardized relative to their variability (interquartile range), such as HWMId, findings should still be interpreted as conditional on the period in which sea surface temperatures were prescribed34.
Hence, extreme events obtained from these simulations can be seen as conditional on a certain decade, but it is important not to interpret 500-year return periods at the multi-decadal scale, precisely because of long-term variability34. However, heatwave magnitudes corresponding to this long return period should be representative of extreme heatwaves such as the one in Central Europe in 2003 and in Russia in 201014,35, if a sea surface state corresponding to such an event occurred in the respective decade used to run the ensemble simulations. As prescribed by the HAPPI protocol, the 1.5 and 2 °C simulations use the same aerosol forcing. It is important to emphasize that aerosol emissions for the stabilization scenarios are reduced relative to the present day1,36. This could produce some differences between heatwave return levels in the stabilized scenarios and the present day. However, this does not affect our results, which focus on the differences between the two stabilized scenarios. We use four out of the five available simulations that have at least one-thousand-year runs: Model-1 is the Canadian Fourth Generation Atmospheric Global Climate Model (CanAM4), contributed by the Canadian Centre for Climate Modeling and Analysis37. Model-2 is the NCAR-DOE Community Atmosphere Model version 4 (CAM4) coupled to the Community Land Model version 4 (CLM4), with simulations contributed by ETH Zurich38,39. Model-3 is ECHAM6.340, contributed by the Max Planck Institute of Meteorology, Hamburg, Germany (global 1.875° grid). Model-4 is a high-resolution (global 0.5° grid) model, contributed by the National Institute for Environmental Studies, Tsukuba, Japan, and denoted as MIROC541,42. Statistical distribution of heatwaves According to many studies4,6,14,43, a heatwave is defined as at least three consecutive days with daily temperature above the local 90th percentile threshold. Since heatwaves are extreme events, on average they are not expected to occur every year. Supplementary Fig. 10 shows the number of years without any heatwave in a decade of the present climate. We model the statistical distribution of heatwaves by applying a block maxima approach (see Coles 2002, Wehner 2018, Sterl 2008) with a block length of 10 years. At each grid point, our data set is composed of the maximum heatwave magnitude in each 10-year block. By fitting the maximum HWMId values in 10-year blocks with L-moments-based estimators, we show that the decadal maxima of both HWMId and TX5x follow a Generalized Extreme Value distribution with a shape parameter greater than or equal to zero16 (Fréchet or Gumbel distribution, respectively; see Supplementary Fig. 11e–h). Using an Anderson–Darling statistical test, suitable for extreme events because more weight is put on the tails (than in comparable tests such as Kolmogorov–Smirnov, for instance), with the null hypothesis that decadal HWMId maxima are GEV-distributed, we demonstrate that the null hypothesis cannot be rejected in any location (P-value > 0.1 everywhere; see Supplementary Fig. 11a–d). This result holds for all HAPPI models. Using the fitted GEV models, we estimate 500-year heatwave return levels (HW500Y) in the present climate and show the spatial distribution of the probability of occurrence of these values at the 1.5 and 2 °C warming levels (Fig. 1). The same analysis is applied to the annual maximum of the 5-day average of daily maximum temperature index (TX5x; see Supplementary Fig. 12) for validation.
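The block-maxima analysis translates directly into code. In the sketch below, scipy's maximum-likelihood GEV fit stands in for the L-moment estimators used in the paper, and the decadal-maximum HWMId series are synthetic; the 2% per-decade exceedance probability of a 500-year event follows from 10/500.

```python
# Sketch: GEV block-maxima analysis of decadal-maximum HWMId at one grid point
# (synthetic data; scipy MLE fit instead of the paper's L-moment estimators).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# present-climate decadal maxima (scipy's c = -shape; c < 0 is Frechet-type)
present = genextreme.rvs(c=-0.1, loc=5.0, scale=2.0, size=100, random_state=rng)
shape, loc, scale = genextreme.fit(present)

# HW500Y: exceeded once per 500 years on average = once per 50 decadal blocks,
# i.e. a per-decade exceedance probability of 10/500 = 2%
hw500y = genextreme.ppf(1.0 - 10.0 / 500.0, shape, loc, scale)

# decadal maxima in a warmer climate (location shifted upward, hypothetical)
warm = genextreme.rvs(c=-0.1, loc=9.0, scale=2.2, size=100, random_state=rng)
s2, l2, sc2 = genextreme.fit(warm)
p_decade = 1.0 - genextreme.cdf(hw500y, s2, l2, sc2)  # new decadal probability
print(f"HW500Y return level = {hw500y:.1f}; decadal probability under warming = "
      f"{100.0 * p_decade:.1f}% (return period ~ {10.0 / p_decade:.0f} years)")
```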
The spatial distributions of the HW500Y hazard calculated for the HWMId and TX5x indices compare very well, both in terms of pattern and probability values (Fig. 1 and Supplementary Fig. 12, respectively). Uncertainties associated with the occurrence of HW500Y are calculated as the 95% confidence level of the GEV model fitted to the data (see Fig. 1a, Supplementary Figs. 13 and 14). Population density To consider the impact of changes in heatwaves in populated regions of the world, we use a set of global, spatially explicit population projections that are consistent with the new Shared Socioeconomic Pathways (SSPs)19. The spatial population projections cover the period 2010–2100 in ten-year time steps. We have used population datasets for two different periods (2015 and 2075, i.e., the decades 2010–2019 and 2070–2079, respectively) and under two different SSPs (SSP1 and SSP4). All population projections are remapped onto the regular grid of each HAPPI model using a second-order conservative remapping approach. Supplementary Fig. 15a shows population density in persons per km2 and normalized values (see the section on normalization below), remapped onto the MIROC5 grid for all time periods and SSP pathways. All other models show the same maps. Human development index As a proxy for vulnerability as a component of an illustrative risk index (IRI), we use the Human Development Index (HDI), a composite indicator introduced by the UNDP in 1990 to assess the development of countries, inspired by the concept of capability development by Amartya Sen44. HDI is based on the geometric average of three dimensions, all within suitable bounds: health (life expectancy at birth), education (expected and mean years of schooling), and standard of living (mean gross national income per capita, expressed in Purchasing Power Parity). In this study we define vulnerability as 1 − HDI, so that countries with the lowest HDI levels are associated with the highest vulnerability, and vice versa. As was done for the population data, we remap the most recent Human Development Index data (for the year 2015, see HDR2016) and projected HDI values under the SSP1 and SSP4 pathways26 onto the grid of each of the four HAPPI simulations used here (see Supplementary Fig. 15b for the MIROC5 model). Because HDI data and projections are for country averages only, the approach taken here abstracts from the substantial heterogeneity in income, education, and health within countries, but captures the heterogeneity in HDI across countries. HDI versus other vulnerability indices To evaluate the robustness of the HDI in accounting for vulnerability to climate, we have estimated its correlation with the Notre Dame-Global Adaptation Initiative Country Index25 (ND-GAIN), a national index constructed from 45 indicators of vulnerability and readiness to respond to climate change in six sectors: food, water, health, ecosystem services, human habitat, and infrastructure. In the present period, the HDI is significantly correlated with the ND-GAIN (Pearson correlation equal to 0.95 with a p-value < 0.001; see Supplementary Fig. 16), and this alternative index would thus produce a similar ranking of countries in terms of their economic vulnerability to climate. As ND-GAIN data projections are not available, we rely on HDI.
Normalized population and human development index For deriving the normalized version of the IRI, and thus to illustrate composite spatial patterns of hazard, exposure, and vulnerability at the global scale, rather than definitive or quantitative risk estimates, the distributions of population density and HDI values are normalized by means of Johnson's transformation 27 (see Supplementary Fig. 17 ). Normalization is needed to guarantee the homogeneity of the variances 45 of the variables aggregated into the IRI. This illustrative approach implicitly assumes equal relative weights of exposure and vulnerability of the respective normalized distributions. Our normalization method consists of three steps. First, we remove ties from population and (1-HDI) values for the present period (2015). Ties are removed only for statistical purposes, in order to find the best statistical distribution fitting our data; they are not removed from risk maps. In fact, two locations with the same population density (or 1-HDI values) will have the same normalized score. Second, we fit the present population density and (1-HDI) values with the Johnson family curves 27 . Third, we use the cumulative distribution function fitted to present data to transform projected population and (1-HDI) values into a uniform probability interval [0, 1] (see Supplementary Fig. 17 ). Note that, since present HDI spatial data follow a bounded distribution, future HDI values that are out of the range of the present HDI values would not have a corresponding normalized value. To avoid this, the Johnson fit is done by imposing the maximum HDI range, which by definition is equal to (0, 1) 20 . The same limitation does not apply to population density data, since they follow a log-normal distribution with a domain in [0, +∞). The lowest entry in the population or 1-HDI data (population density or 1-HDI equal to zero) takes a normalized value equal to zero. In contrast, the highest entry (maximum population or 1-HDI values) takes a value equal to one or very close to one. Maps of population and HDI values are reported in Supplementary Fig. 15 . The goodness of fit of the Johnson family curves fitted to the population and (1-HDI) data has been tested by means of a Kolmogorov–Smirnov hypothesis test. In both cases we cannot reject the null hypothesis that the population density and (1-HDI) datasets follow a log-normal and a bounded distribution, respectively. Illustrative risk index at the global scale At each location, the normalized IRI (expressed in %) is calculated as the product of the probability of occurrence of HW500Y multiplied by the normalized population density and 1-HDI values: $${\mathrm{IRI}} = ({\mathrm{HW}}_{{\mathrm{hazard}}} \times {\mathrm{Population}}_{{\mathrm{exposure}}} \times (1 - {\mathrm{HDI}})_{{\mathrm{vulnerability}}})\times100$$ (1) with all components of the product above normalized in [0, 1]. Code availability Codes and additional information can be obtained by directly contacting the authors. Data availability All data used in the analysis are available in public repositories or upon request. Daily maximum temperature data are available at the following repository: . Population density data are available at: . HDI data are reported in Supplementary Table 1 and are available upon request or from the repository reported in Crespo and Lutz 26 . The script used for calculating the HWMId is publicly available as a function of the R package extRemes (see: ).
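The normalization and equation (1) can be illustrated with a short sketch. It assumes scipy's Johnson SB distribution as the bounded Johnson curve and a log-normal for population density, matching the distributions named above; all data here are synthetic placeholders, since the authors' code is available only on request.

```python
# Illustrative sketch of the normalization and equation (1); synthetic data.
# Assumptions: scipy's johnsonsb as the bounded Johnson curve and lognorm for
# population density, matching the distributions named in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
population = stats.lognorm.rvs(s=1.2, scale=50, size=5000, random_state=rng)
vulnerability = stats.johnsonsb.rvs(a=0.5, b=1.5, size=5000,
                                    random_state=rng)   # stands in for (1-HDI)

# Fit the present-day distributions (ties would be removed before fitting).
pop_params = stats.lognorm.fit(population, floc=0)
# Impose the full (0, 1) support on the bounded fit, as described above.
vul_params = stats.johnsonsb.fit(vulnerability, floc=0, fscale=1)

def normalized_iri(hw_hazard, pop, one_minus_hdi):
    """Equation (1): hazard x normalized exposure x normalized vulnerability x 100."""
    pop_norm = stats.lognorm.cdf(pop, *pop_params)
    vul_norm = stats.johnsonsb.cdf(one_minus_hdi, *vul_params)
    return hw_hazard * pop_norm * vul_norm * 100.0

print(normalized_iri(hw_hazard=0.4, pop=120.0, one_minus_hdi=0.45))
```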
A new study published in Nature Communications suggests that global warming of 1.5 or 2°C will lead to more intense and frequent extreme heatwave events. Increased socio-economic development can, however, help reduce their impact on society in less-developed countries. The study, which was published in January, illustrates how people living in both developed and developing countries may be at risk of being affected by extreme heatwaves. This has been done by establishing and projecting an illustrative index of global heatwave risk associated with global warming of 1.5 and 2°C for two future pathways of societal development. These scenarios represent several low and high vulnerability conditions based on the UN's Human Development Index (HDI), i.e. level of education, per capita income and life expectancy. "Findings of the study indicate that with a 1.5°C mean temperature rise in 2075, the average number of people exposed to heatwave events may be larger in low-HDI countries compared to the number of people in high-HDI countries exposed to heatwave events at a 2°C rise," says Sebastian Sippel, researcher at NIBIO's department for Terrestrial Ecology and co-author of the study. "The results of our study show inequality and provide a picture of where the impacts of climate change may be felt the most," he adds. Sippel points out that the study is illustrative and does not fully account for the different levels of vulnerability across populations, seeing as calibrated dose-response relationships for heatwaves are unavailable for most countries. This being said, the study does provide relevant information for global-scale impacts and risk assessments linked to heatwaves. In addition, it points toward ways in which analyses of a wide range of climate risks could be strengthened. More intense heatwaves Jana Sillmann, research director at CICERO Center for International Climate Research and another co-author of the study, says that the frequency and intensity of heatwaves will increase with rising global mean temperatures. The tropics and the sub-tropics, where most of the least developed and developing countries are located, will in the near term experience a more rapid intensification of heatwaves. "For instance, heatwaves that are very unusual in today's climate will by 2040 occur on a regular basis if we continue emitting greenhouse gases at the current rate," Sillmann says. If global warming is stabilised below 1.5°C, the risk of people in developing countries suffering serious harm during heatwaves may be significantly reduced. Furthermore, the risk may be lowered if these countries experience rapid socio-economic development. Better healthcare may lower heatwave impact "Socio-economic development strengthens healthcare and increases the number of people having higher education and a better income. This means that people will have better access to healthcare during a heatwave, and will be better equipped to deal with heatwave impacts, such as having access to air conditioning and a home that provides some shelter from the heat," Sillmann explains. "Being able to escape the heat reduces the risk of suffering from heat-induced health problems such as dehydration, heat exhaustion and heat stroke, which may all lead to premature death," she adds. An initial attempt for further studies Vulnerability is a complex issue which is difficult to quantify.
The projections of heatwave hazard and exposure presented in the study can be interpreted as the distribution of the world's population according to the likelihood of experiencing the hazard. The study does not account for differential levels of vulnerability across populations. Sebastian Sippel from NIBIO stresses that although the study is significant, it is important to look at it for what it is: illustrative, not necessarily reality. "Rather than paint the full picture, our work should be considered an initial attempt to quantify global differences in heatwave hazard, exposure and illustrative risk between different warming levels and socioeconomic pathways," he says. "Our hope is that the study triggers discussions about how to go about real risk quantification analyses when it comes to heatwaves and other hazards at a global scale," he adds.
10.1038/s41467-018-08070-4
Physics
Ringing an electronic wave: Elusive massive phason observed in a charge density wave
Fahad Mahmood, Observation of a massive phason in a charge-density-wave insulator, Nature Materials (2023). DOI: 10.1038/s41563-023-01504-5. www.nature.com/articles/s41563-023-01504-5 Journal information: Nature Materials
https://dx.doi.org/10.1038/s41563-023-01504-5
https://phys.org/news/2023-03-electronic-elusive-massive-phason-density.html
Abstract The lowest-lying fundamental excitation of an incommensurate charge-density-wave material is believed to be a massless phason—a collective modulation of the phase of the charge-density-wave order parameter. However, long-range Coulomb interactions should push the phason energy up to the plasma energy of the charge-density-wave condensate, resulting in a massive phason and fully gapped spectrum 1 . Using time-domain terahertz emission spectroscopy, we investigate this issue in (TaSe 4 ) 2 I, a quasi-one-dimensional charge-density-wave insulator. On transient photoexcitation at low temperatures, we find the material strikingly emits coherent, narrowband terahertz radiation. The frequency, polarization and temperature dependences of the emitted radiation imply the existence of a phason that acquires mass by coupling to long-range Coulomb interactions. Our observations underscore the role of long-range interactions in determining the nature of collective excitations in materials with modulated charge or spin order. Main The fundamental collective modes (amplitudon and phason) of a broken-symmetry ordered state (Fig. 1a ) have been key in establishing foundational theories across various fields of physics, including gauge theories in particle physics, as well as superconductors, antiferromagnets and charge-density-wave (CDW) materials in condensed-matter physics 2 , 3 . The phason is typically massless, in accordance with Goldstone’s theorem, which necessitates the emergence of a massless boson for broken symmetry in systems where the ground or vacuum state is continuously degenerate. A prominent exception occurs in superconducting systems. Here, even though the ground state is continuously degenerate, the long-range Coulomb interaction pushes the longitudinal phason up to the plasma frequency 4 , and therefore, a massless Goldstone boson does not exist and the low-lying excitation spectrum is fully gapped. This behaviour is the celebrated Anderson–Higgs mechanism that established the deep connection between symmetry breaking and gauge fields, and ultimately explained how all the fundamental particles acquire mass from interactions with the Higgs field. Fig. 1: Collective modes of an incommensurate CDW phase. a , Ginzburg–Landau free energy ( \({{{\mathcal{F}}}}\) ) and real-space representation of the amplitudon (dashed arrow) and phason (solid line arrow) for a CDW order parameter in the absence of long-range Coulomb interactions. b , c , Dispersion relations of the CDW collective modes. Below T CDW , the otherwise undistorted acoustic phonon (LA) softens near \({\overrightarrow{q}}_{{{{\rm{CDW}}}}}\) and renormalizes into the amplitudon and phason bands. At moderate T ( ≲ T CDW ), the long-range Coulomb repulsion U is screened by quasiparticles ( b ). When the system is cooled well below T CDW , the screening weakens and the spectral weight from the massless (acoustic) phason is transferred to the massive (infrared-active) phason of frequency ω LF ( c ). Full size image Unlike superconductivity, an incommensurate CDW is believed to have a massless phason, typically understood in terms of the softening of a longitudinal acoustic phonon branch around the CDW wavevector \({\overrightarrow{q}}_{{{{\rm{CDW}}}}}\) (Fig. 1b ). Below the CDW transition temperature ( T CDW ), this mode softening results in the linear-in-wavevector, zero-gap dispersion of the phason, implying that the CDW can freely slide for excitation wavevector \(\overrightarrow{q}=0\) . 
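The contrast between the massless and the Coulomb-gapped phason can be made concrete with a toy dispersion, sketched below. This is a schematic illustration only, not taken from the paper: the phase velocity is an arbitrary placeholder, and only the gap frequency (0.23 THz, measured later in the text) is anchored to data.

```python
# Schematic toy model (not from the paper) of the dispersions in Fig. 1b,c:
# massless phason omega = v*q versus Coulomb-gapped phason
# omega = sqrt(omega_LF^2 + (v*q)^2). v is an arbitrary illustrative value.
import numpy as np

v = 2.0          # assumed phason phase velocity, THz per unit wavevector
omega_lf = 0.23  # massive-phason gap in THz (the measured mode frequency)

q = np.linspace(0.0, 0.5, 6)
omega_massless = v * q
omega_massive = np.sqrt(omega_lf**2 + (v * q) ** 2)

for qi, w0, wg in zip(q, omega_massless, omega_massive):
    print(f"q = {qi:.2f}: massless {w0:.3f} THz, gapped {wg:.3f} THz")
```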
In any real material, however, random impurities and disorder restrict this sliding motion, leading to a small gap in phason dispersion (pinning frequency) (Fig. 1b ). Thus, the sliding CDW motion can only be observed if a strong enough electric field is applied to first depin the phason. The resulting sliding motion of the CDW can then be measured in d.c. transport experiments as done in various systems 5 . However, the above phenomenology of a massless phason (or disorder pinned phason at low frequency) assumes the absence of long-range Coulomb interactions ( U ). This assumption is believed to be valid because the presence of normal electrons at a non-zero temperature can screen U . However, if U is sufficiently strong or if the density of normal electrons is sufficiently low, then the CDW phason at \(\overrightarrow{q}=0\) should be pushed to higher energies (even above the amplitudon energy) (Fig. 1c ). This behaviour was highlighted in work on CDW dynamics 6 and then formalized in a later work 1 that noted the similarity of the emergence of a massive CDW phason with the Anderson–Higgs mechanism in a superconductor. Later works 7 , 8 predicted that the massive (optical) phason could indeed dominate over the massless (acoustic) phason at sufficiently low temperatures where charged quasiparticles cannot sufficiently screen U . Within these models, the amplitudon remains unaffected by the long-range Coulomb interaction 8 . We note that in superconductors, the plasma frequency is much larger than the single-particle gap, rendering the phase mode unobservable deep in the superconducting phase. In a CDW system, however, the relevant scale is the plasma frequency of the condensate, which can lie far below the single-particle gap. To date, direct experimental evidence of a massive phason in CDW systems has been elusive. Here we present the evidence for the generation and detection of a massive phason in the CDW insulator (TaSe 4 ) 2 I, a quasi-one-dimensional (quasi-1D) material that undergoes an incommensurate CDW transition below T CDW ≈ 260 K (refs. 9 , 10 , 11 ) with a CDW gap for single-particle excitations of 2 Δ CDW ≈ 250–300 meV (refs. 12 , 13 , 14 ). (TaSe 4 ) 2 I is unique among quasi-1D CDW systems due to its unusually high resistivity in the low-temperature insulating state ( Supplementary Information ), such that the long-range Coulomb interaction can remain unscreened and the massive phason can have substantial spectral weight. To investigate the collective modes of the CDW order in (TaSe 4 ) 2 I, we performed time-domain terahertz (THz) emission spectroscopy using an ultrafast photoexcitation ‘pump’ pulse with an energy of 1.2 eV ( λ = 1,030 nm) (Fig. 2a shows the experimental geometry and Supplementary Information provides further details of the experimental setup). Note that the photoexcitation energy here is greater than 2 Δ CDW , and therefore, the pump pulse initially creates single-particle excitations across the CDW gap. As (TaSe 4 ) 2 I is structurally chiral (space group 97) and lacks inversion symmetry, this photoexcitation generates a transient current with a duration of a few picoseconds (ps). Such a phenomenon is well known as a photogalvanic or a photo-Dember effect, both of which can occur in systems lacking inversion symmetry 15 . The transient current then results in a short ps-duration burst of THz radiation in the far field, which we measure in the time domain using standard electro-optical sampling ( Supplementary Information ). Fig. 
2: THz emission from (TaSe 4 ) 2 I. a , Geometry of the sample orientation, incident light (pump) and emitted THz polarizations. TaSe 4 chains ( c axis) are oriented along the z axis, and the pump beam is p polarized with incident angle θ i = 45°. The plane of incidence is represented as the grey-shaded region. b , THz emission signal ( E THz ) as a function of time ( t delay ), measured at 7 K. The Fourier-transform amplitude ( A FT ) is plotted in the inset. The 0.23 THz mode is marked with an asterisk (*). c , The p - and s -polarized components of the THz emission signal are \({E}_{{{{\rm{THz}}}}}^{p}\) (top) and \({E}_{{{{\rm{THz}}}}}^{s}\) (bottom), respectively. \({E}_{{{{\rm{THz}}}}}^{p}\) is shown with an offset. Source data Full size image Figure 2b shows the measured time profile of the THz electric field ( E THz ( t )) emitted from (TaSe 4 ) 2 I on photoexcitation at T = 7 K ≪ T CDW . Here the pump is p polarized, with an electric-field component along the quasi-1D chains of (TaSe 4 ) 2 I. Two features are immediately evident in E THz ( t ): a transient peak around t delay = 0 ps followed by a long-lived coherent oscillation that lasts over ~80 ps. In the frequency domain (Fig. 2b , inset), the transient peak around t delay ≈ 0 ps manifests as a broad distribution over frequencies from 0.1 to 2.0 THz, whereas the long-lived coherent oscillation manifests as a sharp peak centred at 0.23 THz. For the rest of this work, we refer to the transient ( t delay ≈ 0 ps) peak as E tr and the coherent oscillation as E osc . As noted above, E tr is what we typically expect from such an experiment due to the photogalvanic or photo-Dember effect. The transient current can be estimated from the measured E tr , and is greater than the current necessary to depin the dynamics of the CDW order in (TaSe 4 ) 2 I ( Supplementary Information ). We focus on the observed narrowband THz emission E osc since this aspect of the data is particularly striking. Here E osc lasts well over 80 ps—much longer than the typical few-ps-duration signal expected from semiconductors 15 and semimetals 16 , 17 , 18 in THz emission experiments. Another unusual feature of E osc is the observed waveform envelope in the time domain, which appears to gradually increase in magnitude starting at t delay = 0 ps. In impulsive excitation ultrafast experiments, the signal is usually peaked at t delay = 0 ps from where it exponentially decays. Additionally, although the measured transient peak E tr is both horizontally and vertically polarized, the coherent oscillation E osc is only horizontally polarized (Fig. 2c ). In our experimental geometry, this corresponds to E osc being polarized along the quasi-1D chains of (TaSe 4 ) 2 I (Fig. 2a ). To investigate the origin of the radiating mode, we study the E osc spectra at different temperatures (Fig. 3a,b ). The E osc signal is the strongest at 7 K and gradually decreases with increasing temperature, showing a sudden drop at ~80 K (at about 30% of T CDW ). This feature is also clear in the integrated spectral weight of the 0.23 THz mode as a function of temperature (Fig. 3c ). Note that the initial transient signal, E tr , is similar at both 70 and 100 K, but the oscillating mode E osc is considerably weaker at 100 K. To further understand the origin of the radiating mode, we studied the dependence of E THz on the incident pump fluence (Fig. 3d–f ). 
The spectral shape of E osc does not change with decreasing pump fluence; only the overall spectral weight decreases linearly with decreasing fluence (Fig. 3f ), indicating that we are in a perturbative regime. Fig. 3: Temperature ( T ) and pump fluence ( F ) dependence of the 0.23 THz mode. a , THz emission signal as a function of delay time at a few select temperatures. The signals at different temperatures are offset for clarity. b , Fourier transforms of the traces shown in a . c , Spectral weight (SW) of the 0.23 THz mode as a function of temperature, normalized to the SW value at 7 K. The dashed line indicates 0.3 T CDW ≈ 80 K. d , THz emission signal at 7 K as a function of delay time at a few select pump fluences. The signals at different fluences are shown with offsets for clarity. e , Fourier transforms for the traces shown in d . f , SW of the 0.23 THz mode as a function of pump fluence, normalized to the maximum value of SW. The normalized SW in c and f are obtained by integrating the Fourier transforms ( A FT ( ω )) of E osc ( t ) over a 0.05-THz-wide frequency window centred at 0.23 THz. The error bars in the normalized SW are determined by Fourier transforming E osc ( t ) over two different time windows (with widths of 10 and 50 ps). Source data Full size image Based on the observations above, we can rule out several possible sources for the radiating mode at 0.23 THz, such as the phason gap due to pinning disorder 5 , 12 , optical phonons 15 , 19 , longitudinal acoustic phonons 20 or the CDW amplitudon. Pinning is trivially ruled out since the measured phason gap due to disorder in (TaSe 4 ) 2 I is around 0.03 THz (ref. 12 ) (Supplementary Section 4.6 ), that is, nearly an order of magnitude smaller than the 0.23 THz oscillation measured in our work. The lowest-frequency optical phonons in (TaSe 4 ) 2 I are at much higher frequencies (1.1 THz (ref. 21 )), and the temporal waveform from optical or acoustic phonons would peak at t delay ≈ 0 ps followed by an exponential decay 20 , 22 , unlike the waveform shape observed here. Similarly, the spectral weight of a phonon or the amplitudon is not expected to decrease to zero near T ≈ 80 K. For example, zone-folded longitudinal acoustic phonons at q CDW with a frequency of around 0.2 THz have been measured in (TaSe 4 ) 2 I at 150 K (ref. 22 ) and at temperatures close to T CDW (ref. 20 ). Furthermore, the amplitudon in (TaSe 4 ) 2 I cannot have a net infrared-dipole moment (due to the D 2 (222) point-group symmetry of the incommensurate phase 23 ), which is necessary for the far-field coherent radiation observed here. Although an earlier report identified a 2.8 THz mode to be the amplitudon 24 , later works noted that this mode exists both above and below T CDW (refs. 20 , 21 , 25 ). Recently, the lowest-energy amplitudon frequency in (TaSe 4 ) 2 I was found to be 0.11 THz using time-resolved X-ray diffraction 22 . We also note that the relatively low pump fluences used in our experiments (~580 μJ cm −2 ) and the linear pump fluence dependence preclude explanations in terms of a complete melting and recovery of the CDW order, which would have resulted in a saturating-type behaviour with pump fluence 20 . Recent work 22 on (TaSe 4 ) 2 I reported a complete melting of the CDW only beyond a pump fluence of 8 mJ cm −2 , that is, at a much greater fluence than that used in this work. Having ruled out the possible sources discussed above, we posit that the massive phason in (TaSe 4 ) 2 I gives rise to the coherent radiation at 0.23 THz.
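To illustrate how the spectral weight of the 0.23 THz mode is quantified from the time traces (Fourier transforming E_osc(t) and integrating over a 0.05-THz-wide window, as stated in the Fig. 3 caption), here is a minimal sketch with a synthetic waveform; the signal parameters are assumptions, not fit values from the paper.

```python
# Minimal sketch with synthetic data: Fourier-transform E_osc(t) and integrate
# |A_FT| over a 0.05-THz window centred at 0.23 THz, as in the Fig. 3 caption.
# The waveform parameters below are assumptions for illustration.
import numpy as np

dt = 0.05                                   # sampling step in ps (assumed)
t = np.arange(0.0, 80.0, dt)                # ~80 ps window, as in the data
# Toy E_osc: 0.23 THz oscillation whose envelope rises from zero, then decays.
e_osc = np.sin(2 * np.pi * 0.23 * t) * (1 - np.exp(-t / 5.0)) * np.exp(-t / 40.0)

freqs = np.fft.rfftfreq(t.size, d=dt)       # in THz, since t is in ps
a_ft = np.abs(np.fft.rfft(e_osc))

window = (freqs >= 0.23 - 0.025) & (freqs <= 0.23 + 0.025)
df = freqs[1] - freqs[0]
spectral_weight = a_ft[window].sum() * df   # simple rectangle-rule integral
print(f"peak at {freqs[np.argmax(a_ft)]:.3f} THz, SW = {spectral_weight:.3f}")
```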
Within the Lee–Rice–Anderson model for CDW dynamics 1 , 6 , the massive phason frequency ω LF at \(\overrightarrow{q}=0\) is given as \({\omega }_{{{{\rm{LF}}}}}^{2}=\frac{3}{2}\lambda {\omega }_{{{{\rm{Q}}}}}^{2}\) , where ω Q is the frequency of the normal-state longitudinal acoustic phonon at \({\overrightarrow{q}}_{{{{\rm{CDW}}}}}\) (Fig. 1b ) and λ is the electron–phonon coupling constant. For (TaSe 4 ) 2 I, ω Q ≈ 0.25 THz, as measured using inelastic neutron scattering 25 , 26 , whereas λ ≈ 0.6 (ref. 27 ) for the strong electron–phonon coupling in (TaSe 4 ) 2 I. This gives an expected ω LF ≈ 0.24 THz, close to our observed mode frequency of 0.23 THz. Additionally, the temperature dependence of the 0.23 THz mode, which shows considerable spectral weight only for T ≲ 0.3 T CDW , agrees fairly well with theoretical predictions for the temperature dependence of a massive phason. At high temperatures (close to T CDW ), the phase mode is effectively non-interacting and acoustic. As the temperature decreases, there are fewer thermally excited quasiparticles, and the long-range Coulomb interaction penalizes the spatial modulation of charge density. This causes the acoustic-mode phase velocity to increase with decreasing temperature. Once the temperature becomes sufficiently low such that the renormalized phason velocity is faster than the quasiparticle velocity, screening is no longer effective and the Coulomb interaction gaps the phase mode via the Anderson–Higgs mechanism, yielding the massive phason (Supplementary Fig. 8 ). By using typical values for the ionic mass of quasi-1D CDW compounds and including dynamical screening effects, previous work 8 estimated the failure of screening to begin around T ≈ 0.3 T CDW . Taken together, our observations strongly suggest that the narrowband 0.23 THz radiation originates from the predicted massive phason in a CDW insulator. To determine the excitation mechanism for the massive phason, we further discuss the temporal waveform of E osc highlighted earlier. Note that E osc shown in Fig. 2b is roughly zero at t delay = 0 ps and then becomes notable only at later times. We fit the behaviour of E osc in time using a model of two coupled modes, one of which is excited at t delay = 0 ps, whereas only the other one radiates (Fig. 4 ) ( Supplementary Information provides details on the fitting procedure). This fitting model is consistent with a picture where the radiating mode is the \(\overrightarrow{q}=0\) massive phason with frequency ω LF , whereas the non-radiating mode is a zone-folded acoustic phonon mode with wavevector \(\overrightarrow{q}={\overrightarrow{q}}_{{{{\rm{CDW}}}}}\) for T > T CDW and frequency ω Q ≈ ω LF . The presence of such an acoustic mode was established elsewhere 20 , 22 , 25 . Given that the infrared pump excites a large number of single-particle excitations above the CDW gap, we hypothesize that the zone-folded acoustic phonon is coherently excited via either a displacive excitation of coherent phonons mechanism 28 , 29 or the pressure of a non-equilibrium distribution of quasiparticles 30 . Both these mechanisms result in the maximum phonon amplitude at t delay = 0 ps ( q ( t ); Fig. 4 ). Once excited, the acoustic phonon can scatter off the CDW modulation and excite the massive phason mode. The massive phason mode at \(\overrightarrow{q}=0\) will then radiate, explaining the time profile of \({\overrightarrow{E}}_{{{{\rm{osc}}}}}\) (Figs. 3 and 4 ). Fig.
4: Model of a radiating massive phason coupled to a non-radiating acoustic phonon. a , p ( t ) and q ( t ) represent the time profile of the massive phason and acoustic phonon, respectively, as obtained from a solution of a coupled harmonic oscillator model ( Supplementary Information ). b , This particular parameter set corresponds to p ( t ) obtained from a fit to the data at 7 K. Here N is a constant to rescale p ( t ). Full size image To conclude, we note that (TaSe 4 ) 2 I was recently argued to be an axion insulator (in the presence of an external magnetic field) based on magnetoconductance measurements of the sliding phason dynamics 14 , although such an interpretation is under active debate 31 . Since (TaSe 4 ) 2 I is a good insulator at low temperatures ( Supplementary Information ), the reported magnetoconductance measurements 14 were limited to temperatures above 80 K. Our results imply that at low temperatures, long-range Coulomb interactions are crucial for a full understanding of the CDW order in (TaSe 4 ) 2 I and related axion insulator candidates. Contributions to the longitudinal magnetoconductance due to axion electrodynamics may occur near the massive phason frequency, the coherent excitation of which may provide a promising way to manipulate axionic states. Furthermore, our methodology here can provide a direct dynamical probe of collective excitations and effects of long-range interactions in materials with modulated order parameters such as spin or pair density waves. Methods Single crystals of (TaSe 4 ) 2 I were prepared using chemical vapour transport. Stoichiometric amounts of Ta wire (99.9%), Se powder (99.999%) and I shot (99.99%) were loaded into a fused silica tube, which was sealed under vacuum and heated with a source temperature of 600 °C and sink temperature of 500 °C for 10 days. X-ray diffraction patterns were collected on a Bruker D8 ADVANCE diffractometer with Mo Kα radiation. Resistivity was measured in the four-point geometry in a Quantum Design physical property measurement system. The (110) surface of (TaSe 4 ) 2 I was oriented using backscattering Laue diffraction. THz emission spectroscopy was performed with our custom-built time-domain THz spectroscopy setup based on a Yb:KGW amplifier laser (PHAROS, LIGHT CONVERSION). The fundamental laser pulse wavelength is 1,030 nm with a pulse duration of ~160 fs. The fundamental beam is split into pump and probe paths using a 90:10 beamsplitter. The pump fluence ranged between 70 and 580 μJ cm –2 with a 1/ e 2 width of ~1 mm. The pump was incident on the sample at a 45° angle of incidence. The sample was kept inside a closed-cycle cryostat with He exchange gas (SHI-950, Janis Research). The THz field E THz ( t ) radiated by the sample is collected by an off-axis parabolic mirror and focused on a CdTe (110) crystal. The electro-optical sampling probe beam is made to spatially and temporally overlap with E THz ( t ) on the CdTe crystal for electro-optic sampling. The path length of the probe is adjusted with a delay stage to control the time delay ( t delay ). The quasi-1D chain of (TaSe 4 ) 2 I was oriented to lie in the plane of incidence for all the measurements in this work. Data availability Source data are provided with this paper. Additional data are available from the corresponding author upon reasonable request.
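The two-mode fit and the Lee–Rice–Anderson estimate described above can be illustrated with a short numerical sketch. The gap frequency follows from omega_LF^2 = (3/2)*lambda*omega_Q^2 with the quoted lambda ≈ 0.6 and omega_Q ≈ 0.25 THz; the coupled-oscillator parameters (damping, coupling strength) are assumptions for illustration only, as the authors' actual fit model is given in their Supplementary Information.

```python
# Hedged sketch of the two-coupled-mode picture: a non-radiating acoustic phonon
# q(t), maximally displaced at t = 0, drives the radiating massive phason p(t).
# Damping and coupling values are assumptions; the authors' fit model may differ.
import numpy as np
from scipy.integrate import solve_ivp

# Lee-Rice-Anderson estimate quoted in the text: omega_LF^2 = (3/2)*lambda*omega_Q^2
lam, omega_q = 0.6, 0.25                      # coupling constant; phonon freq (THz)
omega_lf = np.sqrt(1.5 * lam) * omega_q       # ~0.24 THz, near the observed 0.23 THz
print(f"omega_LF = {omega_lf:.3f} THz")

w_p, w_q = 2 * np.pi * omega_lf, 2 * np.pi * omega_q   # angular freqs in rad/ps
g_p, g_q, k = 0.02, 0.05, 0.05                # damping and coupling (assumed)

def rhs(t, y):
    p, pdot, q, qdot = y                      # phason and phonon coordinates
    return [pdot, -w_p**2 * p - 2 * g_p * pdot + k * q,
            qdot, -w_q**2 * q - 2 * g_q * qdot + k * p]

# Displacive-type initial condition: phonon displaced at t = 0, phason at rest,
# so the radiated field (~ p) starts near zero and builds up, as in E_osc.
sol = solve_ivp(rhs, (0.0, 80.0), [0.0, 0.0, 1.0, 0.0], max_step=0.1)
print(f"max |p(t)| over 80 ps: {np.abs(sol.y[0]).max():.4f}")
```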
Researchers at the University of Illinois Urbana-Champaign have detected a collective excitation of a charge density wave of electrons that acquires mass as it interacts with the background lattice ions of the material over long distances. This new research, led by assistant professor Fahad Mahmood (Physics, Materials Research Laboratory) and postdoc Soyeun Kim (currently a postdoc at the Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory), is a direct measurement of the Anderson-Higgs mechanism (of mass acquisition) and the first known demonstration of a massive phason in a charge density wave material, a prediction made more than 40 years ago. Their paper reporting these results, "Observation of a massive phason in a charge-density-wave insulator," was recently published in Nature Materials. The collective excitations of condensed phases of matter often play a foundational role in developing fundamental theories for a variety of materials, including superconductors, quantum magnets, and charge density waves. In a simple metal, electrons are distributed uniformly across space; the electron density at one point in space is equal to that of another point in space. In certain metals, however, the electron density develops a sinusoidal (wave) pattern (a charge density wave). Mahmood explains that, considering the charge density wave as frozen in space, if the wave is disturbed, it is going to "ring" (that is, its collective excitations are generated). It can ring by a change in the amplitude of the wave pattern, or the charge density wave can slide back and forth (phase shifting). The latter collective excitation is dubbed the phason and is similar to sound waves in a material—it has negligible mass. More than 40 years ago, researchers predicted that if the phason interacts strongly with the background lattice of ions over long distances (long-range Coulomb interactions), then it will try to drag the heavy ions as it moves. As a result, the phason will require a lot more energy to get it to move—the phason is said to "acquire mass". This mass acquisition of a phason is believed to occur through the same mechanism by which all fundamental massive particles in the universe acquire mass (a phenomenon known as the Anderson-Higgs mechanism). Direct observation of this mass acquisition has remained elusive, primarily because long-range Coulomb interactions do not exist in most charge density wave materials. The material used in this research, tantalum selenium iodide ((TaSe4)2I), is a very good insulator at low temperatures and is one of the best-known insulators among charge density wave materials. Because of that, long-range Coulomb interactions are likely to be present in the system, and those interactions can give mass to the otherwise massless excitation. In theory, if the material is heated, it would become less insulating, the Coulomb interactions would weaken, and the massive phason should become massless. Mahmood, Kim, and their collaborators were able to investigate the charge density wave phason in (TaSe4)2I by developing a nonlinear optical technique known as time-domain terahertz (THz) emission spectroscopy at low temperatures (less than 10 K, -442° F). Using this technique, an ultrafast infrared laser pulse, lasting less than 150 fs (1 fs is a millionth of a billionth of a second), was shone on the (TaSe4)2I sample, generating the collective excitations of the system. What they detected was the massive phason radiating in the THz region of frequencies, with a very narrow bandwidth.
When they heated the material, the massive phason became massless (stopped radiating), matching up with long-standing theoretical predictions. While (TaSe4)2I is conducive to hosting a massive phason, it can be a very challenging material to work with because it grows as very thin needles, making sample alignment very difficult. Kim described the process as "like trying to shine a light on the side of a chopstick". A collaborator on this research, Daniel Shoemaker (associate professor, MatSE, UIUC), was able to grow (TaSe4)2I crystals of substantially larger width, which enabled the application of THz emission spectroscopy to this material. "It is gratifying to see that a collective mode predicted many years ago is finally seen experimentally," commented Patrick Lee, the William & Emma Rogers Professor of Physics at MIT, and one of the pioneers of the theoretical works predicting the massive phason in charge density waves. "It speaks to the power of modern nonlinear optical techniques and the ingenuity of the experimentalists. The method is general, and we may see applications to other collective modes as well." At an applied level, generating narrowband radiation in the THz region of frequencies can be very difficult. However, given the strikingly narrow-bandwidth THz radiation resulting from the massive phason in (TaSe4)2I, the possibility of developing it (and other such materials) as a THz emitter is quite promising. The frequency and intensity of this THz emission can potentially be controlled by varying sample properties, or by applying external magnetic fields or strain. Mahmood summarizes, "This is the first known demonstration of a massive phason in a charge density wave material and settles the long-standing question of whether a charge density wave phason acquires mass by coupling to long-range Coulomb interactions. This is a major result that will have a profound impact on the field of strongly correlated materials, and in the understanding of the interplay between interactions, density-wave ordering and superconductivity in materials."
10.1038/s41563-023-01504-5
Biology
Shedding light on the origin of complex life forms
Christa Schleper, Actin cytoskeleton and complex cell architecture in an Asgard archaeon, Nature (2022). DOI: 10.1038/s41586-022-05550-y. www.nature.com/articles/s41586-022-05550-y Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-05550-y
https://phys.org/news/2022-12-complex-life.html
Abstract Asgard archaea are considered to be the closest known relatives of eukaryotes. Their genomes contain hundreds of eukaryotic signature proteins (ESPs), which inspired hypotheses on the evolution of the eukaryotic cell 1 , 2 , 3 . A role of ESPs in the formation of an elaborate cytoskeleton and complex cellular structures has been postulated 4 , 5 , 6 , but never visualized. Here we describe a highly enriched culture of ‘ Candidatus Lokiarchaeum ossiferum’, a member of the Asgard phylum, which thrives anaerobically at 20 °C on organic carbon sources. It divides every 7–14 days, reaches cell densities of up to 5 × 10 7 cells per ml and has a significantly larger genome compared with the single previously cultivated Asgard strain 7 . ESPs represent 5% of its protein-coding genes, including four actin homologues. We imaged the enrichment culture using cryo-electron tomography, identifying ‘ Ca . L. ossiferum’ cells on the basis of characteristic expansion segments of their ribosomes. Cells exhibited coccoid cell bodies and a network of branched protrusions with frequent constrictions. The cell envelope consists of a single membrane and complex surface structures. A long-range cytoskeleton extends throughout the cell bodies, protrusions and constrictions. The twisted double-stranded architecture of the filaments is consistent with F-actin. Immunostaining indicates that the filaments comprise Lokiactin—one of the most highly conserved ESPs in Asgard archaea. We propose that a complex actin-based cytoskeleton predated the emergence of the first eukaryotes and was a crucial feature in the evolution of the Asgard phylum by scaffolding elaborate cellular structures. Main Soon after the discovery of archaea as a separate lineage besides bacteria, molecular and phylogenetic studies suggested that there is a deep common evolutionary descent between archaea and eukaryotes 8 , 9 . However, only recently has the discovery of the first Lokiarchaeota 10 (now Lokiarchaeia 11 ) and the wider superphylum of Asgardarchaeota in metagenomic analyses 1 , 2 , 11 , 12 , 13 , 14 , 15 , 16 , 17 corroborated a distinct relationship and a possible direct emergence of eukaryotic cells from archaea. In fact, eukaryotes form a direct sister group to Asgardarchaeota or even arise within Asgardarchaeota in most phylogenomic analyses 3 , 10 , 18 . Compellingly, all members of the Asgardarchaea carry an extensive repertoire of genes that were originally assumed to be unique to eukaryotes (ESPs) 1 , 2 , 3 , 10 , 19 , 20 . These ESPs are mostly associated with features of cells with high complexity, such as cytoskeleton formation, transport and the shaping of membranes. For example, the observation that Asgard archaeal genomes encode a complete and functional ubiquitin-coupled ESCRT system 10 , 21 suggested the possibility of elaborate intracellular membrane compartments 10 . Another notable example is genes encoding several close homologues of eukaryotic actin. While F-actin-like assemblies have been identified in other archaea 22 , 23 , Asgard archaea also possess actin-related proteins (ARPs), as well as actin-binding proteins. Notably, Asgard profilins and gelsolins were found to be able to modulate the dynamics of eukaryotic actin 5 , 19 , 24 , 25 , 26 , indicating the existence of an elaborate and dynamic cytoskeleton 4 . However, the in situ structures and functions of archaeal actins remain unclear. 
A seminal study 7 presented the first enrichment culture of an Asgard archaeon, ‘ Candidatus Prometheoarchaeum syntrophicum’, which grows slowly to low cell densities in syntrophic consortia with molecular-hydrogen-consuming organisms. As ‘ Ca. P. syntrophicum’ cells show long branched protrusions, the authors proposed a hypothesis for eukaryogenesis, in which a primordial Asgard archaeon closely interacts with the predecessor of the bacterial endosymbiont and eventually endogenizes it 7 , 27 . These observations were consistent with the stepwise mechanism of eukaryogenesis that was first proposed as the ‘inside out’ model 28 . The role of Asgard ESPs could so far not be investigated in the natural host, making it difficult to further test these conceptual models. Although ‘ Ca. P. syntrophicum’ was shown to transcribe ESPs, characteristics regarding its intracellular architecture could not be revealed. Fundamental questions regarding the presence of a cytoskeleton or internal compartmentalization in Asgard archaea remain unclear, as does the structure of the cell envelope. Here we combine the enrichment of an experimentally tractable Asgard archaeon with state-of-the-art imaging to reveal its cellular architecture at macromolecular detail. A Lokiarchaea culture from sediment Considering that lokiarchaeal organisms and other Asgard archaea can be found in a variety of anoxic and often marine environments 29 , 30 , we screened DNA from shallow-water sediment from different locations for the presence of 16S rRNA genes of Asgard archaea to select suitable and easily reachable sampling sites for establishing enrichments. Sediments from a small estuarine canal that regularly receives water from the Mediterranean near the coast of Piran, Slovenia, were found to have the highest relative abundance in the 13–16 cm depth layer, exhibiting up to 4% Asgard archaea 16S rRNA genes in amplicon sequencing (Extended Data Figs. 1 and 2 ). The identified sample was used to inoculate enrichment cultures (Fig. 1a ) with media of different compositions and various headspace conditions (Supplementary Table 1 ). With periodic monitoring using quantitative PCR (qPCR) with Lokiarchaea-specific primers (Extended Data Fig. 3 ), growth could be observed after 140 days at 20 °C in serum flasks containing sterile-filtered water from the original source supplemented with complex organics (casein hydrolysate, milk powder and amino acids). However, after two transfers under these conditions, growth could no longer be detected and a second round of screening with different medium compositions was performed. Using a modification of the medium MK-D1 reported for the cultivation of ‘ Ca. P. syntrophicum’ 7 , cell growth recovered, and abundances repeatedly reached 2–8%. However, higher enrichments were not achieved under these conditions. Only by developing a minimal medium, mostly by reducing the input of organic carbon sources to a single compound and by increasing antibiotic concentrations, did lokiarchaeal relative abundances reach between 25% and 80% after several transfers. The highest enrichments were achieved in minimized lokiarchaeal medium (MLM) with casein hydrolysate (Fig. 1a ), while growth was also observed with either tryptone, peptone, milk powder, single amino acids, glucose or pyruvate (Supplementary Table 1 ). Fig. 1: Enrichment and cultures of Loki-B35. a , Schematic of our cultivation approach.
A sediment core fraction was used as an inoculum for cultivation in sterile-filtered environmental water from the sampling site supplemented with complex organic compounds. The enrichment was then transferred to modified MK-D1 medium 7 . Enrichments of up to 79% were obtained when the cultures were transferred to MLM supplemented with casein hydrolysate. AB, antibiotics. The figure was created using BioRender. b , The composition of the culture with the highest enrichment as assessed by 16S rRNA gene amplicon sequencing. c , Growth curves ( n = 4) of Loki-B35 in MLM (80:20 N 2 :CO 2 ) supplemented with casein hydrolysate (growth was quantified by qPCR), indicating maximum cell densities of 2.5 × 10 7 per ml and generation times of about 7–14 days. Full size image Amplicon sequencing analyses of 16S rRNA genes revealed that the culture with the highest enrichment (Loki-B35) consisted of three dominant and two minor species: a single Lokiarchaeon sequence (79%), a sulfate-reducing bacterium of the Desulfovibrio genus (10%), a hydrogenotrophic methanogen of the Methanogenium lineage (6%), as well as a Halodesulfovibrio and a member of the Methanofastidiosales genus (both at around 2%) (Fig. 1b and Extended Data Fig. 1 ). Notably, both Halodesulfovibrio and Methanogenium were also syntrophic partners in the enrichment cultures of ‘ Ca. P. syntrophicum’ 7 , which stems from a geographically different and deep-sea environment. Thus, it seems probable that both Lokiarchaea rely on a similar metabolism, involving the fermentation of peptides to H 2 and/or small organic acids. Loki-B35 grows without lag phase to maximum cell densities of up to 2.5 × 10 7 cells per ml, sometimes even 5 × 10 7 cells per ml within 50 to 60 days (Fig. 1c ) when started with a 10% inoculum. However, extremely long lag phases of 90 to 120 days were observed when the inoculum originated from stationary cultures. The generation time was estimated to be 7 to 14 days. Compared with the deep-sea Lokiarchaeon ‘ Ca. P. syntrophicum’ (maximum cell densities of 10 5 cells per ml, generation time of 14–25 days) 7 , Loki-B35 grows considerably faster and to higher cell densities. Loki-B35 genome analyses and phylogeny The genome of Loki-B35 was assembled into one contig based on short- and long-read sequencing. It contains 6,035,313 base pairs (bp), encoding 5,119 predicted proteins, three 16S and 23S rRNA gene copies (two of them in operons) and 34 tRNAs (Fig. 2a ). The presence of only one rRNA operon in the closed genomes of ‘ Ca. P. syntrophicum’ and two Heimdallarchaea 7 , 31 , but two operons in the high-quality assembly of the Lokiarchaeota genome ‘ Candidatus Harpocratesius repetitus’ FW102 (ref. 31 ) and even three rRNA operons in Loki-B35, indicates that there is variability that may be based on the strains’ generation time or flexibility to adapt to changing environmental conditions 32 , 33 . Fig. 2: Genome analysis and phylogenetic placement of ‘ Ca. L. ossiferum’. On the basis of the analysis of the closed genome of enrichment Loki-B35, we propose a description of the species ‘ Ca. L. ossiferum’. a , The characteristics of the genome of ‘ Ca. L. ossiferum’ in comparison to ‘ Ca. P. syntrophicum’. Note the substantial difference in genome size. The values indicated by asterisks are the estimated values of contamination and completeness on the basis of the identification of marker genes performed by CheckM 53 ( Methods ).
b , The diagram shows, to scale, the number of shared clusters of orthologous proteins between ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ as well as genome-specific clusters. A more detailed analysis is provided in Extended Data Fig. 4 . c , A comparison of the occurrence of ESPs in ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’. Annotation of ESPs was performed according to the asCOG database 2 ; general functional categories (on top) were added similarly to earlier assignments 1 . Note that ‘ Ca. L. ossiferum’ is enriched for ESPs of the following protein families associated with trafficking machineries: adaptin N heat repeats domain; Arf-like GTPase; Gtr1/RagA GTPase; longin domain; NPRL2-like longin domain. d , e , Maximum-likelihood (ML) phylogenies based on the concatenation of 23 universally conserved ribosomal proteins 2 . d , Tree of life showing Eukarya as a sister clade of Asgardarchaeota. e , ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ belong to the same Lokiarchaeia class. Taxonomic assignments are based on the recently proposed classification 11 . Branch supports were calculated with 1,000 ultrafast bootstrap samples. The values in square brackets show the genome sizes of complete genomes (bold) and MAGs within Lokiarchaeia. Full size image Compared with ‘ Ca. P. syntrophicum’, the genome of Loki-B35 is significantly larger (by approximately 1.6 Mb), which is also reflected by 2,256 unique proteins (Fig. 2b ). Functional annotation of orthogroups revealed that genes of Loki-B35 are enriched in almost all categories, reflecting its overall larger genome size (Extended Data Fig. 4 and Supplementary Table 2 ). Notably, the fraction of proteins representing ESPs scales approximately with genome size, representing 5.5% in ‘ Ca. P. syntrophicum’ (218 ESPs) and 5% in Loki-B35 (258 ESPs; calculated according to the most recent asCOG database 2 ) (Supplementary Table 3 ). Compared to ‘ Ca. P. syntrophicum’, the genome of Loki-B35 is particularly enriched for genes associated with membrane trafficking and protein transport (Fig. 2c ). The sequence similarity of the 16S rRNA genes of Loki-B35 and ‘ Ca. P. syntrophicum’ is 95.3% and the common orthologous proteins share 58.4% amino acid identity, which justifies a separation into different genera 34 , 35 . This separation is supported by the large number of unique proteins, which represent 47.7% of the complete predicted protein set. The evolutionary distance between the two cultivated Lokiarchaea also becomes evident in a gene synteny comparison, which shows a high number of rearranged genes and genome-specific regions distributed throughout the genomes (Extended Data Fig. 5 ). In a universal phylogeny based on 23 conserved ribosomal proteins from 291 representative species of all three domains of life, all Asgard archaea formed a monophyletic group with eukaryotes as their direct sister lineage (Fig. 2d ). The Asgard phylogeny (Fig. 2e ), based on 168 genomes, clearly separates all described classes of the phylum and is consistent with other recent phylogenomic analyses 1 , 2 , 13 , although inner branching nodes vary depending on differences in the analyses and datasets. Loki-B35 together with ‘ Ca. P. syntrophicum’ and three other lokiarchaeal metagenome-assembled genomes (MAGs) formed one of two major sublineages within the class Lokiarchaeia (Fig. 2e ). On the basis of these analyses, we propose a new genus and species, with the name ‘ Ca. L. ossiferum’ strain Loki-B35 (see below for etymology).
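As a toy illustration of the identity comparisons quoted above (16S rRNA similarity and shared-ortholog amino acid identity), the sketch below computes percent identity over the ungapped columns of a pairwise alignment. It is not the authors' pipeline: real analyses would first align full-length genes or proteins, and the fragments here are hypothetical.

```python
# Toy sketch (not the authors' pipeline): percent identity over the ungapped
# columns of a pre-aligned sequence pair, as used for genus-level comparisons.
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity over columns where neither sequence has a gap ('-')."""
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != '-' and b != '-']
    if not pairs:
        return 0.0
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical aligned 16S fragments, for illustration only.
print(percent_identity("ACGGTT-ACGTAGGC", "ACGATTGACGTAGCC"))
```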
Identification of ‘ Ca. L. ossiferum’ cells Our next goal was to identify individual ‘ Ca. L. ossiferum’ cells and characterize them using microscopy. Fluorescence in situ hybridization (FISH) with specific probes showed that ‘ Ca. L. ossiferum’ cells appeared as spheres of variable cell size (0.3–1.0 µm), being considerably smaller than other organisms in the enrichment (Fig. 3a ). Cell counts from FISH analysis (not shown) confirmed the relative abundance of up to almost 80% in our highest enrichments as seen by amplicon sequencing (Fig. 1b ). Fig. 3: Identification of ‘ Ca. L. ossiferum’ cells in the enrichment culture. a , Hybridization chain reaction-FISH analysis of the enrichment culture stained with DAPI (cyan) and nucleotide probes targeting the major species of the culture, that is, Lokiarchaea cells (red; the sample on the left was 70× concentrated), bacteria (green) and Methanomicrobiales (purple). The FISH experiments were performed five independent times with similar results. Scale bars, 2 µm. b , Low-magnification 2D cryo-electron micrographs of the three major cell types that were observed after screening of the enrichment culture ( n = 2 independent cultures), showing a putative ‘ Ca. L. ossiferum’ cell with a round cell body and complex cell protrusions (left), a Gram-negative bacterial cell (middle) and an archaeal cell (right). Scale bars, 1 µm. c , Slices through cryo-tomograms of all three organisms shown in b (slice thickness, 9.02 nm), detailing the characteristic cell envelope architecture of the three species. Putative Lokiarchaea show small and unordered surface densities (sd a ) and complex surface proteins (sd b ) protruding from a single membrane. cm, cytoplasmic membrane; cp, cytoplasm; om, outer membrane; pp, periplasm; sl, surface layer. Scale bars, 100 nm. d , Identification of ‘ Ca. L. ossiferum’ by Asgard-specific rRNA structures. Left, a sub-tomogram average (11.7 Å resolution) of ribosomes from cryo-tomograms of putative lokiarchaeal cells (large-subunit proteins (LSU), blue; small-subunit proteins (SSU), orange; rRNA, white). Middle, secondary structure prediction of the ‘ Ca. L. ossiferum’ large-subunit rRNA (expansion segments ES9/ES39 are labelled). Right, a superposition of the average with a low-pass filtered (11 Å) map of the T. kodakarensis 70S ribosome (Protein Data Bank (PDB): 6SKF ; yellow). The ‘ Ca. L. ossiferum’ structure (white) shows prominent additional rRNA features that were identified as the Asgard-specific rRNA expansion segments ES9 and ES39. See also Extended Data Fig. 6 . Full size image We next plunge-froze cells of a live culture onto electron microscopy (EM) grids to image them in a near-native state using cryo-electron tomography (cryo-ET). A major challenge was the low cell density combined with the high fragility of the ‘ Ca. L. ossiferum’ cells, which did not allow us to perform any processing steps. Thus, instead of concentrating the samples, we performed an extensive screening of the grid by recording two-dimensional (2D) overview images, followed by cryo-ET data collection of selected cells. This approach revealed three general cell types that had distinct morphologies and cell envelope architectures (Fig. 3b,c ). One class consisted of round-shaped cell bodies associated with elaborate and heterogeneous protrusions. The other two cell types were rods and spherically shaped cells, respectively, without protrusions.
Their cell envelopes had canonical Gram-negative and archaeal cell envelope features, therefore probably representing co-cultured bacteria and archaea (Fig. 3c ). By contrast, the cell envelope of ‘ Ca. L. ossiferum’ candidates featured complex unordered densities protruding from a single membrane. To unambiguously identify ‘ Ca. L. ossiferum’ cells, we initially attempted to correlate FISH with cryo-ET. FISH, however, involves harsh sample preparation steps (chemical fixation, dehydration, permeabilization and high temperatures), which did not allow for the preservation of the fragile cellular ultrastructure. We therefore turned to an alternative approach and hypothesized that ‘ Ca. L. ossiferum’ cells could be identified by unique structural features of their ribosomes. Supersized eukaryote-like expansion segments (ES9 and ES39) have been reported in Asgard archaeal rRNAs but are absent from bacteria and other archaea 36 . We therefore computationally extracted 4,126 sub-tomographic volumes of ribosomes from 56 tomograms of ‘ Ca. L. ossiferum’ candidate cell bodies and protrusions and averaged them (Fig. 3d ). A comparison between the average and a high-resolution structure of a euryarchaeotal 70S ribosome revealed additional prominent densities, which we identified as the Asgard-specific expansion segments ES9 and ES39 by correlating large-subunit rRNA positions to the structural docking result (Fig. 3d and Extended Data Fig. 6c ). Importantly, expansion segments were not present in the large-subunit rRNA sequences of co-cultured species (Extended Data Fig. 6d ). The cell type exhibiting a characteristic variable morphology, cell bodies with long protrusions and a single membrane with an elaborate surface proteome was therefore identified as ‘ Ca. L. ossiferum’. These cells often appeared as individuals in FISH analyses (Extended Data Fig. 7a ) and electron microscopy imaging (Fig. 3b ). However, ‘ Ca. L. ossiferum’ was sometimes found in aggregates with co-cultured species (Extended Data Fig. 7b,c ). As these observations were rather infrequent, we assumed that, although nutrient exchange with co-cultured species may be necessary for the strain’s growth, persistent cell–cell contact seems not to be obligate throughout its life cycle. Complex and variable cell architecture Having established the identity and general appearance of ‘ Ca. L. ossiferum’ cells, we next aimed to analyse their overall organization by scanning EM. We identified small coccoid cells with surface-bound vesicles and extensive protrusions (Fig. 4a,b and Extended Data Fig. 7d ). In contrast to ‘ Ca. P. syntrophicum’, these long protrusions appeared more irregular, frequently branching or expanding into bulbous structures. Fig. 4: Complex and variable architecture of ‘ Ca. L. ossiferum’ cells. a , b , SEM imaging of fixed ‘ Ca. L. ossiferum’ cells showed small coccoid cells with extensive protrusions. Example micrographs from n = 2 independent cultures are shown. See also Extended Data Fig. 7d . For a and b , scale bars, 500 nm. c – f , Slices through cryo-tomograms ( c , e ; thickness, 9.02 nm) and the corresponding neural-network-aided 3D volume segmentations ( d , f ) of two different ‘ Ca. L. ossiferum’ cells. The insets in c and e show 2D overview images of the two different target cells. Cell bodies ( c , d ) and networks of protrusions ( e , f ) both contained ribosomes (grey arrowheads), cytoskeletal filaments (orange arrowheads) and complex surface densities (blue arrowheads).
Note that e and f show the same cell as in Fig. 3c . For c and e , scale bars, 100 nm (tomogram) and 1 µm (2D overview). g , h , Expanded views of slices from tomograms in c and e , showing ribosome chains, complex surface proteins and filaments (colour code as in c – f ) in a junction of a cell bridge ( g ) and a constricted part of the protrusion network ( h ). For g and h , scale bars, 100 nm. i – l , Slices through cryo-tomograms showing a putative chemoreceptor array ( i ; indicated by a white arrowhead) and different types of connections between cell bodies and protrusions ( j – l ). The coloured arrowheads indicate filaments and surface structures as defined for c – f . The white arrowheads in j indicate weak densities at the neck of the junction. Slice thickness, 9.02 nm ( j ) or 10.71 nm ( i and k – l ). For i – l , scale bars, 100 nm. Full size image To investigate the macromolecular organization of ‘ Ca. L. ossiferum’ cells, we extended our cryo-ET analysis. In contrast to data obtained using scanning electron microscopy (SEM), membranous protrusions of unfixed cells showed even more elaborate shapes (Extended Data Fig. 7e ). Some protrusions connected multiple larger cell bodies (Fig. 4c (inset)), reminiscent of cell bridges observed in other archaea 37 , 38 , 39 , 40 . Although the majority of cells exceeded the thickness limitation of cryo-ET, some were thin enough to visualize the internal architecture (Fig. 4c ) in 3D, enabling us to observe not only large protein structures such as ribosomes, but also numerous thin and sometimes bent filaments (convolutional neural network-aided segmentations are shown in Fig. 4d,f ; Supplementary Video 1 ). This network of filaments was even better resolved in some of the very thin (<100 nm) protrusions (Fig. 4e,f and Supplementary Video 1 ), which sometimes contained bundled filaments that connected different parts of the cell. Ribosomes were homogeneously distributed throughout the cell body, cell bridges and protrusions (Fig. 4c–f ), where they could sometimes be observed as membrane-associated chains (Fig. 4g ). Except for two instances (Extended Data Fig. 7f ), we did not observe internal membrane-bound compartments. Our extended cryo-ET dataset also revealed further insights into the structure of the cell envelope. The single membrane was not only decorated with a layer of small unordered densities, but also with a plethora of structures further protruding from the membrane (Fig. 4c–l (blue arrowheads)). Some densities connected different parts of the protrusion network (Fig. 4d,f ), whereas others formed elaborate assemblies that localized to regions of high membrane curvature (Fig. 4f,h ). On the cytoplasmic side of the membrane, we infrequently ( n = 2) observed putative chemoreceptor arrays (Fig. 4i ). Consistent with this observation, the genome of ‘ Ca. L. ossiferum’ encodes a set of chemotaxis proteins. Although absent from ‘ Ca. P. syntrophicum’, many of these are also present in other Lokiarchaeia and Heimdallarchaeia (in 27 out of 97 MAGs; Extended Data Fig. 8 ). The gene set includes 15 methyl-accepting chemotaxis proteins as well as cheA/B/C/D/R/W/Y (Supplementary Table 4 ). Together with the extensive repertoire of surface proteins, these may mediate cell–cell communication, interactions and motility. The unique cell envelope was mostly continuous between the cell body and protrusions, even though the transition zones showed high variability. Some appeared as stable junctions (Fig.
4j ) (with potential densities appearing to stabilize the ‘neck’), whereas others formed very thin constrictions (Fig. 4c,d ) or were only loosely attached (Fig. 4k,l ). Notably, single cytoskeletal filaments frequently traversed the junctions into the protrusions (Fig. 4l ). In a similar manner, filaments also connected different parts of the protrusion network, often extending across constricted membrane tubes (Fig. 4f ). These observations indicate that the cytoskeleton functions as a scaffold to maintain the elaborate cellular architecture of ‘ Ca. L. ossiferum’.

Lokiactin-based cytoskeleton

To resolve the identity of the most frequently observed cytoskeletal filament in cryo-tomograms (Figs. 4 and 5a ), we set out to determine its structure in situ and developed a workflow using helical reconstruction of 2D-projected sub-tomograms, followed by sub-tomogram averaging (Extended Data Fig. 9 ). 2D classification showed a two-stranded filament structure (Fig. 5b ). The final reconstruction at a resolution of 24.5 Å enabled us to determine the helical parameters (rise of 27.9 Å per subunit, twist of −167.7° per subunit), which are highly similar to those of eukaryotic F-actin but also to those of the archaeal Crenactin 23 . Importantly, the dimensions of F-actin 41 and Crenactin 23 filaments would be consistent with our reconstructed map (Fig. 5c ).

Fig. 5: Lokiactin is involved in cytoskeleton formation. a , Slice through a cryo-tomogram showing a cytoskeletal filament inside a protrusion at higher magnification. Slice thickness, 5.36 nm. Scale bar, 100 nm. b , Filament segments were extracted from cryo-tomograms for structural analysis. 2D classes that were obtained after helical reconstruction of 2D-projected filament particles are shown, indicating a twisted double-stranded architecture. Box size, 34.3 × 34.3 nm. See also Extended Data Fig. 9 . c , Sub-tomogram average (24.5 Å resolution) of the cytoskeletal filament, displaying helical parameters highly similar to those of eukaryotic F-actin and archaeal Crenactin. Structural docking shows that an F-actin-like filament is consistent with the reconstructed map. See also Extended Data Fig. 9 . Scale bar, 50 Å. d , Maximum-likelihood phylogenetic tree of actin family proteins. The ‘ Ca. L. ossiferum’ genome encodes four homologues. One homologue (GenBank: UYP47028.1 ) clusters together with other Asgard archaeal Lokiactins (group indicated by orange arrowhead), which form a sister group to eukaryotic actin. The three other homologues (from top to bottom: GenBank UYP47647.1 , UYP44711.1 and UYP44126.1 ) cluster with other Asgard archaeal and eukaryotic ARPs (groups indicated by the black arrowheads). The tree was rooted with the MreB protein family. CR 4, subgroup from within Lokiarchaeia. e , f , Lokiactin is expressed in the enrichment culture. e , Transcription of the four actin homologues was analysed using RT–qPCR in two enrichment cultures, indicating the highest levels of transcripts for Lokiactin. The mean expression levels normalized to Lokiactin are shown. The error bars indicate the s.d. of three technical replicates. f , Expression was also detected using western blotting analysis of a cell lysate obtained from the enrichment culture (representative result from n = 2 independent samples). Gel source data are provided in Supplementary Fig. 1 . Two antibodies (ab.) raised against different ‘ Ca. L. ossiferum’ Lokiactin-specific peptides were used. g , h , Immunofluorescence staining of ‘ Ca. L.
ossiferum’ cells with two different Lokiactin-specific antibodies analysed using Airyscan ( g ) or stimulated emission depletion (STED) ( h ) imaging (representative images of n = 3 ( g ) or n = 2 ( h ) independent preparations). The distribution of fluorescent signal indicates the presence of Lokiactin-based cytoskeletal structures in cell bodies and protrusions, consistent with observations from cryo-tomograms. The top row of g shows single slices of the fluorescent DNA signal (blue, Hoechst stain, LSM-Plus-processed confocal) and the Alexa Fluor 647-labelled secondary antibodies (red/orange, jDCV-processed Airyscan). The bottom row of g shows the minimum intensity z -projections of the transmitted light channel to indicate the cell shape. The control (right column) was probed with only secondary antibodies (the contrast in the top row was adjusted equally). The images in h show single slices of representative deconvolved STED images detecting Lokiactin (red/orange, abberior STAR 580-labelled secondary antibodies) and DNA (blue, SPY505-DNA). For g and h , scale bars, 1 µm. Full size image

The ‘ Ca. L. ossiferum’ genome contains four homologues of eukaryotic actin. One of these homologues clusters inside a group of Lokiactins, which form a sister group to bona fide eukaryotic actin and most ARPs in phylogenies (Fig. 5d ). The remaining three homologues cluster with other ARPs from eukaryotes and Asgard archaea. Their phylogenetic patterns and distribution indicate a complex evolution within Asgard archaea, which seems to be shaped by gene losses, gains and duplications. By contrast, Lokiactin represents the most highly conserved group of actin homologues. It is found in all Asgard lineages 5 , implying that it was already present in the last common ancestor of the entire phylum (Fig. 5d ). To test whether the actin-like cytoskeletal filament in ‘ Ca. L. ossiferum’ contained Lokiactin, we first measured the expression levels of all actin homologues using qPCR with reverse transcription (RT–qPCR) and found that Lokiactin expression was severalfold higher (Fig. 5e ). We next generated two antibodies that were raised against Lokiactin-specific peptides. Using these antibodies, expression was detected in lokiarchaeal cell lysates by western blotting (Fig. 5f ) and we observed staining throughout ‘ Ca. L. ossiferum’ cells in immunogold-labelling experiments (Extended Data Fig. 10 ). In immunofluorescence experiments, both antibodies revealed filamentous signals in the cell bodies and particularly also in protrusions, consistent with the abundant distribution of filaments observed in cryo-tomograms (Fig. 5g,h ). We therefore conclude that ‘ Ca. L. ossiferum’ possesses a complex Lokiactin-based cytoskeleton. As ‘ Ca. L. ossiferum’ encodes eight gelsolin-like and three profilin-like proteins that have been shown to affect actin polymerization dynamics 4 , 5 , 19 , 24 , 25 , we hypothesize that the assembly of the complex Lokiactin cytoskeleton in ‘ Ca. L. ossiferum’ is probably dynamically regulated by actin-binding proteins.

Conclusion

In conclusion, our comparatively fast-growing lokiarchaeal culture enabled us to study its cell architecture in a near-native state. We discovered an elaborate actin-based cytoskeleton in Asgard archaea, which had long been hypothesized 4 , 5 , 6 , 42 , 43 , 44 but had not been visualized. The cytoskeleton is probably a hallmark structure of all Asgard archaea, as Lokiactin is conserved in genomes of all Asgardarchaeota classes.
This clearly differentiates Lokiactin from many other ESPs, whose complex evolutionary histories of gene gain, loss and lateral transfer have resulted in patchy distributions in Asgard and also other archaea 2 . The high degree of Lokiactin conservation indicates strong constraints on its function. It is therefore probable that a dynamic cytoskeleton, regulated by numerous actin-related and actin-binding proteins, had a substantial impact on the emergence, evolution and diversification of Asgard archaea. Our cryo-ET and immunofluorescence data revealed actin filaments in both the cell bodies and the protrusions. In the cell body, filaments are often found at the periphery, whereas in the protrusions they follow the longitudinal axis. We therefore propose that Lokiactin acts as a scaffold for the complicated cell architecture of Asgard archaea, similar to eukaryotic actin, which is a major determinant of eukaryotic cell shape 45 . The elaborate cell architecture with extensive membranous protrusions has multiple implications for Asgard physiology and ecology. These characteristic features make the cells highly fragile, which could also explain why the highest abundance of Asgard archaea is found in sediments 30 , 46 , 47 , 48 rather than in plankton. Importantly, our study established approaches that will enable imaging of Asgard archaea in a culture-independent manner in environmental samples, with the possibility of identifying Asgard cells in cryo-tomograms based on unique ribosomal RNA expansion segments. The large surface area of the convoluted network of protrusions, in combination with the unusual cell envelope lacking a highly ordered S-layer (as typically found in other archaea) but rather displaying numerous surface proteins, may have enabled the intricate cell–cell contacts required for eukaryogenesis that—considering the lifestyle of the two cultured Asgard strains—probably involved interspecies dependencies in syntrophic relationships 49 , 50 , 51 , 52 . These findings strongly support a gradual path of mitochondrial acquisition through protrusion-mediated cell–cell interactions, as proposed previously in the inside-out and E3 hypotheses 7 , 28 . Additional experimental data—in particular, from diverse Asgard archaea—will be needed to further test these models and exclude alternative views 42 , 43 . Importantly, the cell architecture may enable the compartmentalization of cellular processes even in the absence of the internal organelle-like membrane systems that had been hypothesized based on the genomes of the first Asgard archaea 10 . Finally, our enrichment culture will serve as a model system to study the peculiar cell biology of Asgard archaea, as it grows to cell densities and purities that make it experimentally tractable. The intricate cell architecture of ‘ Ca. L. ossiferum’ suggests elaborate mechanisms for processes such as macromolecular trafficking and assembly, membrane shaping, cell division and spatiotemporal regulation of the cytoskeleton, many of which are probably mediated by ESPs that can now be studied in situ.

Etymology. Lokiarchaeum (Lo.ki.ar.chae’um. N.L. neut. n. archaeum (from Gr. masc. adj. archaios , ancient), an archaeon; N.L. neut. n. Lokiarchaeum , an archaeon named after Loki, a god in Norse mythology). ossiferum (os.si’fe.rum. L. neut. pl. n. ossa , skeleton; L. v. fero , to carry; N.L. neut. adj. ossiferum , skeleton-carrying).
The name describes a skeleton-carrying archaeon of the provisional class Lokiarchaeia 11 within the Asgardarchaeota phylum.

Locality. Isolated from a shallow sediment of an estuarine canal in Piran, Slovenia.

Diagnosis. Anaerobic archaeon of the Asgardarchaeota phylum that grows in enrichments with H 2 -consuming bacteria and archaea at 20 °C on organic medium. Maximal cell densities reach 5 × 10 7 cells per ml at relative enrichments of up to 79%. Cells grow extensive, often branched protrusions with blebs and exhibit complex, irregular surface structures. The genome is 1,607,517 bp larger than that of the closest relative ‘ Ca. P. syntrophicum’ and shows 58.4% similarity at the amino acid level.

Methods

Sample collection and enrichment cultivation

Sediment core samples (length, approximately 60 cm) were retrieved from a shallow brackish canal (at 40 cm water depth, 19.5 °C, pH 8) near the coast of Piran, Slovenia (45° 29′ 46.1′′ N 13° 36′ 10.0′′ E) on 21 April 2019. The sediment core was cut inside an anaerobic tent (N 2 atmosphere) at intervals of 3 cm, with each fraction being placed inside 50 ml conical sterile centrifuge tubes, sealed and stored at 4 °C. Around 0.5 g of sediment from the different fractions was used for DNA extraction and subsequent qPCR assays targeting lokiarchaeal 16S rRNA genes (qPCR conditions are provided below). The sediment layer at 13–16 cm depth, which had the highest number of lokiarchaeal 16S rRNA gene copies per ml relative to total DNA content (Extended Data Fig. 2 ), was selected for further use in cultivation, which was performed in 120 ml serum bottles sealed with butyl rubber stoppers. A total of 2 g of sediment was used as inoculum and different medium and headspace conditions were tested; growth was monitored using qPCR assays as described above. After 140 days, growth was observed in cultures inoculated in 50 ml of sterile-filtered brackish water from the canal supplemented with casein hydrolysate (0.1% (w/v)), 20 amino acids (0.1 mM each) and milk powder (0.1%), incubated at 20 °C under an atmosphere of 80:20 N 2 :CO 2 (0.3 bar). To limit bacterial growth, cultures were also supplemented with ampicillin, kanamycin and streptomycin (50 µg ml −1 each). The cultures were transferred into fresh medium whenever exponential growth could be detected (every 1–3 months). After two transfers, growth could no longer be observed under this set-up and the next transfer was performed in modified ‘ Ca. P. syntrophicum’ MK-D1 medium 7 . After three transfers under these conditions, the medium was reduced to a minimal composition to further limit bacterial growth and, after five additional transfers in minimal medium, high enrichments were achieved. Eventually, the growth medium composition (per litre) was as follows: 20.7 g NaCl, 5 g MgCl 2 ·6H 2 O, 2.7 g NaHCO 3 , 1.36 g CaCl 2 ·2H 2 O, 0.54 g NH 4 Cl, 0.14 g KH 2 PO 4 , 0.03 g Na 2 S·9H 2 O, 0.03 g cysteine·HCl, 0.5 ml of acid trace element solution, 0.5 ml of alkaline trace element solution, 1 ml Se/W solution and 0.1% casein hydrolysate (w/v). The acid trace element solution contained (per litre): 1.491 g FeCl 2 ·4H 2 O, 0.062 g H 3 BO 3 , 0.068 g ZnCl 2 , 0.017 g CuCl 2 ·H 2 O, 0.099 g MnCl 2 ·4H 2 O, 0.119 g CoCl 2 ·6H 2 O, 0.024 g NiCl 2 ·6H 2 O and 4.106 ml of HCl (37%). The alkaline trace element solution contained (per litre): 0.017 g Na 2 SeO 3 , 0.033 g Na 2 WO 4 , 0.021 g Na 2 MoO 4 and 0.4 g NaOH. The medium pH was adjusted to 7.5 and the medium contained ampicillin, kanamycin and streptomycin (200 µg ml −1 each).
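As a quick plausibility check on such a recipe, the mass-based composition can be converted to molar concentrations. The short Python sketch below does this for the main salts; the molar masses come from standard tables and the script itself is our illustration, not part of the published protocol.

# Convert the g-per-litre medium recipe above into approximate molarities.
# Molar masses (g/mol) are taken from standard tables for the hydrates used.
recipe_g_per_l = {
    "NaCl": 20.7, "MgCl2.6H2O": 5.0, "NaHCO3": 2.7, "CaCl2.2H2O": 1.36,
    "NH4Cl": 0.54, "KH2PO4": 0.14, "Na2S.9H2O": 0.03,
}
molar_mass = {
    "NaCl": 58.44, "MgCl2.6H2O": 203.30, "NaHCO3": 84.01, "CaCl2.2H2O": 147.01,
    "NH4Cl": 53.49, "KH2PO4": 136.09, "Na2S.9H2O": 240.18,
}
for salt, grams in recipe_g_per_l.items():
    mM = grams / molar_mass[salt] * 1000.0  # mol per litre -> mmol per litre
    print(f"{salt:>12s}: {mM:7.2f} mM")
# NaCl, for example, works out to ~354 mM, distinctly below full-strength
# seawater, consistent with the brackish origin of the culture.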
The headspace atmosphere was 80:20 N 2 :CO 2 (0.3 bar) and the cultures were incubated at 20 °C without shaking.

DNA extraction and growth monitoring using qPCR

A total of 2 ml of the cultures was sampled every 14 days and centrifuged for 30 min at 20,000 g at 4 °C. The supernatant was discarded and the resulting pellet was resuspended in 700 µl of SL1 buffer from the NucleoSpin Soil DNA extraction kit (Macherey-Nagel). The rest of the procedure was performed according to the manufacturer’s instructions. High molecular mass DNA for genome sequencing was extracted using a standard phenol–chloroform-based protocol. The DNA concentration was measured with the Qubit 2.0 Fluorometer (Invitrogen), using the dsDNA HS kit, according to the manufacturer’s instructions. Lokiarchaea-specific 16S rRNA gene primers (LkF, 5′-ATCGATAGGGGCCGTGAGAG and LkR, 5′-CCCGACCACTTGAAGAGCTG) were designed using the ARB tool 54 . All assays were performed in triplicate on the CFX Connect Real-Time PCR Detection System (Bio-Rad) and data were collected using CFX Maestro (v.2.3). Reaction mixtures (20 µl) contained: 1× Luna Universal qPCR Master Mix (New England BioLabs), 0.5 µM of each primer and 5–10 ng of template DNA. The cycling conditions were as follows: 95 °C for 1 min; then 40 cycles of 15 s at 95 °C and 1 min at 60 °C (for both annealing and extension), with a fluorescence reading after each cycle. Melting curves were generated by increasing the temperature from 60 °C to 95 °C, at 0.5 °C increments for 5 s with fluorescence readings after each increment. Lokiarchaeal 16S rRNA gene fragments amplified from sediment DNA were used as quantification standards. For quantification, triplicates of standard tenfold dilutions ranging from 10 to 10 8 copies were used in every assay. The efficiencies of these reactions varied from 90% to 100%, with R 2 values of >0.99. Primer specificity was confirmed through amplicon sequencing (Illumina MiSeq) of environmental DNA (Extended Data Fig. 3 ).

RNA extraction, cDNA synthesis and ARP RT–qPCR

From cultures, 20 ml was centrifuged at 20,000 g for 30 min at 4 °C. The pellet was then resuspended in 600 µl of the lysis/binding buffer from the mirVana miRNA isolation kit (Invitrogen). The rest of the procedure was performed according to the manufacturer’s instructions. Residual genomic DNA was removed by incubating the samples with TURBO DNase (Invitrogen) at 37 °C for 1 h. Lokiarchaeal 16S rRNA PCR tests were then used to confirm that no genomic DNA remained in the samples. cDNA was produced using the ProtoScript II First Strand cDNA Synthesis Kit according to the manufacturer’s instructions. Primers targeting Lokiactin (GenBank: UYP47028.1 ; 83F, 5′-GCAGGAGAAGATCAGCCTCG; 337R, 5′-AACCGGATGTTCGCTTGGAT) and the actin homologues UYP44126.1 (82F, 5′-TGGGGGAGAAAATGAGCCAC; 443R, 5′-GGCCCCACGAACAGGATAAT), UYP44711.1 (361F, 5′-CCCTCCCAGACATTGCACAA; 731R, 5′-TGCGGGATCGACAGAATCAG) and UYP47647.1 (305F, 5′-CAAGGCTGGATCCCTTCAGA; 591R, 5′-ATTGCGTGATATGGTGGCCT) were designed using the Geneious Prime 2021.2 software. A temperature-gradient PCR on culture DNA was used to determine the optimal annealing temperature; 60 °C was used for all primer pairs. The qPCR procedure and cycling conditions were the same as those used for lokiarchaeal 16S rRNA gene amplification. Amplified fragments from culture DNA were used as standards. The efficiencies of these reactions varied from 90% to 100%, with R 2 values of >0.99.
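The efficiency and R 2 figures quoted above follow directly from the standard-curve fit. The sketch below (Python; the Cq values are invented purely for illustration) shows the standard calculation: regress Cq on log10(copies), derive the amplification efficiency from the slope as E = 10^(−1/slope) − 1, and interpolate unknown samples from the fitted line.

import numpy as np

# Hypothetical standard curve: tenfold dilutions from 1e1 to 1e8 copies,
# with illustrative Cq values of the shape the assay above would produce.
copies = np.logspace(1, 8, 8)
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8, 16.5, 13.2, 9.9])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # 100% corresponds to a slope of ~-3.32
r_squared = np.corrcoef(np.log10(copies), cq)[0, 1] ** 2

def copies_from_cq(c):
    """Interpolate the copy number of an unknown sample from its Cq value."""
    return 10 ** ((c - intercept) / slope)

print(f"slope {slope:.2f}, efficiency {efficiency:.1%}, R^2 {r_squared:.3f}")
print(f"sample at Cq 21 -> {copies_from_cq(21.0):.2e} copies")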
Primer specificity was evaluated by melting curve analysis.

16S rRNA gene amplicon sequencing

Amplicon sequencing was performed by amplifying the extracted DNA using the general prokaryotic 16S rRNA gene targeting primer pair 515f (5′-GTGCCAGCMGCCGCGGTAA) and 806r (5′-GGACTACHVGGGTWTCTAAT) 55 ; the amplicons were then barcoded and sequenced at the Vienna BioCenter Core Facilities (VBCF) using the Illumina MiSeq (300 PE) platform. Raw reads were processed using cutadapt 56 to remove primer sequences, followed by sequence analysis using the QIIME2 pipeline 57 . In brief, the DADA2 algorithm was used to denoise the data as well as to remove low-quality reads and chimeras. Sequences with 100% sequence identity were clustered into amplicon sequence variants. Taxonomy of amplicon sequence variants was assigned using the SILVA database (release 138) with the q2-feature-classifier plugin 58 .

In situ DNA-hybridization chain reaction

Cells were fixed in growth medium with the addition of 2.5% formaldehyde for 2 h at room temperature. After this period, they were washed three times in 1× phosphate-buffered saline and then stored at −20 °C in a mixture of absolute ethanol and 1× phosphate-buffered saline (1:1). The oligonucleotide probes used in this study were the general bacteria-targeting EUB338 (ref. 59 ), the Methanomicrobiales-targeting MG1200 (ref. 60 ) and the Lokiarchaea-specific DSAG-Gr2-1142 (ref. 7 ) (Supplementary Table 5 ). The DNA-hybridization chain reaction procedure was performed as described previously 61 and the samples were imaged on the Eclipse Ni-U epifluorescence microscope (Nikon) using Gryphax (v.1.1.8.153). Images were processed using ImageJ 62 .

Illumina sequencing

For short-read sequencing, metagenomic DNA of three samples from different time points in our enrichment was shotgun sequenced using the NovaSeq 6000 (paired-end, 150 bp) platform at Novogene. Sequencing data were processed using Trimmomatic (v.0.36) 63 to remove Illumina adapters and low-quality reads (SLIDINGWINDOW:5:20). The trimmed reads were co-assembled using SPAdes (v.3.15.2) 64 with the k -mer length varying from 21 to 111 and the ‘--meta’ option. Contigs longer than 1,000 bp were binned using Binsanity 65 , MaxBin2 (ref. 66 ), MetaBAT 67 and CONCOCT 68 , followed by contig dereplication and binning optimization using the DAS tool 69 . Completeness and contamination of MAGs were evaluated using CheckM 53 and taxonomic affiliation was obtained using GTDB-Tk (v.1.5.0; classify_wf) 70 . A lokiarchaeal MAG (genome length, 6,008,683 bp; completeness, 90.9%; contamination, 7.9%) was identified.

Nanopore sequencing and base calling

Library construction for ONT sequencing was performed using the SQK-LSK109 ligation kit, followed by PromethION sequencing using FLO-PRO002 flowcells. Fast5 files were processed using the high-accuracy ONT basecaller Bonito (v.0.3.6) with the dna_r9.4.1 pretrained model (bonito basecaller --fastq dna_r9.4.1). Long-read demultiplexing and adapter removal was performed using Porechop (v.0.2.4) 71 . Reads shorter than 1 kb were removed using NanoFilt (v.2.8.0; NanoFilt -l 1000) 72 . We performed an initial metagenomic assembly of long reads using Flye (v.2.8.3-b1695) 73 with the ‘--meta’ option. On the basis of the taxonomic assignment of contigs using the GTDB-Tk (v.1.5.0) classify workflow (classify_wf) 70 , we identified a lokiarchaeal genome with a total length of 6,035,157 bp in a single non-circular contig.
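Chained together, the long-read steps above form a small pipeline. A minimal sketch in Python is shown below; the file and directory names are placeholders, the tool options are either those quoted above or standard flags, and exact invocations should be checked against each tool's documentation.

import subprocess

def run(cmd):
    """Run one pipeline step, failing loudly if the tool returns an error."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Basecalling with the pretrained model named above, writing FASTQ output.
run("bonito basecaller --fastq dna_r9.4.1 fast5_dir/ > reads.fastq")
# Demultiplexing and adapter removal.
run("porechop -i reads.fastq -o reads.trimmed.fastq")
# Length filtering: discard reads shorter than 1 kb.
run("NanoFilt -l 1000 < reads.trimmed.fastq > reads.filtered.fastq")
# Initial metagenomic long-read assembly.
run("flye --meta --nano-raw reads.filtered.fastq --out-dir flye_meta")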
To obtain the complete lokiarchaeal genome, the following approach was applied. Long reads mapping to the short-read-assembled MAG and the long-read-assembled non-circular contig were extracted using minimap2 (ref. 74 ) with the ‘-ax map-ont’ option. Mapped reads were converted to BAM format using samtools 75 , and bedtools bamtofastq 76 was used to obtain the reads in FASTQ format. Duplicated sequences were removed using the SeqKit 77 tool. After long-read deduplication, the remaining reads were assembled using Flye (--nano-raw) 78 , yielding a circular genome. Polishing and validation of the Loki-B35 genome were performed using four rounds of Pilon 79 as part of the validation pipeline for metagenomic assemblies proposed previously 80 .

Phylogenomic tree

We recovered a total of 167 good-quality Asgard archaea genomes (completeness over 80% and less than 10% contamination) from the NCBI and other public databases (Supplementary Table 6 ). To reconstruct the phylogenomic tree of life, we collected a total of 23 ribosomal protein markers (Supplementary Table 7 ) from a wide range of bacteria, non-Asgard archaea and eukaryotes (Supplementary Tables 8 and 9 ) from a previous study 2 . The identification and retrieval of the 23 ribosomal markers in our Asgard genome database was based on the proteome annotation performed using Prokka (v.1.14.6) 81 . Two phylogenomic trees were reconstructed in this study: (1) a tree of life with ribosomal markers from bacteria, eukaryotes, non-Asgard archaea and a subset of 94 Asgard genomes (Supplementary Table 6 ); and (2) a second tree using sequences only from Thermoproteota and the 168 Asgardarchaeota genomes present in our database. Ribosomal markers were aligned using the L-INS-i algorithm of MAFFT (v.7.427) 82 and trimmed with BMGE using the default parameters 83 . Both sets of ribosomal markers were concatenated independently, resulting in two multiple sequence alignments comprising 2,327 and 3,285 amino acid sites for the tree of life and the Thermoproteota plus Asgardarchaeota datasets, respectively. Phylogenomic reconstructions were performed with IQ-TREE 2 (ref. 84 ) under the LG + C20 + F + G model with 1,000 ultrafast bootstrap replicates using UFBoot2 (ref. 85 ).

Genome analysis and creation of orthogroups between ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’

Protein sequences of ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ were predicted using Prokka (v.1.14.6) 81 . The identified proteins were annotated using HMM searches against the Pfam database (v.34.0) 86 and KEGG Orthology was assigned using the BlastKOALA online server 87 . Proteomes of ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ were used as queries in BLASTp searches ( e = 1 × 10 −10 ) against the recently published collection of Asgard clusters of orthologous genes 2 to obtain the asCOG numbers and annotation. Best matches were selected according to the following criteria: the asCOG annotation of the best hit was assigned if the BLAST alignment covered more than 70% of the query and sequence identity was greater than 25%; when the best BLAST hits did not meet the 70% threshold, a lower query coverage of 30% and at least 25% sequence identity were used to annotate protein domains. The Asgard COG annotation was used to identify ESPs on the basis of the curated ESP set composed of 505 AsCOGs previously described 2 . Average amino acid identity between ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ was calculated using CompareM (v.0.1.2) with the ‘aai_wf’ workflow.
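The two-tier best-hit rule described above is easy to misread, so its decision logic is restated here as a small Python sketch; the hit-record fields are hypothetical, while the thresholds are the ones given in the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BlastHit:
    ascog: str             # asCOG identifier of the best BLASTp hit
    query_coverage: float  # fraction of the query covered by the alignment
    identity: float        # fractional sequence identity

def assign_ascog(hit: Optional[BlastHit]):
    """Two-tier annotation rule from the Methods: full asCOG assignment at
    >70% query coverage, domain-level annotation at >30% coverage; both
    tiers additionally require >25% sequence identity."""
    if hit is None or hit.identity <= 0.25:
        return None
    if hit.query_coverage > 0.70:
        return ("asCOG", hit.ascog)   # annotate the whole protein
    if hit.query_coverage > 0.30:
        return ("domain", hit.ascog)  # annotate a protein domain only
    return None

print(assign_ascog(BlastHit("asCOG00042", 0.85, 0.41)))  # ('asCOG', 'asCOG00042')
print(assign_ascog(BlastHit("asCOG00042", 0.45, 0.31)))  # ('domain', 'asCOG00042')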
Insertion sequence elements were identified using ISEScan 88 . Clusters of orthologous proteins between these two proteomes were identified using OrthoMCL (v.1.4) 89 according to the Synima pipeline 90 , based on all-versus-all protein alignments performed using BLASTp searches. Synima was also used to identify chains of syntenic genes between ‘ Ca. L. ossiferum’ and ‘ Ca. P. syntrophicum’ based on the DAGchainer software 91 . The genome synteny plot was constructed with Rideogram 92 .

Phylogeny of actin-family proteins

Actin-like proteins of Asgardarchaeota genomes were identified on the basis of BLASTp searches using Lokiactins 1 as queries against our curated set of Asgard genomes ( e = 1 × 10 −10 , sequence identity ≥ 25%). To retrieve a list of eukaryotic actin sequences, proteins were downloaded from UniProt based on the InterPro accession IPR004000, which corresponds to the actin domain present in the actin family. Proteins were clustered using CD-HIT 93 (-c 0.99 -d 0 -g 1) to remove highly similar sequences. Lokiactins were used as queries in BLASTp searches and only sequences with more than 25% identity were retained. Eukaryotic actins and Asgard actin-like proteins were aligned together with bacterial actin homologues involved in cell shape determination (MreB), magnetosome organization (MamK) and plasmid segregation (ParM), based on the structural information of homologues in the DASH database, using MAFFT (v.7.490; --dash --localpair --originalseqonly), followed by a trimming step with trimAl (-gappyout) 94 . The phylogenetic tree was reconstructed using IQ-TREE 2.0 (ref. 82 ) under the LG + R8 model selected by ModelFinder 95 .

SEM, conventional TEM and immunogold localization

For SEM, enrichment cultures of ‘ Ca. L. ossiferum’ were prepared as described previously 96 . SEM was performed using the Zeiss Auriga field-emission scanning electron microscope (Zeiss), operated at 2 kV. For conventional resin embedding of samples dedicated to transmission electron microscopy (TEM) or immunogold localization, the samples were high-pressure frozen and freeze-substituted using the Leica HPM 100 and AFS2 systems (Leica), respectively. The freeze substitution medium consisted of ethanol containing 0.5% glutaraldehyde, 0.5% formaldehyde and 0.5% uranyl acetate. The freeze substitution program, the subsequent embedding in Epon epoxy resin and immunogold localization with Lokiactin-specific primary antibodies (ab1, see below) were performed using the protocol described previously for Phaeodactylum tricornutum 97 . Immunogold controls for primary and secondary antibody specificity were performed using Vibrio harveyi and Pyrococcus furiosus . Immunogold particle statistics were estimated for the controls as well as for ‘ Ca. L. ossiferum’ (Supplementary Table 10 ). Transmission electron microscopy was performed using either the Zeiss EM 912 (Zeiss) system, operated at 80 kV and equipped with a Tröndle 2k × 2k slow-scan CCD camera (TRS, Tröndle Restlichtverstärker Systeme), or a JEOL F200 (JEOL), operated at 200 kV and equipped with a XAROSA 20 megapixel CMOS camera (EMSIS).

Generation of Lokiactin-specific antibodies and western blotting

The Lokiactin antibodies were raised against Lokiactin-specific peptides (ab1: CTFYTDLRVDPSEHPV; ab2: CSKNGFAGEDQPRSVF) and validated through ELISA assays using the services of Eurogentec. Peptide antibodies were designed by Eurogentec.
For western blotting, an aliquot of the culture was centrifuged at 20,000 g for 10 min at 4 °C; the pellet was washed once in base MLM medium without casein hydrolysate, then lysed and denatured in SDS loading buffer (Bio-Rad) containing 1% (v/v) beta-mercaptoethanol at 95 °C. The samples were run on 4–20% Tris-Glycine gradient gels (Bio-Rad), transferred to PVDF membranes, blocked with 5% milk powder in TBST and probed with primary and secondary (goat anti-rabbit HRP, Invitrogen, 31460) antibodies. Signals were detected by enhanced chemiluminescence (ECL).

Immunofluorescence imaging

An aliquot of the culture was immobilized on poly- l -lysine-coated coverslips and fixed with 4% formaldehyde under a nitrogen atmosphere. Coverslips were blocked and permeabilized with 3% (w/v) BSA and 0.1% (v/v) Triton X-100 and subsequently probed with primary (anti-Lokiactin, 1:100 or 1:500 diluted) and secondary antibodies (either donkey anti-rabbit AF647, Invitrogen A-31573, 1:500 diluted; or goat anti-rabbit abberior STAR 580, abberior ST580-1002, 1:200 diluted) and counterstained with 10 µg ml −1 Hoechst 33342 (Thermo Fisher Scientific, for Airyscan imaging) or SPY505-DNA (Spirochrome, for STED samples). Coverslips were mounted with Vectashield (Vector Laboratories, Airyscan samples) or Prolong Diamond (Thermo Fisher Scientific, STED samples). The samples were imaged using a Zeiss LSM900 with an Airyscan 2 detector and a ×63/1.4 NA oil-immersion objective. z -stacks of target cells were recorded using one confocal imaging track detecting the Hoechst signal and transmitted light, and a separate Airyscan track detecting the Alexa Fluor 647 signal. Confocal stacks were deconvolved using the Zeiss LSM Plus processing function and Airyscan images were processed with Zeiss joint deconvolution (jDCV, 15 iterations) in ZenBlue (v.3.5). Minimum-intensity projections of the transmitted light channel and extraction of single confocal/Airyscan slices were performed in Fiji 98 . STED images were acquired using a Leica SP8 STED equipped with a ×100/1.4 NA oil-immersion objective. DNA and Lokiactin signals were detected in stack-wise sequential imaging tracks. Images were deconvolved with Huygens Professional (v.22.04; Scientific Volume Imaging).

Cryo-ET sample preparation

Samples were removed from the culture under a nitrogen atmosphere and mixed with 10 nm BSA-coated gold beads at a 1:5 ratio. The sample was kept in a nitrogen atmosphere until plunge-freezing. Then, 3.5 µl of the sample was applied to glow-discharged copper EM grids (R2/1, Quantifoil), automatically blotted from the backside (using a Teflon sheet on one side) 99 for 5–7 s and plunged into liquid ethane/propane 100 using the Vitrobot Mark IV (Thermo Fisher Scientific) 101 .

Cryo-ET data collection

Cryo-ET data were collected on a Titan Krios G4 (Thermo Fisher Scientific) system operating at 300 kV, equipped with a BioContinuum imaging filter and a K3 direct electron detector (Gatan). Data acquisition was performed using SerialEM 102 , 103 . Owing to the low cell density of the non-concentrated sample, grids were first extensively screened using polygon montages at low magnification (×2,250). After identification of targets, tilt series were acquired using a dose-symmetric tilt scheme 104 , covering an angular range of −60° to +60° and a total electron dose of 140–160 e − Å −2 .
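In a dose-symmetric ('Hagen') scheme such as the one cited above, tilts are acquired starting at 0° and then alternating to increasingly high angles on both sides, so that the low-tilt, high-signal images receive the least accumulated dose. The sketch below (Python) generates one common variant of that order for the −60° to +60° range and 2° increment used here; the even split of the total dose across tilts is our simplifying assumption.

def dose_symmetric_order(max_tilt=60, step=2):
    """Tilt angles in dose-symmetric acquisition order: 0, +step, -step, ..."""
    order = [0]
    for angle in range(step, max_tilt + 1, step):
        order += [angle, -angle]
    return order

tilts = dose_symmetric_order()        # 61 tilts covering -60 to +60 at 2 deg
total_dose = 150.0                    # e-/A^2, middle of the 140-160 range
per_tilt = total_dose / len(tilts)    # assumes equal fluence for every tilt

print(f"{len(tilts)} tilts, ~{per_tilt:.1f} e-/A^2 each")
print("first acquisitions:", tilts[:7])   # [0, 2, -2, 4, -4, 6, -6]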
Tilt series were acquired either at a pixel size of 4.51 Å at the specimen level, using 2° angular increments between tilts and a target defocus of −8 µm, or at higher magnification (pixel size of 2.68 Å) with 3° angular increments and a defocus ranging from −3 to −6 µm (for sub-tomogram averaging of the ribosome and reconstruction of the cytoskeletal filament). 2D projection images shown in the Article were recorded at a magnification of ×2,250 (pixel size of 39.05 Å) and a target defocus of −200 µm.

Tomogram reconstruction, data processing and segmentation

Tilt series were drift-corrected using alignframes in IMOD 105 and 4×-binned tomograms were reconstructed by weighted back-projection in IMOD. To enhance the contrast for visualization and particle picking, tomograms were CTF-deconvolved and filtered using IsoNet 106 . 2D projection images were low-pass filtered using mtffilter in IMOD. Segmentations were generated in Dragonfly (Object Research Systems, 2022) as described previously 107 . In brief, IsoNet-filtered tomograms were further processed by histogram equalization and an unsharp filter, and a 5-class U-Net (with 2.5D input of 5 slices) was trained on 5–6 tomogram slices to recognize background voxels, filaments, membranes, cell surface structures and ribosomes. All neural-network-aided segmentations were cleaned up in Dragonfly, exported as binary TIFF files and converted to MRC format using tif2mrc in IMOD. Segmentations were visualized in ChimeraX 108 .

Sub-tomogram averaging of the ribosome

Sub-tomogram averaging of ribosomes was performed using RELION (v.4.0) 109 . The individual ribosome particles were manually picked using Dynamo 110 from 56 tomograms reconstructed at a binning factor of 4. The coordinates of particles and raw tilt series were imported into RELION (v.4.0) to generate pseudo sub-tomograms at a binning factor of 4 (4,126 particles). The particles were processed for 3D classification with a ribosome reference (Electron Microscopy Data Bank: EMDB-13448 ) that was low-pass filtered to 60 Å. The particles from the good classes (classes I and III in Extended Data Fig. 6 ) were processed for 3D auto-refinement at a binning factor of 2 and 3D local classification. The particles in the best 3D class (class II) were used for the 3D reconstruction at a binning factor of 1, and the resolution was further improved after three iterations of Tomo CTF refinement and frame alignment. The final structure of the ribosome at a resolution of 11.7 Å was reconstructed from 1,673 particles (Extended Data Fig. 6 ).

Identification of ‘ Ca. L. ossiferum’ by rRNA expansion segments

The ribosome sub-tomogram average was compared to a high-resolution structure of a euryarchaeotal ribosome (PDB: 6SKF , T. kodakarensis 111 ) using fitmap in ChimeraX to identify unique rRNA features. For visualization, the high-resolution structure was low-pass filtered to 11 Å resolution using molmap in ChimeraX. Large subunit rRNA secondary structures of ‘ Ca. L. ossiferum’ and T. kodakarensis were predicted using R2DT 112 to identify the position of Asgard-specific rRNA expansion segments 36 , and positions were mapped to the structural docking result to define ES9 and ES39 in the sub-tomogram average.

In situ reconstruction of cytoskeletal filaments

Filaments were analysed using a strategy similar to that described in previous reports 113 , 114 , summarized in Extended Data Fig. 9a .
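Before the step-by-step workflow, it is worth unpacking what the helical parameters reported in Fig. 5 imply geometrically: for a two-stranded actin-like filament, the deviation of the per-subunit twist from 180° sets how often the two strands cross over. The sketch below (Python) works this out for the Lokiactin values (rise 27.9 Å, twist −167.7°) alongside approximate F-actin comparison values (~27.5 Å and ~−166.7°; these comparison numbers are quoted from the structural literature from memory and should be treated as approximate).

def helix_stats(rise_A, twist_deg, name):
    """Crossover spacing of a helix that appears two-stranded: the strands
    cross over every 180 / (180 - |twist|) subunits along the filament."""
    subs_per_crossover = 180.0 / (180.0 - abs(twist_deg))
    crossover_A = subs_per_crossover * rise_A
    print(f"{name}: {subs_per_crossover:.1f} subunits per crossover, "
          f"~{crossover_A:.0f} A between crossovers")

helix_stats(27.9, -167.7, "Lokiactin filament")  # ~14.6 subunits, ~408 A
helix_stats(27.5, -166.7, "F-actin (approx.)")   # ~13.5 subunits, ~372 A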
In brief, individual filaments were manually picked from tomograms that were reconstructed at a binning factor of 4 using the filamentWithTorsion model in Dynamo. The filament segmentation was performed with an intersegment distance of 32.13 Å, resulting in a total of 10,031 segments from CTF-corrected and dose-weighted tomograms at a binning factor of 2. These segmented subvolumes were processed for the filament analysis (Extended Data Fig. 9 ):

(1) Each subvolume was first rotated to make the helical axis parallel to the z axis, based on the orientation calculated from particle coordinates (or derived from Dynamo), and then further centred by alignment with a featureless cylinder-like reference.

(2) Re-oriented subvolumes were further rotated 90° to orient particles parallel to the xy plane.

(3) The central slices of segments along the z axis were extracted and projected to generate a 2D projection dataset.

(4) 2D projection images were processed for RELION helical reconstruction analysis 115 with an actin reference ( EMDB-11976 ) low-pass filtered to 60 Å.

(5) The orientations of 2D projection images were mapped back into the corresponding subvolumes and the polarity of the filament was validated from the orientations of segments on the same filaments, while the centres of segments were refined in 3D space by aligning against the 3D model from 2D projection images using relion_refine. The polarity voting results were not applied during the reconstruction, as the resolution was not sufficient for clear polarity determination.

The segments with refined orientations were then processed for a second iteration of filament analysis, in which the reference in the RELION analysis was updated to the 3D model from 2D projection images. After three iterations of filament analysis, a 2D classification without sampling was performed in the last iteration to select only particles in 2D classes with visible features. The particles were then processed for 3D helical refinement, during which the helical parameters were optimized. A total of 3,161 2D projections from subvolumes were used to determine a reconstructed map at a resolution of 31 Å with refined helical parameters of 28.56 Å (rise per subunit) and −168.50° (twist per subunit). To further improve the map quality, we next performed sub-tomogram averaging using RELION (v.4.0) 109 (Extended Data Fig. 9a ). Individual filaments were manually picked from 56 tomograms in Dynamo and were segmented with an intersegment distance of 32.13 Å. The coordinates of 13,569 filament segments and the original raw tilt series were imported into RELION (v.4.0) to generate pseudo sub-tomograms at a binning factor of 2. The particles were processed for 3D auto-refinement without imposing helical symmetry and were then subjected to 3D classification without angular sampling. The particles in the good 3D classes (classes I and IV) were selected for the next round of 3D refinement, in which the helical parameters from the previously obtained 2D projection model were applied and optimized during refinement. The final structure at a resolution of 24.5 Å was reconstructed from 12,585 particles with the refined helical parameters imposed (rise = 27.9 Å, twist = −167.7°) (Fig. 5c and Extended Data Fig. 9 ).

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The ‘ Ca. L.
ossiferum’ genome sequence (accession CP104013) was uploaded to GenBank, under BioProject ID PRJNA847409 , BioSample accession SAMN28933922 . Sub-tomogram averages ( EMD-15987 – EMD-15988 ), representative tomograms ( EMD-15989 – EMD-15993 ) and the corresponding tilt series ( EMPIAR-11269 ) have been uploaded to the Electron Microscopy Data Bank and the Electron Microscopy Public Image Archive.
How did the complex organisms on Earth arise? This is one of the big open questions in biology. A collaboration between the working groups of Christa Schleper at the University of Vienna and Martin Pilhofer at ETH Zurich has come a step closer to the answer. The researchers succeeded in cultivating a special archaeon and characterizing it more precisely using microscopic methods. This member of the Asgard archaea exhibits unique cellular characteristics and may represent an evolutionary "missing link" to more complex life forms such as animals and plants. The study was recently published in the journal Nature. All life forms on earth are divided into three major domains: eukaryotes, bacteria and archaea. Eukaryotes include the groups of animals, plants and fungi. Their cells are usually much larger and, at first glance, more complex than the cells of bacteria and archaea. The genetic material of eukaryotes, for example, is packaged in a cell nucleus and the cells also have a large number of other compartments. Cell shape and transport within the eukaryotic cell are also based on an extensive cytoskeleton. But how did the evolutionary leap to such complex eukaryotic cells come about? Most current models assume that archaea and bacteria played a central role in the evolution of eukaryotes. A eukaryotic primordial cell is believed to have evolved from a close symbiosis between archaea and bacteria about two billion years ago. In 2015, genomic studies of deep-sea environmental samples discovered the group of the so-called "Asgard archaea", which in the tree of life represent the closest relatives of eukaryotes. The first images of Asgard cells were published in 2020 from enrichment cultures by a Japanese group. Scanning electron micrograph of a Lokiarchaeum ossiferum cell showing the long and complex cell protrusions. Credit: Thiago Rodrigues-Oliveira, Univ. Wien Asgard archaea cultivated from marine sediments Christa Schleper's working group at the University of Vienna has now succeeded for the first time in cultivating a representative of this group in higher concentrations. It comes from marine sediments on the coast of Piran, Slovenia, but is also an inhabitant of Vienna, for example in the bank sediments of the Danube. Because of its growth to high cell densities, this representative can be studied particularly well. "It was very tricky and laborious to obtain this extremely sensitive organism in a stable culture in the laboratory," reports Thiago Rodrigues-Oliveira, postdoc in the Archaea working group at the University of Vienna and one of the first authors of the study. Asgard archaea have a complex cell shape with an extensive cytoskeleton The remarkable success of the Viennese group to cultivate a highly enriched Asgard representative finally allowed a more detailed examination of the cells by microscopy. The ETH researchers in Martin Pilhofer's group used a modern cryo-electron microscope to take pictures of shock-frozen cells. One of the currently most popular evolutionary theories assumes that eukaryotes (including animals, plants and fungi) arose from the fusion of an Asgard archaeon with a bacterium. Credit: Florian Wollweber, ETH Zürich "This method enables a three-dimensional insight into the internal cellular structures," explains Pilhofer. "The cells consist of round cell bodies with thin, sometimes very long cell extensions. 
These tentacle-like structures sometimes even seem to connect different cell bodies with each other," says Florian Wollweber, who spent months tracking down the cells under the microscope. The cells also contain an extensive network of actin filaments thought to be unique to eukaryotic cells. This suggests that extensive cytoskeletal structures arose in archaea before the appearance of the first eukaryotes and fuels evolutionary theories around this important and spectacular event in the history of life. Future insights through the new model organism "Our new organism, called 'Lokiarchaeum ossiferum', has great potential to provide further groundbreaking insights into the early evolution of eukaryotes," comments microbiologist Christa Schleper. "It has taken six long years to obtain a stable and highly enriched culture, but now we can use this experience to perform many biochemical studies and to cultivate other Asgard archaea as well." In addition, the scientists can now use the new imaging methods developed at ETH to investigate, for example, the close interactions between Asgard archaea and their bacterial partners. Basic cell biological processes such as cell division can also be studied in the future in order to shed light on the evolutionary origin of these mechanisms in eukaryotes.
10.1038/s41586-022-05550-y
Biology
Human pharmaceuticals change cricket personality
Robin N. Abbey-Lee et al. Experimental manipulation of monoamine levels alters personality in crickets, Scientific Reports (2018). DOI: 10.1038/s41598-018-34519-z Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-34519-z
https://phys.org/news/2018-11-human-pharmaceuticals-cricket-personality.html
Abstract

Animal personality has been described in a range of species with ecological and evolutionary consequences. Factors shaping and maintaining variation in personality are not fully understood, but monoaminergic systems are consistently linked to personality variation. We experimentally explored how personality was influenced by alterations in two key monoamine systems: dopamine and serotonin. This was done using ropinirole and fluoxetine, two common human pharmaceuticals. Using the Mediterranean field cricket ( Gryllus bimaculatus ), we focused on the personality traits activity, exploration and aggression, whose repeatability we confirmed in our study. Dopamine manipulations explained little variation in the personality traits investigated, while serotonin manipulation reduced both activity and aggression. Due to limited previous research, we created a dose-response curve for ropinirole, ranging from concentrations measured in surface waters to human therapeutic doses. No ropinirole dose level strongly influenced cricket personality, suggesting our results did not come from a dose mismatch. Our results indicate that the serotonergic system explains more variation in personality than manipulations of the dopaminergic system. Additionally, they suggest that monoamine systems differ across taxa, and confirm the importance of the mode of action of pharmaceuticals in determining their effects on behaviour.

Introduction

Animal personality (i.e., consistent among-individual variation in behaviour) has been described in a broad range of species 1 , 2 . Despite research demonstrating that animal personality can have important ecological and evolutionary consequences 2 , 3 , 4 , the factors shaping and maintaining variation in personality are still poorly understood. Underlying genetic variation has been demonstrated in a number of species, but our understanding of the mechanisms translating genetic variation into personality variation is generally limited 5 , 6 . This calls for rigorous studies using experimental manipulations of different mechanistic pathways in order to understand how genetic variation is translated into personality variation 7 , 8 , 9 . Aspects of personality have been linked to monoaminergic systems 2 , 10 , 11 , including variation in metabolite levels, methylation, and gene polymorphisms for both dopamine and serotonin. Dopamine levels, polymorphisms and differential methylation of dopamine-associated genes are related to novelty-seeking and exploratory behaviour in mammals and birds 2 , 7 , 12 , 13 , 14 . Dopamine is also involved in the recovery of aggression after social defeat in insects 15 , 16 . Additionally, serotonin levels are negatively associated with aggressiveness in several species 2 , 10 , 17 , but positively related to activity and aggression in others 13 , 14 , 18 . Polymorphisms in serotonin transporter genes are related to aggression, anxiety, and impulsivity 2 . Such evidence suggests monoamines may be one of the mechanisms translating genetic variation into personality variation 7 , 8 . However, we cannot yet clearly describe the link between monoamines and behaviour, and further work exploring the causality of observed relationships between neuroendocrinology and personality is needed. We experimentally manipulated two key monoamine systems to determine their effect on personality. For our manipulations we used human pharmaceuticals: ropinirole, which alters the dopaminergic system, and fluoxetine, which alters the serotonergic system.
Ropinirole is a dopamine receptor agonist that has been linked to motor control and is prescribed to treat Parkinson’s disease and restless legs syndrome 19 , 20 . Fluoxetine is a selective serotonin reuptake inhibitor prescribed to treat depression and anxiety 21 . We used pharmaceuticals for our manipulations because they have known effects on human personality and behaviour, and as monoamine systems are evolutionarily conserved across taxa 22 , these compounds are good candidates for potentially explaining personality variation in other species. We used the Mediterranean field cricket because it is a model species for neuroethological studies 23 , 24 , 25 , has been shown to demonstrate personality 26 , and responds to monoamine manipulations 18 , 27 , 28 . Based on previous work, we predicted that both of our monoamine manipulations would affect personality by increasing cricket activity 29 , 30 and aggressiveness 15 , 18 , and that our serotonin manipulation would increase exploration tendency 13 , 14 . These three behaviours were chosen as they are consistent within individuals, describe personality types in a variety of species 31 , and are important to individual fitness 4 .

Results

All raw data can be found in Supplementary Table S1 . We confirmed that our behaviours were repeatable in our population by running repeatability analyses (for details see Methods below; Table 1 ); thus, they can be classified as personality traits. This allowed us to assay individuals a single time for other parts of our study. Table 1 Comparison of repeatability of cricket behavioural traits measured on 3 consecutive days (n = 24). Full size table

When comparing our dopamine-manipulated and unmanipulated individuals using linear models, there were no significant differences between groups in any of the measured behavioural responses (for details see Methods below; Table 2 ). Serotonin manipulation, on the other hand, did alter personality: manipulated individuals had lower activity (distance moved in a familiar environment) and lower aggression (they more often lost fights) than control individuals (Table 2 , Fig. 1 ). Exploration (statistically controlled for individual-level activity), however, was not influenced by our serotonin manipulation. Table 2 The influence of manipulation of monoamines (dopamine via ropinirole hydrochloride or serotonin via fluoxetine hydrochloride) on cricket personality (n = 144). Full size table Figure 1 The influence of manipulation of monoamines (dopamine via ropinirole hydrochloride, or serotonin via fluoxetine hydrochloride) on cricket personality (n = 144). Mean and standard error of raw data describing ( A ) Activity, distance moved in home environment (cm); ( B ) Exploration, distance moved in novel area (cm); ( C ) Aggression, winner of fight dyad, binomial. Full size image

We created a dose-response curve across six concentrations of ropinirole and used linear models to confirm that only intermediately low doses (1 µM and 33 µM) tended to increase aggressiveness relative to control individuals (for details see Methods below; Table 3 , Fig. 2 ). However, these results were only trends, confirming our observed lack of response to ropinirole in the main experiment. Table 3 Comparison of different ropinirole concentrations on cricket personality (n = 96). Full size table Figure 2 Dose-response curve for cricket response to ropinirole injections.
Mean and standard error of raw data describing ( A ) Activity, distance moved in home environment (cm); ( B ) Exploration, distance moved in novel area (cm); ( C ) Aggression, winner of fight dyad, binomial. Grey denotes the control group (concentration of zero) and black the range of ropinirole doses. Full size image

Discussion

We show that our manipulations of serotonin causally affected the personality traits investigated by making crickets less active and less aggressive compared to unmanipulated crickets. Our manipulation of dopamine did not result in altered personality, despite testing a wide range of doses. Our results add support to the often-suggested link between monoamine systems and personality 2 , 10 , 11 . Specifically, our study adds further evidence of a causal relationship between serotonin manipulations and behavioural responses. Therefore, our study provides evidence that monoamines can be an underlying mechanism for personality variation. Based on previous studies, we expected our manipulations of dopamine to also affect personality e.g. 13 , 14 , 15 , 30 . However, we found only limited effects of manipulating the dopaminergic system with ropinirole. Our dose-response experiment confirmed that the ropinirole concentration used tended to increase aggression in manipulated crickets, but we found no significant effects in any of our experiments. Our maximum dose was a high human therapeutic dose, but if there are significant differences in ropinirole sensitivity between insects and humans, higher doses may be effective in insects and should be tested in future studies. Importantly, we used ropinirole, which is highly selective for dopamine receptors, whereas other studies used less selective compounds with different modes of action (e.g. atypical antipsychotic medications such as fluphenazine, which interacts with both dopaminergic and serotonergic systems; Rillich & Stevenson 2014). The mode of action of specific compounds has been found to be important for their effects across animal species 32 , 33 . Many chemicals that alter the dopaminergic system also interact with the serotonergic system 34 ; therefore, a testable explanation for the difference in results between our study and others could be the mode of action and specificity of the chemical used. Additionally, the specificity of ropinirole may influence its effectiveness across taxa, and our results may indicate that dopamine receptors differ in structure between at least humans and crickets. Our results show that chemical manipulations of serotonin levels via fluoxetine injections changed individual behaviour and personality, adding further support to monoamines being key mechanisms in the maintenance of personality differences. Additionally, our findings support the larger body of work indicating that monoamine systems are complex and that their effects on behaviour can depend on dose, mode of action and taxon. Previous work and our results together highlight that the relationship between monoamines, behaviour and personality may be highly dependent upon how the systems are manipulated. Thus, extensive future work is needed, focusing on categorizing behavioural responses to a large range of chemicals that alter monoamine systems in different, but specific, ways (e.g., manipulations of only receptors vs. both receptors and transporters) as well as comparisons among monoamine systems (e.g. manipulations of single monoamine systems vs.
multiple systems in conjunction) to better elucidate how the mechanism of manipulation may be a critical link to behavioural response.

Methods

Subjects

Sexually mature male Mediterranean field crickets (N = 264) purchased from a local pet shop were individually housed in plastic containers (9 cm × 16 cm × 10.5 cm) covered by a plastic lid. Each container was lined with paper towels and a shelter, in the form of a cardboard tube, was provided. Crickets were held at a temperature of 23 ± 2 °C with a 12 h:12 h light:dark cycle (lights on from 7 am to 7 pm) with ad libitum access to food and water (consisting of apple slices and agar water cubes). All containers were visually isolated from each other and all crickets were kept isolated for at least 12 h prior to all experiments, as group living minimizes aggression in crickets 35 .

Personality confirmation

Prior to the main study, we confirmed that the measured behaviours were repeatable in our population of crickets by behaviourally assaying 24 male crickets on three consecutive days (see description of behavioural assays below). All behaviours were repeatable (Table 1 ), confirming findings in other populations 26 . Thus, for our further work we only assayed individual behaviour a single time.

Monoamine manipulation

Manipulations of both monoamine systems were based on concentrations found in the literature. Manipulation of the dopaminergic system was done using ropinirole hydrochloride (Sigma-Aldrich, Sweden) diluted in phosphate-buffered saline (PBS) to a 33 µM concentration 36 . Manipulation of the serotonergic system was done using fluoxetine hydrochloride (Sigma-Aldrich, Sweden) diluted to 10 μM in PBS e.g. 15 . All experimental males (N = 144) were injected by a single experimenter (LG) and received a 10 µl injection between the 4th and 5th segments of the abdominal cavity using a micro-syringe (Hamilton, Switzerland) 29 . Experiments were run in two blocks: a dopamine block (dopamine-manipulated individuals versus control individuals) followed by a serotonin block (serotonin-manipulated individuals versus control individuals). All groups had 36 individuals. All control individuals were sham-injected with 10 µl PBS.

Behavioural response

To determine if manipulated monoamine levels altered behaviour, each cricket was assayed for activity, exploration and aggression. Trials were performed in sequence and every cricket followed the same procedure. Crickets were divided into groups of four weight-matched (±0.05 g) individuals to make aggressive dyads equivalent and to fill our four available camera setups. At the time of injection, each cricket within the group was marked with a different colour combination on the pronotum so individuals could be distinguished from one another during the later aggression trial (markings used: none, red, white, red and white). Between 30 and 60 minutes post-injection, behavioural assays began 15 . First, crickets were assessed for activity in a familiar environment (using automatic tracking with Ethovision XT 10, Noldus, 2013). Individuals in their home containers were moved to the recording setup. To optimize the automatic video tracking, the lid, shelter and food/water dishes were removed from the home container. After 10 minutes of acclimation, activity was recorded for 15 minutes as total distance moved (cm) 26 . Immediately after the activity assay, individuals were moved in their home shelters to novel areas in the recording setup.
The novel area was a larger clear plastic container (36 cm × 21.5 cm × 22 cm) with white sand. Shelters were placed in the back-left corner of the area. Exploration, defined as the total distance moved (cm) in this novel environment within 15 minutes of emergence, was measured automatically. The final behavioural assay, the aggression trial, was conducted immediately after the exploration trial. The exploration areas were divided into two using an opaque cardboard divider. Individuals of the different treatments (control vs serotonin, or control vs dopamine) were placed on either side of the divider. Crickets were given 10 minutes to acclimate before the divider was raised and behaviour was observed live for 10 minutes by an observer blind to treatment 26 . We recorded the winner of each dyad as a binomial response, with the first cricket to win three consecutive interactions scored as the 'winner' (1) and the other as the 'loser' (0). An interaction was defined as starting when any part of one cricket came into contact with any part of the other cricket and ending when that contact was broken for more than 2 seconds 37 . An interaction was deemed won by the cricket that produced a victory song whilst the other cricket fled 38 . If crickets did not interact enough times for a winner to be assigned, both individuals were recorded as losers. Dose confirmation We found that no alteration in the measured behavioural responses was explained by our dopamine manipulation (see Results). As ropinirole is not a commonly studied drug outside of humans, there is little available data on its dose-response curve, but it is likely to be non-linear 39 , 40 . We therefore conducted a follow-up experiment to verify that we used an appropriate dose in our manipulations. We selected six biologically relevant dose levels, ranging from the minute concentrations measured in surface waters (from human waste) to the high concentrations used for human therapeutic doses e.g. 39 , 41 . We again diluted ropinirole hydrochloride in PBS to obtain the specific concentrations of 0 µM (control, PBS only), 0.033 µM, 1 µM, 33 µM, 148 µM, and 330 µM. For each concentration level, 16 males were injected with 10 µl as described above (n = 96). We found that the intermediate doses (1 µM and 33 µM) tended to show a difference in behaviour between control and treated individuals, confirming our use of the 33 µM concentration for our main study (Table 2 ) and highlighting the weak effects of ropinirole on our measured behaviours. Statistics All statistical analyses were conducted using the R software (version 3.4; R Development Core Team, 2017). For 'dose confirmation' and 'behavioural response' we applied linear and generalised linear mixed-effects models to analyse our data (detailed below), for which we used the 'lmer' and 'glmer' functions (package lme4) 42 ; a minimal code sketch of this workflow is provided below, after the Data Availability statement. Additionally, we used the 'sim' function (package arm) 43 to simulate the posterior distribution of the model parameters, and values were extracted based on 2000 simulations 44 . The statistical significance of fixed effects and interactions was assessed based on the 95% credible intervals (CI) around the mean (β). We considered an effect to be "significant" when the 95% CI did not overlap zero 45 . We used visual assessment of the residuals to evaluate model fit. Personality confirmation To confirm that our measured behaviours were repeatable, and thus indices of personality, repeatability estimates were calculated using the 'rpt' function (package rptR) 46 .
Activity was log transformed to meet normality assumptions and modelled with a Gaussian distribution. Exploration (distance moved in the novel area in cm) was normally distributed and modelled with a Gaussian distribution. Aggression (winner of fight) was modelled with a binomial distribution. Behavioural response We used (generalised) linear mixed models to determine behavioural responses to our monoamine manipulations. As experiments were run independently, we ran identical but separate models to investigate the effects of manipulated levels of dopamine and serotonin. For each monoamine (dopamine, serotonin), we ran models for each response variable of interest (activity, exploration, and aggression). Activity in the serotonin-manipulated group, and exploration in both the dopamine- and serotonin-manipulated groups, were non-normally distributed and so were square-root transformed. Aggression data followed a binomial distribution and was modelled as such. The models for activity and aggression were identical and included type of treatment (manipulated vs. control; categorical variable) as the fixed effect of interest. The colour marking for individual identification (none, red, white, red and white), time of injection (range: 08:30–14:00), and time since injection (30–60 min) were included as random effects. Since both exploration and activity measure the distance moved by an individual, they may be correlated 26 ; thus, our model of exploration included the additional fixed effect of activity score in order to model the variation in exploration alone. Dose confirmation To confirm the best dose of ropinirole, we used (generalised) linear mixed models comparing our concentration groups. We ran three models, one for each response variable of interest: activity, exploration, and aggression. Activity and exploration met normality assumptions and were modelled following a Gaussian distribution. Aggression data followed a binomial distribution and was modelled as such. All models included dose level (a factor with 6 levels, one for each concentration) as the fixed effect. The colour marking for individual identification, time of injection, and time since injection were included as random effects. As described above for the main study, the model of exploration included the additional fixed effect of activity score in order to control for individual variance in activity and thus model variation in exploration alone. Data Availability All raw data are available in the Supplemental Information.
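The following is a minimal R sketch of the mixed-model workflow described in the Statistics section. The data frames and all column names (crickets, crickets_repeated, id, activity, exploration, aggression, treatment, marking, inj_time, time_since_inj) are illustrative assumptions rather than the authors' actual variable names, and the transformations follow those reported for the serotonin group:

```r
# Illustrative sketch only; 'crickets' (one row per individual) and
# 'crickets_repeated' (three assays per individual) are assumed data frames.
library(lme4)   # lmer()/glmer() for (generalised) linear mixed models
library(arm)    # sim() to simulate the posterior of the model parameters
library(rptR)   # rpt() for repeatability estimates

# Personality confirmation: repeatability of log-activity across the three
# consecutive assays, with individual ID as the grouping factor.
rep_act <- rpt(log(activity) ~ (1 | id), grname = "id",
               data = crickets_repeated, datatype = "Gaussian",
               nboot = 1000, npermut = 0)

# Behavioural response: treatment (manipulated vs control) as the fixed
# effect; colour marking, time of injection, and time since injection as
# random intercepts.
m_act <- lmer(sqrt(activity) ~ treatment + (1 | marking) +
                (1 | inj_time) + (1 | time_since_inj), data = crickets)

# Exploration: activity score added as a covariate so that the model
# captures variation in exploration alone.
m_exp <- lmer(sqrt(exploration) ~ treatment + activity + (1 | marking) +
                (1 | inj_time) + (1 | time_since_inj), data = crickets)

# Aggression (fight won, 1/0): binomial GLMM.
m_agg <- glmer(aggression ~ treatment + (1 | marking) +
                 (1 | inj_time) + (1 | time_since_inj),
               data = crickets, family = binomial)

# 95% credible intervals from 2000 simulations of the posterior; an effect
# is treated as "significant" when its interval excludes zero.
sims <- arm::sim(m_act, n.sims = 2000)
apply(sims@fixef, 2, quantile, probs = c(0.025, 0.5, 0.975))
```

Treating marking and the two injection-time variables as random intercepts mirrors the paper's description; with few levels per grouping factor these variance components may be estimated near zero, which lmer reports rather than fails on.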
Crickets that are exposed to human drugs that alter serotonin levels in the brain are less active and less aggressive than crickets that have had no drug exposure, according to a new study led by researchers from Linköping University. The findings have been published in Scientific Reports. Individuals in many animal species show different personality types. Some individuals are, for example, consistently bolder than others. "However, in biology, we still do not fully understand what causes people or animals to show differences in personality. In humans, people with different levels of brain chemicals, such as serotonin and dopamine, often behave differently. However, we do not know if these chemicals can also explain personality differences in other species, and if the chemicals cause the observed differences or if both the differences in behavior and chemical levels are caused by another underlying factor," says Robin Abbey-Lee, postdoctoral researcher at the Department of Physics, Chemistry and Biology, IFM, and lead author of the study. The researchers therefore set out to experimentally change the levels of the brain chemicals serotonin and dopamine in the crickets. They did that by giving the crickets human pharmaceuticals that are known to act on the serotonin and dopamine systems and are used to treat depression and Parkinson's disease, respectively. Because dopamine and serotonin systems are similar across species, these chemicals were predicted to also affect cricket behavior. "In this study we wanted to address an important gap in our knowledge by experimentally altering these brain chemicals and seeing if we could get a resulting behavioral change," says Hanne Løvlie, associate professor at IFM and senior author. Researchers at Linköping University, Sweden, set out to experimentally change the levels of the brain chemicals serotonin and dopamine in crickets. Credit: Linköping University The researchers measured three different behaviors. "First, we measured the activity of crickets in a familiar environment. This is similar to how much a person moves around their own home. Second, we measured the exploration behavior of a cricket in a new environment, similar to how a human might behave on a trip to a new city. Finally, we measured cricket fighting behavior to determine how aggressive individuals were," says Robin Abbey-Lee. What the researchers found was that changing the serotonin levels made crickets less active and less aggressive, whereas changing the dopamine levels of crickets did not change their behavior. "This suggests that serotonin has a clearer underlying role in these behaviors," says Hanne Løvlie. The findings add to our understanding of why animals have personality. They also raise the issue of how pharmaceuticals leaking into nature through human waste water may affect animals.
10.1038/s41598-018-34519-z
Medicine
Blowing smoke? E-cigarettes might help smokers quit
E-cigarette use and the associated change in population smoking cessation: Evidence from the U.S Current Population Surveys, The BMJ, DOI: 10.1136/bmj.j3262, www.bmj.com/content/358/bmj.j3262 Editorial: Rise in e-cigarettes linked to rise in smoking cessation rates, The BMJ, www.bmj.com/content/358/bmj.j3506 Journal information: British Medical Journal (BMJ)
http://www.bmj.com/content/358/bmj.j3262
https://medicalxpress.com/news/2017-07-e-cigarettes-smokers.html
Abstract Objective To examine whether the increase in use of electronic cigarettes in the USA, which became noticeable around 2010 and increased dramatically by 2014, was associated with a change in the overall smoking cessation rate at the population level. Design Population surveys with nationally representative samples. Setting Five waves of the US Current Population Survey-Tobacco Use Supplement (CPS-TUS), in 2001-02, 2003, 2006-07, 2010-11, and 2014-15. Participants Data on e-cigarette use were obtained from the total sample of the 2014-15 CPS-TUS (n=161 054). Smoking cessation rates were obtained from those who reported smoking cigarettes 12 months before the survey (n=23 270). Rates from the 2014-15 CPS-TUS were then compared with those from the 2010-11 CPS-TUS (n=27 280) and those from three other previous surveys. Main outcome measures Rate of attempts to quit cigarette smoking and the rate of successfully quitting smoking, defined as having quit smoking for at least three months. Results Of 161 054 respondents to the 2014-15 survey, 22 548 were current smokers and 2136 recent quitters. Among them, 38.2% of current smokers and 49.3% of recent quitters had tried e-cigarettes, and 11.5% and 19.0% used them currently (every day or some days). E-cigarette users were more likely than non-users to attempt to quit smoking, 65.1% v 40.1% (change=25.0%, 95% confidence interval 23.2% to 26.9%), and more likely to succeed in quitting, 8.2% v 4.8% (3.5%, 2.5% to 4.5%). The overall population cessation rate for 2014-15 was significantly higher than that for 2010-11, 5.6% v 4.5% (1.1%, 0.6% to 1.5%), and higher than those for all other survey years (range 4.3-4.5%). Conclusion The substantial increase in e-cigarette use among US adult smokers was associated with a statistically significant increase in the smoking cessation rate at the population level. These findings need to be weighed carefully in regulatory policy making regarding e-cigarettes and in planning tobacco control interventions. Introduction Current regulatory policies on electronic cigarettes vary widely across countries. The United Kingdom provides a path for licensing e-cigarettes as a smoking cessation aid if they can pass safety standards and deliver nicotine like existing nicotine replacement therapy. 1 In contrast, Australia bans the sale of e-cigarettes containing nicotine. 2 US policy falls somewhere in between. If there is no claim that e-cigarettes are a smoking cessation aid, they are treated as recreational products and regulated by the Food and Drug Administration's Center for Tobacco Products. Most manufacturers of e-cigarettes avoid making explicit claims about cessation benefits. 3 4 Thus e-cigarettes are currently sold in the US with minor restrictions, as the most recent FDA rulings on e-cigarettes give grace periods for implementation of many components. 5 The scientific community is also divided in its opinion of e-cigarettes as a smoking cessation aid. The debate is not as much about the potential efficacy of e-cigarettes for individual users as it is about the overall population impact. A recent Cochrane review suggests that e-cigarettes have an effect similar to that of nicotine replacement therapy for individual smokers who use them. 6 Thus e-cigarettes will have a positive impact on the population cessation rate if they function as a nicotine replacement therapy and if they increase the total proportion of smokers using nicotine replacement products.
7 8 9 10 11 Others argue that the overall impact of e-cigarettes on smoking in adults will be negative, even if they help some individuals to quit smoking. 12 13 The reason is that smokers who use e-cigarettes often use them occasionally, along with cigarettes. 14 Such dual use could lessen the urgency to quit smoking, delaying the cessation process. If this is true, then the positive effect of e-cigarettes on some will be offset by the negative effect on many others, rendering the overall population impact negative. A randomized trial at the population level could resolve the debate, but such a trial is difficult to do. Instead, population studies have only compared smokers who have used e-cigarettes with those who have not. Some found that e-cigarette users quit at higher rates 9 15 whereas others found the opposite. 16 17 18 None has reported whether the overall population cessation rate (which includes both e-cigarette users and non-users) has changed because of e-cigarettes. We examined the relation between e-cigarette use and smoking cessation in the US population using the largest representative sample of smokers and e-cigarette users available to date: the 2014-15 Current Population Survey-Tobacco Use Supplement (CPS-TUS). Population surveys in the US began to measure e-cigarette use around 2010. 14 The surveys found that most users were smokers. In 2010, about 1.4% of smokers were current users of e-cigarettes. 19 Their usage among smokers increased dramatically by 2014, with estimates from various studies ranging from 15% to 30%. 14 19 20 21 Thus a comparison between the 2010-11 and 2014-15 CPS-TUS provides the best chance yet to examine the effect of e-cigarettes on the overall smoking cessation rate. We investigated two questions: first, did users of e-cigarettes in 2014-15 quit smoking at a higher rate than non-users? Second, did smokers in 2014-15 as a whole quit smoking at a higher rate than those in 2010-11? For a longer historical view, we also compared the 2014-15 survey with surveys earlier than 2010-11. Methods Data source The US Current Population Survey-Tobacco Use Supplement (CPS-TUS) is a periodic tobacco survey attached to the Current Population Survey and administered by the US Census Bureau. It provides data from a nationally representative sample of US households of non-institutionalized civilians. Details of the design are published elsewhere. 22 Our analysis included five surveys: 2001-02, 2003, 2006-07, 2010-11, and 2014-15. The sample sizes (excluding proxy respondents) were 185 568, 183 810, 172 023, 171 365, and 163 920, respectively. We did not include surveys earlier than 2001-02 because they did not assess quit attempts for all smokers. Participants For analysis of the prevalence of e-cigarette use in the 2014-15 survey, we included 161 054 of the total sample of 163 920 (we excluded those who did not answer questions on cigarettes or e-cigarettes). The analysis of smoking cessation included respondents aged 18 or older who answered "every day" or "some days" to the question: "Around this time 12 months ago, were you smoking cigarettes every day, some days, or not at all?" Sample sizes for the five surveys for this analysis were 38 999, 34 440, 31 497, 27 280, and 23 270, respectively. The rapidly declining sample size of eligible smokers over the years (as opposed to the change in total sample sizes for the surveys shown in the previous section) was largely a result of declining smoking prevalence over time.
The smoking prevalence based on these surveys was 21.0%, 18.9%, 18.5%, 16.1%, and 13.7% for 2001-02, 2003, 2006-07, 2010-11, and 2014-15, respectively. Measures Current smokers were defined as having smoked at least 100 cigarettes in their lifetime and smoking every day or some days at the time of interview. A quit attempt was defined as having tried to quit smoking and achieving it for at least 24 hours. The "cessation rate" was the percentage of those who had quit for at least three months at the time of the interview among those who were smoking 12 months before the interview. 23 24 Ever users of e-cigarettes were those who "ever used e-cigarettes, even one time." Before survey respondents were asked about their experience with e-cigarettes, they were first presented with the following description: "The next question is about electronic or e-cigarettes. You may also know them as vape-pens, hookah-pens, e-hookahs, or e-vaporizers. Some look like cigarettes and others look like pens or small pipes. These are battery-powered, usually contain liquid nicotine, and produce vapor instead of smoke." Current users of e-cigarettes were ever users who answered "every day" or "some days" to the question: "Do you now use an e-cigarette every day, some days, or not at all?" Those who answered "not at all" were asked when they stopped using e-cigarettes. The "use of e-cigarettes" included any use within the past 12 months: those who reported currently using at the time of survey and those who reported using in the past 12 months but who had stopped by the time of survey. Analysis To compute quit attempt and cessation rates, we used those who reported smoking 12 months before the survey as the denominator. 25 26 All estimates were weighted using published weights for CPS-TUS, which accounted for demographic makeup of the sample and adjusted for non-response bias. 22 The basic population weight controlled for age, race, sex, Hispanic origin, and individual state. The supplemental weight dealt with non-response. CPS-TUS collected data using both self report and proxy report. In the present study we used only the data from self report, treating proxy reported data as no response. The replicate weights were derived using balanced repeated replication. 27 Each CPS-TUS conducted its survey in three waves; the response rate averaged over the waves for each survey was 64.0%, 63.6%, 62.0%, 61.2%, and 54.2% for 2001-02, 2003, 2006-07, 2010-11, and 2014-15, respectively. 28 29 30 31 32 We used χ2 tests or normal approximations to χ2 tests to compare independent proportions. To provide 95% confidence intervals we computed variances of point estimates using SAS-Callable SUDAAN, version 11. 33 Results Table 1 shows the rates of ever use and current use of e-cigarettes in the 2014-15 survey by demographics. Overall, 8.5% of US adults (n=161 054) had ever tried e-cigarettes and 2.4% were currently using them. Men were more likely to use e-cigarettes than women, and younger groups were more likely than older groups. The prevalence also differed by ethnicity and education. Table 1 Rates of ever use and current use of e-cigarettes by demographics, 2014-15 US Current Population Survey Table 2 shows e-cigarette use by cigarette smoking status. Of 161 054 respondents to the 2014-15 survey, 104 788 were never smokers, 22 548 were current smokers, and 2136 were recent quitters (those who quit for less than one year).
Never smokers had the lowest rate for e-cigarette use: 2.0% had ever used them. Recent quitters had the highest ever use rate, 49.3%, which was even higher than that of current smokers, 38.2%. Table 2 Ever e-cigarette use by smoking status, 2014-15 US Current Population Survey-Tobacco Use Supplement (CPS-TUS) Table 2 also shows the distribution of these ever e-cigarette users by subgroups. Overall, 28.0% of ever users (n=13 042) were current users; the rest had stopped. About 22.8% stopped within the year before the survey, 22.1% stopped for one year or more, and 27.2% gave no date. Again, recent quitters (n=984) were most likely to have continued using e-cigarettes, 38.7%. If they had stopped using e-cigarettes, they were also most likely to have stopped within the past year, 30.4%. Longer-term former smokers had stopped using e-cigarettes for a longer period. The more time since quitting smoking, the less likely respondents were to give a stopping date for e-cigarettes. Table 3 shows the prevalence of current e-cigarette use. Only 0.3% of never smokers currently used e-cigarettes at the time of survey. Again, recent quitters had the highest prevalence, 19.0%, even higher than that of current smokers, 11.5%. Table 3 Current e-cigarette use by smoking status, 2014-15 US Current Population Survey-Tobacco Use Supplement Table 3 also shows the distribution of these current users of e-cigarettes by subgroups. Overall, 33.7% of them were daily users. Former smokers were more likely than current smokers to be daily users, with the highest proportion, 72.7%, in recent quitters. Figure 1 compares those who had used e-cigarettes within one year with those who had not. The within-one-year users included those who reported currently using e-cigarettes at the time of survey and those who reported using e-cigarettes within the past 12 months but had stopped by the time of survey. Fig 1 Quit attempt rate and annual cessation rate by e-cigarette use status, 2014-15, USA. CPS-TUS=Current Population Survey-Tobacco Use Supplement The top panel of figure 1 shows that within-one-year users were more likely to have made an attempt to quit smoking in the previous 12 months than all the other three subgroups: ever users who had stopped e-cigarettes for one year or more, ever users who stopped e-cigarettes but did not give dates, and never users. The attempt rates were 65.1%, 48.4%, 48.8%, and 37.8%, respectively. The bottom panel of figure 1 shows the annual cessation rate for these four subgroups. The cessation rates follow the pattern of quit attempts: 8.2%, 5.4%, 5.3%, and 4.6%, respectively. Figure 2 compares the results of 2014-15 with those of 2010-11, and with the three other surveys. The top panel shows that the overall quit attempt rate changed little until 2014-15: 40.3%, 40.5%, 39.9%, 41.4%, and 45.9%, respectively. Using the 2001-02 survey as a reference, the 2010-11 and 2014-15 surveys had statistically significantly higher quit attempt rates (P<0.05 and P<0.001, respectively). Fig 2 Quit attempt rate and annual cessation rate from 2001-02 to 2014-15, USA. CPS-TUS=Current Population Survey-Tobacco Use Supplement The attempt rate in the 2014-15 survey was noticeably higher. For ease of comparison, we combined the results for the three groups that did not use e-cigarettes in the past year (see fig 1) into one group (fig 2).
The last two bars show that quit attempts for smokers who did not use e-cigarettes within one year (40.1%) were similar to those in previous surveys. Quit attempts for e-cigarette users were, however, statistically significantly higher than for non-users. Numerically speaking, it was this e-cigarette user subgroup that raised the overall quit attempt rate for 2014-15, and thus the rate was statistically significantly higher than in all previous survey years. The bottom panel of figure 2 shows the annual cessation rate, which follows the pattern of the quit attempt rate. The annual cessation rates did not change much until 2014-15: 4.3%, 4.3%, 4.5%, 4.5%, and 5.6% for 2001-02, 2003, 2006-07, 2010-11, and 2014-15, respectively. Using 2001-02 as a reference, only 2014-15 had a statistically higher overall cessation rate (P<0.001). Compared with 2010-11, 2014-15 had a statistically significantly higher cessation rate, 5.6% v 4.5% (change=1.1%, 95% confidence interval 0.6% to 1.5%). Again, the 2014-15 survey had a noticeably higher overall cessation rate because the e-cigarette user subgroup had a higher cessation rate than those who did not report e-cigarette use in the past year (8.2% v 4.8%, respectively; P<0.001). The absolute difference of 3.5% (rounded from 3.48) translates into a 73% relative increase. Discussion This study has two principal findings. First, in 2014-15, e-cigarette users in the United States attempted to quit cigarette smoking and succeeded in quitting at higher rates than non-users. Second, the overall population smoking cessation rate in 2014-15 increased statistically significantly from that in 2010-11. The 1.1 percentage point increase in cessation rate (from 4.5% to 5.6%) might appear small, but it represents approximately 350 000 additional US smokers who quit in 2014-15. Strengths and limitations of this study The main strength of our study is that we used the largest representative sample of e-cigarette users among the US population. Moreover, by using the ongoing US Current Population Surveys, we evaluated the impact of e-cigarettes in a larger context by comparing the quit rate in 2014-15 with that of the same population survey from previous years. This provides the clearest result to date that e-cigarette use is not only associated with a higher smoking cessation rate at the individual user level but also at the population level. Our study has limitations common to population surveys. Self report is subject to recall biases. Survey questions were limited, preventing more detailed analysis of the quitting process. For example, the lack of information on pharmacotherapy use in this survey prevents comparisons between e-cigarette users and those who use traditional pharmacotherapy. Though another national survey in 2014 found that more smokers in the US used e-cigarettes than FDA-approved pharmacotherapy, 34 similar to results from England, 11 the inability to compare these two subgroups in the present survey is a limitation. Also, lack of information on the type of e-cigarette product used (ie, open versus closed system) limits comparison with other population studies of e-cigarettes that might otherwise have been informative. Another limitation is that this study is not a randomized trial at the population level. Thus it is important to examine other possible influences on the population smoking cessation rate. We discuss two major interventions that occurred at the national level and took place around the 2010-11 and 2014-15 surveys.
First, in 2009 there was an increase in federal tobacco tax. 35 The national cigarette tax increased by 158%, resulting in an immediate reduction in cigarette uptake among US adolescents. 35 In our study we found a small but statistically significant increase in quit attempts among US adults, from 39.9% in 2006-07 to 41.4% in 2010-11 (fig 2, top panel). However, the total cessation rate did not change: 4.5% for both surveys (fig 2, bottom panel). Thus the effect of the 2009 federal tax on quitting by adult smokers, if there was an immediate one, was no longer detectable by 2010-11. This lack of change in smoking cessation under such a dramatic tax increase accentuates the difficulty in improving quit rates at the population level. 23 It does provide a reference point to evaluate the magnitude of change reported for the 2014-15 US Current Population Survey-Tobacco Use Supplement (CPS-TUS). Second, since 2012 there have been annual, national media campaigns aimed at increasing quit rates among adult smokers. 36 The TIPS from Former Smokers campaign used evocative television spots showing the serious health consequences of tobacco use. This campaign, running from nine to 20 weeks in any given year, reached a large segment of the smoking population. A national survey after the first round of the campaign found that 78% of smokers saw at least one media spot. 37 By 2015, there had been four rounds of the campaign. Surveys found a statistically significant increase in quit attempts, and the cessation rate of those who made a quit attempt was estimated to be between 5.7% and 6.1%. 36 37 In the present study we found statistically significant increases in both quit attempt and cessation rates from 2010-11 to 2014-15. This period coincided with the TIPS campaign and the dramatic increase in e-cigarette use. 36 Could TIPS alone explain the increase? Given the reach of the first TIPS campaign, after four rounds it was expected to reach most US smokers by 2014-15. 36 37 However, the majority of smokers did not appear to change their quitting behavior: smokers who did not use e-cigarettes were the majority (77%) in 2014-15. Neither their attempt rate nor the annual cessation rate was statistically different from that of all smokers in 2010-11 (fig 1). It was e-cigarette users in 2014-15 who showed a dramatically higher quit attempt rate and a higher cessation rate. Thus it would be unlikely that the TIPS campaign was solely responsible for the overall increase because that would mean the TIPS messages only resonated with those who happened to use e-cigarettes in 2014-15. Given that the e-cigarette user subgroup was the only group that had statistically significantly higher rates in 2014-15 (fig 2), it is tempting to attribute the increase in the overall smoking cessation rate in 2014-15 solely to e-cigarette use. However, e-cigarette use itself could be an indicator of motivation to quit smoking, which would predict a higher quit rate. 34 Thus, attributing the full 73% relative difference to e-cigarettes is likely an overestimate of their effect. What is more probable is that e-cigarettes and tobacco control measures, such as the TIPS campaign and other state level activities for tobacco control, worked synergistically to produce the first substantial increase in population cessation in the US in the past 15 years (fig 2): tobacco control campaigns increased smokers’ desire to quit, and e-cigarettes increased the probability of motivated smokers making a quit attempt and staying abstinent. 
Viewed from the context of an earlier analysis that examined US population data from 1991 onward, 23 this is the first time in almost a quarter of a century that the smoking cessation rate in the US has increased at the population level. Comparison with other studies The results from our study agree with some studies and differ from others. 9 11 15 16 17 18 Our study supports a report from England where e-cigarette use was found to be associated with a higher success rate of quit attempts. 11 More importantly, we found that e-cigarette use was also associated with a higher quit attempt rate, which eventually translates into a higher overall population cessation rate. 23 Our study differs from some other studies that compared e-cigarette users with non-users and found a negative correlation between e-cigarette use and smoking cessation. The key difference seems to be the timing of data collection for this study compared with that of earlier studies. 16 18 38 First, as e-cigarettes grew more popular over time, more people used them intensively. 14 This study found that 33.7% of current e-cigarette users were daily users. Earlier studies rarely reported on intensity of usage. 13 Many did not report whether e-cigarettes were used within the past 12 months, but only identified the users as ever users. 13 Intensive use of e-cigarettes is key to their potential effect as a smoking cessation aid. 14 15 34 Our study, based on the largest representative sample of e-cigarette users in the US in 2014-15, found that more than 70% of recent quitters who used e-cigarettes were still using e-cigarettes daily. The daily use might have been critical in preventing relapse. Second, e-cigarette products have evolved, and open system devices have become more popular. 39 Open systems generally deliver a greater concentration of nicotine and engender a higher level of perceived control than do closed systems. 39 40 The use of open systems was associated with higher smoking cessation rates. 40 41 42 Thus if the proportion of open system users increases, it may lead to higher quit rates for all e-cigarette users. A more important finding from this study that differs from previous studies is that the increased smoking cessation rate among e-cigarette users translated into a higher overall population cessation rate in 2014-15, compared with 2010-11. If the overall population cessation rate did not increase in 2014-15, then the meaning of the subgroup difference between e-cigarette users and non-users would be less clear. The use of e-cigarettes was not a randomly assigned condition but rather self selected by smokers. Such a self selection process could result in no change in the overall population rate, even if there are subgroup differences. In fact, one of the most vexing results in the smoking cessation literature is that the population cessation rate in the US showed no discernible trend from 1991 to 2010, even though an increasing proportion of smokers used proven pharmacotherapy. 23 During the two decades before 2011, the annual cessation rate hovered around 4.4%. 23 Our study replicated this finding in the first four CPS-TUSs of the 21st century. From 2001-02 to 2010-11, the cessation rate ranged from 4.3% to 4.5%. It was in the 2014-15 CPS-TUS that we found the first statistically significant increase in the population cessation rate (fig 2).
With an increase in the overall population cessation rate as context, the subgroup difference between e-cigarette users and non-users found in this study takes on more importance. The cessation rate for those who did not use e-cigarettes in the 2014-15 CPS-TUS remained statistically indistinguishable from those of the previous years (see fig 2). It was the e-cigarette users, who quit at a clearly higher rate (8.2%), that brought the overall population cessation rate to a higher level. Such a data pattern makes it more reasonable to conclude that e-cigarette use contributed to the increase in the overall smoking cessation rate. Our study replicates other US studies on the ethnic representation of e-cigarette use. 19 20 E-cigarette use is observed across all ethnic groups, but rates are higher for some groups. 19 20 Our study focuses its analysis on the overall population effect of e-cigarettes on smoking cessation in adults in the US context. Future studies might examine the e-cigarette effect by demographic subgroups. The main results, based on the diverse US population, suggest that a similar effect may be observed in other jurisdictions if a sufficient proportion of smokers use e-cigarettes on a daily basis as aids to quit smoking. Conclusion and policy implications This study, based on the largest representative sample of e-cigarette users to date, provides a strong case that e-cigarette use was associated with an increase in smoking cessation at the population level. We found that e-cigarette use was associated with an increased smoking cessation rate both at the level of subgroup analysis and at the overall population level. This is remarkable, considering that this is the kind of data pattern that has been predicted but not observed at the population level for cessation medications, such as nicotine replacement therapy and varenicline. 23 25 26 43 44 This is the first statistically significant increase observed in population smoking cessation among US adults in nearly a quarter of a century. 23 These findings need to be weighed carefully in regulatory policy making and in the planning of tobacco control interventions. 45 What is already known on this topic Researchers have offered competing hypotheses about whether the dramatic increase in e-cigarette use helps or hinders smoking cessation at the population level What this study adds E-cigarette users in 2014-15 were more likely than non-users to make a quit attempt and to succeed in quitting smoking The overall rate of smoking cessation for the US population was significantly higher in 2014-15 (when e-cigarette use among smokers was high) than in 2010-11 (when e-cigarette use was very low), and higher than in all previous survey years (when e-cigarette use was practically non-existent) E-cigarettes appear to have helped to increase smoking cessation at the population level Footnotes We thank Jessica Sun for her comments on earlier drafts of the paper and for her help in preparing the manuscript. Contributors: S-HZ and Y-LZ conceived the study. Y-LZ, S-HZ, and SW analyzed the data. S-HZ, Y-LZ, SW, SEC, and GJT interpreted the data. S-HZ, Y-LZ, SEC, GJT, and SW helped draft the manuscript. All authors had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. S-HZ is guarantor for the study.
Funding: This study was supported by the National Cancer Institute of the National Institutes of Health under the State and Community Tobacco Control (SCTC) Initiative (award No U01CA154280). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Funders of this study had no role in the study design; collection, analysis, and interpretation of the data; writing of the manuscript; or decision to submit the manuscript for publication. Competing interests: All authors have completed the ICMJE uniform disclosure form and declare: S-HZ has received a grant from the National Institutes of Health for this work. All authors declare no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; and no other relationships or activities exist that could appear to have influenced the submitted work. Ethical approval: This study is a secondary data analysis of publicly available data, approved by the UCSD Human Research Protection Program (institutional review board No 140821). Data sharing: The full dataset is publicly available from the US Census Bureau. Transparency: The lead author (S-HZ) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial.
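As a rough check on the paper's headline comparison, the change in the population cessation rate can be approximated in R from the published point estimates and denominators. This is a naive, unweighted normal-approximation sketch; the authors' intervals were computed with survey weights and balanced repeated replication, so the numbers agree only approximately:

```r
# Unweighted normal-approximation check of the 2014-15 v 2010-11 change in
# the annual cessation rate, using the published sample sizes of those who
# smoked 12 months before each survey. The paper's own intervals used
# survey weights and balanced repeated replication.
n1 <- 23270; p1 <- 0.056   # 2014-15 CPS-TUS
n2 <- 27280; p2 <- 0.045   # 2010-11 CPS-TUS

change <- p1 - p2
se     <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci     <- change + c(-1.96, 1.96) * se
round(100 * c(change = change, lower = ci[1], upper = ci[2]), 2)
# gives ~1.10 [0.72, 1.48] percentage points, close to the published
# 1.1% (95% CI 0.6% to 1.5%)

# Scale of the change: ~350 000 additional quitters at a 1.1 point increase
# implies a smoker base of roughly 350000 / 0.011, i.e. about 32 million
# US adult smokers.
```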
People who used e-cigarettes were more likely to kick the habit than those who didn't, a new study found. Nicotine patches, gums and medications are known to aid smoking cessation, but there's no consensus on whether vaping devices can help anti-smoking efforts. The U.S. research is the largest look yet at electronic cigarette users, and it found e-cigarettes played a role in helping people quit. "It's absolutely clear that e-cigarettes help smokers replace cigarettes," said Peter Hajek, director of the health and lifestyle research unit at Queen Mary University in London, who wasn't part of the study. Smoking rates have been generally declining for decades. Health experts have credited taxes on tobacco products and anti-smoking ads for the drop. E-cigarettes have been sold in the U.S. since 2007. Most devices heat a liquid nicotine solution into vapor and were promoted to smokers as a less dangerous alternative since they don't contain all the chemicals, tar or odor of regular cigarettes. Researchers analyzed and compared data collected by the U.S. Census from 2001 to 2015, including the number of adult e-cigarette users from the most recent survey. About two-thirds of e-cigarette users tried to quit smoking, compared to 40 percent of non-users, the study found. E-cigarette users were also more likely to succeed in quitting for at least three months than non-users: 8 percent versus 5 percent. The research was published online Wednesday in the journal BMJ. It was funded by the National Institutes of Health. The rate of people quitting smoking in the U.S. has remained steady at about 4.5 percent for years. It jumped to 5.6 percent in 2014-2015, representing about 350,000 fewer smokers. It was the first recorded rise in the smoking cessation rate in 15 years. While national anti-smoking campaigns likely helped, the results show e-cigarette use also played an important role, said lead author Shu-Hong Zhu of the University of California, San Diego. Hajek said vaping devices shouldn't be strictly regulated, but instead be allowed to compete directly with cigarettes. "That way, smokers can get what they want without killing themselves," he said. Earlier this month, a House panel renewed its efforts to prevent the Food and Drug Administration from requiring retroactive safety reviews of e-cigarettes already on the market. Others warned that the long-term side effects of e-cigarettes are unknown. "We just don't know if moving to e-cigarettes is good enough to reduce the harm," said Aruni Bhatnagar, director of the American Heart Association's Tobacco Research and Addiction Center. Chris Bullen, who authored an accompanying editorial, said although the long-term safety of e-cigarettes is unclear, any ill effects are "likely to be rare compared with the harms of continuing to smoke." The latest results strongly suggest that more lenient control of e-cigarettes could improve population health, said Bullen, a professor of public health at the University of Auckland. "If every smoker was to change over to e-cigarettes completely, there would be a dramatic and almost immediate public health benefit," he said in an email.
www.bmj.com/content/358/bmj.j3262
Biology
Exploring evolution acceptance for better science education
Ryan D. P. Dunk et al. A multifactorial analysis of acceptance of evolution, Evolution: Education and Outreach (2017). DOI: 10.1186/s12052-017-0068-0
http://dx.doi.org/10.1186/s12052-017-0068-0
https://phys.org/news/2017-11-exploring-evolution-science.html
Abstract Background Despite decades of education reform efforts, the percentage of the general US population accepting biological evolution as the explanation for the diversity of life has remained relatively unchanged over the past 35 years. Previous work has shown the importance of both educational and non-educational (sociodemographic and psychological) factors for acceptance of evolution, but has often examined such factors in isolation. Our study is among the first attempts to model quantitatively how the unique influences of evolutionary content knowledge, religiosity, epistemological sophistication, and an understanding of the nature of science collectively predict an individual's acceptance or rejection of evolution. Results Our study population had a high acceptance of evolution, with an average score of 77.17 (95% C.I. ± 1.483) on the Measure of Acceptance of the Theory of Evolution (MATE) instrument. Our combined general linear model showed that, of the variables in our model, an understanding of the nature of science explained the greatest amount of variation in acceptance of evolution. This was followed, in amount of variance explained, by a measure of religiosity, openness to experience, religious denomination, number of biology courses previously taken, and knowledge of evolutionary biology terms. Conclusions Understanding of the nature of science was the single most important factor associated with acceptance of evolution in our study and explained at least four times more variation than measures of evolutionary knowledge. This suggests that educational efforts to impact evolutionary acceptance should focus on increasing an understanding of the nature of science (which may be expected to have additional benefits against generalized science denial). Additionally, our measure of epistemological sophistication had a unique, significant impact on acceptance of evolution. Both epistemological sophistication and an understanding of the nature of science are factors that might change throughout a liberal arts education, independent of the effect of direct evolutionary instruction. Background Evolution is the unifying theme of all biology. Living organisms and the interactions between them can be understood most clearly through the lens of evolution; this is reflected in the near-universal acceptance of evolution among biologists (Graffin 2003 ), who have studied the evidence supporting evolutionary theory and use it to guide their work. However, among the general public, acceptance of evolution is much less prevalent. Despite decades of efforts toward science education reform that might be expected to improve evolutionary understanding and acceptance, little change has occurred in the number of people who accept evolutionary explanations of life's diversity (Gallup 2014 ). This rejection of biology's overarching theme leads to an inability to understand correctly, and to reason appropriately about, biological phenomena (Dobzhansky 1973 ). In addition, science denial by those responsible for setting policy leads to difficulties in implementing sound science curricula in schools as well as poor potential outcomes for future funding of the biological sciences. It is for these reasons and more that a public accepting of evolutionary biology is not only desirable, but necessary. In this study, we explore how different educational and sociodemographic factors interact with acceptance of evolution in college students.
Knowledge of evolution is perhaps the most intuitive factor related to evolution acceptance. Rutledge and Warden ( 2000 ) found that among high school teachers, knowledge of evolution was significantly correlated with acceptance of evolution (see also Deniz et al. 2008 ; Glaze et al. 2015 ). This link was also found in an undergraduate sample (Carter and Wiles 2014 ). Other studies (Heddy and Nadelson 2013 ; Mazur 2004 ; Wiles 2014 ) have found more generally that higher education levels lead to greater acceptance of evolution. Barone et al. ( 2014 ) found a significant correlation between knowledge of evolutionary terms and acceptance of evolution among visitors to a natural history museum. However, other researchers have found no significant link between knowledge and acceptance of evolution, especially when other variables are considered in the same model (Cavallo and McCall 2008 ; Sinatra et al. 2003 ). Multiple studies have found an understanding of the nature of science to be significantly related to acceptance of evolution (Carter and Wiles 2014 ; Cavallo and McCall 2008 ; Glaze et al. 2015 ; Johnson and Peeples 1987 ; Rutledge and Mitchell 2002 ; Trani 2004 ). Compared with the more equivocal support for the role of evolutionary content knowledge in evolution acceptance described above, this consistent trend seems to indicate that acceptance of evolution might be more strongly influenced by a general understanding of the aims and process of science. Indeed, many of the major creationist criticisms of evolutionary biology stem from a misunderstanding of the nature of science (Matthews 1997 ; Pigliucci 2008 ). Cognitive factors have also been found to have a strong effect on acceptance of evolution. Deniz et al. ( 2008 ) found thinking dispositions to be the most significant predictor of evolution acceptance in preservice biology teachers in Turkey. Sinatra et al. ( 2003 ) found a measure of epistemological sophistication and a disposition towards actively open-minded thinking to be significantly correlated with acceptance of human evolution (but no relation was found for acceptance of animal evolution). Hawley et al. ( 2011 ) found openness to experience, a psychological metric measuring intellectualism and creativity (John et al. 2008 ), to be significantly negatively related to acceptance of creationist reasoning. In this study, we consider epistemological sophistication to be a general term referring to a mature manner of understanding the nature of knowledge. Openness to experience is used here as a proxy for epistemological sophistication. It is generally known that, at least among many Christian denominations in the United States, people who are more strongly religious tend to have greater concern over evolution, especially as it applies to humans. Many authors have found a link between strength of religious convictions and lack of acceptance of evolution (Barone et al. 2014 ; Carter and Wiles 2014 ; Glaze et al. 2015 ; Heddy and Nadelson 2013 ; Mazur 2004 ; Moore et al. 2011 ; Nehm and Schonfeld 2007 ; Trani 2004 ), although Hawley et al. ( 2011 ) found contradictory results. Religiosity, loosely defined as the degree to which religious faith and conviction have an impact on daily life, is preferred as a measure over religious denomination because it indicates a level of religious activity and how strongly religion may influence understanding and decision making.
Combining the factors described above, we present a working model of evolutionary acceptance whereby acceptance of evolution is impacted separately by knowledge of evolution, religiosity, epistemological sophistication, and an understanding of the nature of science. As described previously, all of these factors have been shown to be related to acceptance of evolution. However, very few studies include multiple factors, and to our knowledge, none has quantitatively evaluated their comparative effects simultaneously. To correctly understand the relative impact each factor has, they must be analyzed in a model together, along with demographic variables. This is the aim of our study. Specifically, we predicted that, when analyzed together in a general linear model, greater epistemological sophistication, evolutionary content knowledge, and understanding of the nature of science will each be associated with higher levels of acceptance of evolution, while higher levels of religiosity will be associated with lower levels of acceptance of evolution. Methods Survey methodology To assess the relative importance of different variables on the acceptance of evolution in college students, we conducted a survey of 284 undergraduates in an introductory anatomy and physiology course at the University of Wisconsin–Milwaukee. This sample population is unlikely to be representative of the general population, but is likely comparable in most respects to students with similar experience at other institutions with regard to the variables being examined. The survey consisted of six sections, 1–2 pages each, for a total length of 188 items on 7 pages, exclusive of the consent form which served as a removable cover page. The sections were ordered to attempt to eliminate potential biases in response from subconscious priming (Strack 1992 ). All of the sections except the final one consisted of instruments developed and used in other studies (Table 1 ; see Additional file 1 : Methods for a more detailed description of the survey instruments). These were included intact to allow for maximal comparison between the present study and others employing the same instruments. Table 1 Survey instruments used in the present study and their original sources The final portion of the survey consisted of demographic questions and other variables we thought might be related to acceptance of evolution. Participants were asked to provide via free response personal information about age, sex, ethnicity, religious denomination, perceived importance of church, frequency of church attendance, college major or concentration, number of college science classes taken, number of college biology classes taken, and rurality of childhood home. Participants were also asked to provide their net college grade point average [GPA; choices provided both numerical and descriptive approximation, e.g., "2.5–2.9 (Mix of Cs and Bs)"], general interest in science (on a 5-point Likert scale), and highest level of schooling completed by mother and father (asked separately: choices were less than high school, high school diploma or GED, some college, 2-year degree, 4-year degree, graduate education). All data collection, coding, and analyses were performed according to an ethics review board approved protocol. Data entry and coding All survey responses were electronically transcribed by one of three individuals, either one of the authors (RDPD), a graduate student assistant, or an undergraduate assistant.
Terms from the terms index were marked as present or absent and all Likert questions were entered as answered. For the demographic questions, answers were entered verbatim except for small edits for clarity or brevity. Each survey was transcribed in duplicate by at least two individuals and checked for consistency by the first author. Free-response variables were coded separately by the first and last authors using the guide given in Additional file 1 : Table S1 and compared for consistency. Where inconsistencies were found, both coders reached agreement via conference. The full dataset used to generate our results has been publicly archived along with supplemental materials in the figshare repository (doi: 10.6084/m9.figshare.5072137 ). Statistical methodology Summary statistics were calculated for all linear variables, and frequency tables were produced for all categorical and ordinal variables. For a few of our variables, the initial coding created inadequate sample sizes for some groups; we revised this by combining codes, and the final coding is reflected in Additional file 1 : Table S1. A wide range of religious denominations with seven or fewer representatives were grouped under "other". This highly variable group was dropped from all subsequent analyses. We selected the Measure of Acceptance of the Theory of Evolution (MATE; Rutledge and Warden 1999 ) as our dependent variable as it is a widely used measure of acceptance of evolution that has been validated among university undergraduate students (Rutledge and Sadler 2007 ). The individual influence of each independent variable on students' MATE scores was tested using ordinary least-squares regression analyses for continuous variables, and for each categorical variable a one-way ANOVA was completed. Ordinal variables present in the dataset were treated as categorical variables; this was done for statistical simplicity, but it gives a conservative estimate of relative importance (Agresti 2010 ). Variables that were found to have a significant effect on MATE score were included in a large, exploratory general linear model (specifically, a multifactorial ANCOVA without interaction) to explore their independent effects on MATE score. Variables that were found to affect MATE score in a significant or nearly significant way (using an α of 0.10) were chosen for inclusion in the final model. Five variables were presumed to be measurements of religiosity. These included the Likert scale items "My religion impacts my daily life", "My religion influences my decisions", and "I am a religious person", as well as frequency of church attendance and importance of church. These five variables were subjected to a principal components analysis with varimax rotation and were found to form a single highly consistent factor. However, the factor scores were not as robust in explaining variation in MATE scores as some of the individual components, so the factor was not used in our models. The final model was again run as a multifactorial ANCOVA. Interaction terms between variables were not included as they could not be reliably estimated in the full model. To the limited extent that interactions could be estimated, their relative contribution to the model was small. The final eight factors were assessed for adherence to the ANCOVA model. Specifically, the homogeneity of regression slopes was tested and found to be upheld. Effect sizes were calculated as η2 (Kline 2004 ).
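A minimal R sketch of the final-model computation just described follows; the authors ran their analyses in SYSTAT and SPSS (as noted next), so R here is purely illustrative, and the data frame svy and all column names are assumed stand-ins. Note also that aov() produces sequential (Type I) sums of squares, so with correlated predictors the eta-squared values below only approximate the unique-variance estimates reported in the paper:

```r
# Illustrative multifactorial ANCOVA without interactions on an assumed
# data frame 'svy'; column names are stand-ins for the paper's variables.
final <- aov(mate ~ nos_score + openness + evo_terms + extraversion +
               n_bio_courses +                      # continuous covariates
               religion_decisions + denomination +  # categorical factors
               church_importance,
             data = svy)
tab <- summary(final)[[1]]

# Effect size per term as eta-squared: SS_effect / SS_total (Kline 2004).
eta_sq <- tab[["Sum Sq"]] / sum(tab[["Sum Sq"]])
round(setNames(eta_sq, rownames(tab)), 3)

# Homogeneity-of-regression-slopes check for one covariate/factor pair:
# a significant interaction term would indicate heterogeneous slopes.
summary(aov(mate ~ nos_score * denomination, data = svy))
```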
Analyses were conducted using SYSTAT or SPSS, except effect sizes, which were calculated manually. Results The average MATE score was 77.17 (95% C.I. ± 1.483), right at the lower threshold of what is considered high acceptance (Rutledge and Sadler 2007 ). Scores ranged from 28 to 100, and thus all levels of acceptance were represented in our sample (Table 2 ). On average, respondents tended to have a moderate level of familiarity with evolutionary terms, and were not particularly knowledgeable about the nature of science (Table 3 ). Demographically, our sample was skewed young (mean age = 21.7), white (66.9%), and female (69.7%), with a high proportion of health majors (80.8%), who were not well experienced in biology (average number of college biology classes taken = 1.82). In most other measures, such as rurality of childhood home, GPA, and parents' levels of educational achievement, our population was more diverse (Tables 3 , 4 ). Table 2 Number of respondents scoring within each level of evolution acceptance on the MATE Table 3 Summary statistics for linear variables Table 4 Frequency tables for select categorical variables With regard to religion, our sample was heavily represented by Christian (57%; including 3 orthodox Christians coded as "Other") and areligious (37%) individuals. The remainder included a variety of other faiths. Among Christians, Catholics were the denomination group most strongly represented (26% of the full sample), followed by Protestants and Non-Denominational Christians (a group which is often heavily composed of fundamentalist evangelicals and members of stand-alone "megachurches"; Table 4 ). Looking across religious identities, those who claimed no religious affiliation scored highest on the MATE (mean: 83.14, 95% C.I. ± 2.124), followed by Catholics (mean: 76.76, 95% C.I. ± 2.433) and Protestants (mean: 72.68, 95% C.I. ± 4.826). Non-Denominational Christians had the lowest MATE score amongst denominational identities (mean: 67.31, 95% C.I. ± 3.615). However, while these results are in line with expected trends, we caution against generalizing from our sample of college undergraduates to religious affiliations as a whole, especially regarding the high proportion of areligious individuals. Table 5 shows the results for each variable's individual relation to the MATE. Regarding the linear variables, scores on both the understanding of science measure and Familiarity With Evolutionary Terms were significantly correlated with MATE score, as were the numbers of both science and biology courses taken, age, and two of the factors from the Big Five Inventory, openness to experience and extraversion. All of these were positively correlated with the MATE except for extraversion (viz., increased extraversion was associated with a decreased score on the MATE). The other three factors from the Big Five Inventory (neuroticism, conscientiousness, and agreeableness) did not have a statistically significant impact on MATE score. Table 5 Raw (uncorrected) p-values of association with score on the Measure of Acceptance of the Theory of Evolution (MATE) Regarding the categorical variables, all five measures of religiosity showed significant association with the MATE, with denomination, interest in science, ethnicity, and sex showing significant relations as well. GPA, rurality of childhood home, major, mother's education level, and father's education level were not significant.
All of the variables with a significant solo association with MATE score were combined into an exploratory full ANCOVA (Additional file 1 : Table S3). As previously noted, variables with significance at or below p = 0.10 in this exploratory model were included in a final ANCOVA model. The final analysis (given in Table 6 ) included five linear variables (understanding of science score, openness to experience, Familiarity With Evolutionary Terms score, extraversion, and number of college biology courses taken) and three categorical variables (“My religion influences my decisions”, denomination, and importance of church in life). The significant terms in this final model explain 32.6% of the variation in MATE score in our study. Knowledge of the nature of science had the greatest association with MATE score, with over 13% of variance uniquely explained. This was followed by the religiosity measure “My religion influences my decisions” (10.1% of variance explained), openness to experience (5.1%), denomination (2.5%), number of college biology courses taken (1.6%), and Familiarity with Evolutionary Terms (1.2%). Another religiosity measure (self-described importance of church) and extraversion were no longer significant in the final model. Table 6 Final ANCOVA model of the Measure of the Acceptance of the Theory of Evolution Full size table One of the most important assumptions of the ANCOVA model is homogeneity of regression slopes (Huitema 2011 ; Rutherford 2001 ): that is, for each level of a categorical variable, the regression lines of the dependent variable on a covariate must be parallel. This assumption is borne out in our data. We tested for heterogeneous regression slopes by running individual ANCOVAs (Myers and Well 2003 ): with MATE score as the dependent variable, each covariate was paired with each categorical variable in a two-way ANCOVA that included an interaction term. A significant interaction term would signify heterogeneity; none was significant, even without correction for multiple tests. Discussion Our survey showed an overall average score on the MATE of 77.17 (95% C.I. ± 1.483), which is at the lower cutoff for “high acceptance” as defined by Rutledge and Sadler ( 2007 ). Although high in comparison to other studies of college students (Deniz et al. 2008 ; Rutledge and Sadler 2007 ), gifted high school students (Wiles and Alters 2011 ), and biology teachers (Rutledge and Warden 1999 ), this average MATE score is nearly identical to that found in a sample of patrons of a natural history museum in the same region (Barone et al. 2014 ). Our initial associations with MATE score showed a high impact of knowledge of the nature of science, Familiarity with Evolutionary Terms, number of college biology and science courses taken, openness to experience, all religiosity measures, denomination, and interest in science, with a smaller but still significant relationship between MATE and ethnicity, sex, age, and extraversion (Table 5 ). However, once the variables were combined, many of them no longer retained significance. Ethnicity, age, number of college science courses taken, interest in science, sex, and three of the religiosity measures (“My religion impacts my daily life”, “I am a religious person”, and frequency of church attendance) all have p-values in the full ANCOVA much higher than 0.10 (Additional file 1 : Table S3).
This underscores the importance of our approach: the methods used in most previous studies would find and report significance for parents’ education levels (Deniz et al. 2008 ) or sex (Grose and Simpson 1982 ) (or age or ethnicity) in acceptance of evolution; in our study, however, these relationships appear to be driven by underlying variation in other variables. In the final ANCOVA model, all variables that were significant in the “Full” exploratory model remained so. In addition, number of college biology courses became a significant predictor of score on the MATE, owing to the increased power of the test. Our final model provides support for all of the factors described earlier. Altogether, the significant terms in the final model explain nearly a third of the variance in MATE score; this is a satisfactory amount, especially for a model of human cognition, but it bears noting that much of the variation in MATE score was left unexplained by the significant terms in our model. The ordering of term importance in the final model is also striking: knowledge of the nature of science explained over 13% of the variation in MATE score, and religiosity an additional 10%. Evolutionary knowledge (measured in terms and number of courses combined) accounts for only about 2.8% of the variation, while openness to experience (our proxy for epistemological sophistication) explained nearly twice that amount (5.1%). Finally, our study agrees with that of Barone et al. ( 2014 ) in finding a significant impact of religious denomination on MATE score, although we found that once religiosity and other measures were accounted for, the impact was greatly reduced (with 2.5% of variance explained). Conclusions These findings have direct implications for our understanding of evolution acceptance. We found that, in our sample, evolutionary content knowledge has a statistically significant but relatively small impact. This may account for the general long-term failure of pedagogical changes alone to effect changes in evolution acceptance. To be more successful, future efforts should include increased instruction on the nature of science. As students develop better understandings of the nature of science, this should have a direct, measurable impact on acceptance of evolution, at least in post-secondary settings. Furthermore, we take heart in the finding that openness to experience is important. While this psychological trait may not be simple to teach directly, we should hope that a liberal arts education would effect a change in related epistemological sophistication and hence increase evolution acceptance [which could account for the significant impact of educational attainment on evolution acceptance seen in Barone et al. ( 2014 )]. With regard to religiosity, rather than expecting or effecting change in levels of religiosity, which is neither the province nor the goal of science educators, an effective strategy may be to help reduce students’ perceived conflicts between their religious identities and acceptance of evolution by discussing the matter frankly (Barnes and Brownell 2016 ; Barnes et al. 2017 ). In conclusion, we found that acceptance of evolution is related to a variety of factors, some of which are influenced by formal science education and some of which are not. Among education-related factors, the majority of the impact came from an understanding of the nature of science, which may be underemphasized at many levels of science education.
For factors not related to formal science education, we found that while religiosity explained the largest amount of variation, epistemological sophistication, which can be expected to change with increased educational exposure, was also important. Thus, both sets of factors can contribute meaningfully to changing evolution acceptance. Abbreviations GPA: grade point average MATE: Measure of Acceptance of the Theory of Evolution
Understanding the nature of science is the greatest predictor of evolution acceptance in college students, a new study finds. With a minority of American adults fully accepting evolution, the fundamental principle of biological science, this research provides guidance for educators to improve science literacy. "Asking why it is critical that students accept evolution is almost like asking why it is critical that students understand biology," says Ryan Dunk, biology Ph.D. candidate in the College of Arts and Sciences and the paper's lead author. Evolution's many applications include understanding human disease and the impacts of climate change, he explains. The research was published online on July 17, 2017, in the open-access journal Evolution: Education & Outreach. SpringerOpen also published Dunk's invited blog post on the findings on Sept. 11, 2017. Dunk surveyed introductory anatomy and physiology students at the University of Wisconsin-Milwaukee (UWM) with co-author Benjamin Campbell, UWM associate professor of anthropology. Andrew Petto, UWM distinguished lecturer of biology, and Jason Wiles, Syracuse associate professor of biology, join as co-authors. In this study, Dunk and colleagues used statistical models to pinpoint how an individual's understanding of science, knowledge of evolution, personality traits, religiosity and demographic traits predict student acceptance of evolution. While previous research looked for single traits predicting evolution acceptance, Dunk combined all of the above variables into one comprehensive model. Unlike previous "one variable" models, this comprehensive approach enabled Dunk to see which variables are more important relative to each other. "We were able to look at interactions that are actually going on in people—every individual has a level of knowledge of evolution, a level of their knowledge of nature of science, a level of religiosity, and so on," Dunk says. In this study, evolution acceptance was determined by student responses to a commonly used questionnaire called the Measure of Acceptance of the Theory of Evolution (MATE). The MATE asks questions about the age of the Earth, how organisms have changed over time, how and if humans have changed over time and whether evolutionary theory is supported by historical and laboratory data. The greatest predictor of student evolution acceptance was greater understanding of the nature of science, which includes recognizing the types of questions science can answer and how the scientific method is used to test hypotheses. Intrinsic religiosity, the extent to which an individual relies on religion for decision and opinion making, was also an important factor, but to a lesser extent. Dunk explains that the study's results go against the old strategy of "teach evolution better" to increase acceptance: "We know we don't just need to teach them the facts better, because we have been working on evolution curriculum reforms for decades that have moved the needle very little on wide-scale acceptance." Additionally, the authors don't see religion as a roadblock to fostering evolution acceptance. "Many religious leaders have made peace with evolution," Wiles notes, referring to resources like the Clergy Letter Project and Voices for Evolution. Prior research from the Wiles lab found that students who had become more accepting of evolution had not become any less religiously active.
Wiles explains, "It may be that as students learn more about how science works, they rely more on scientific explanations for natural phenomena, but that doesn't mean they must abandon religion in the process." Dunk, who recently received a $2,492 Rosemary Grant Award (RGA) for Graduate Student Research from the Society for the Study of Evolution, will continue exploring evolution acceptance with his Ph.D. advisor, Wiles. In addition to tracking evolution acceptance across Syracuse undergraduates over two years, the RGA will fund focus groups and individual interviews with select participants. "All the people we survey are currently students. But they're going to be educated members of the general public," Dunk says. "Our work aims to help them have an appreciation for scientific inquiry and nature itself."
10.1186/s12052-017-0068-0
Medicine
New data changes the way scientists explain how cancer tumors develop
Hyun Jung Park et al. 3′ UTR shortening represses tumor-suppressor genes in trans by disrupting ceRNA crosstalk, Nature Genetics (2018). DOI: 10.1038/s41588-018-0118-8 Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-018-0118-8
https://medicalxpress.com/news/2018-05-scientists-cancer-tumors.html
Abstract Widespread mRNA 3′ UTR shortening through alternative polyadenylation 1 promotes tumor growth in vivo 2 . A prevailing hypothesis is that it induces proto-oncogene expression in cis through escaping microRNA-mediated repression. Here we report a surprising enrichment of 3′UTR shortening among transcripts that are predicted to act as competing-endogenous RNAs (ceRNAs) for tumor-suppressor genes. Our model-based analysis of the trans effect of 3′ UTR shortening (MAT3UTR) reveals a significant role in altering ceRNA expression. MAT3UTR predicts many trans-targets of 3′ UTR shortening, including PTEN , a crucial tumor-suppressor gene 3 involved in ceRNA crosstalk 4 with nine 3′UTR-shortening genes, including EPS15 and NFIA . Knockdown of NUDT21 , a master 3′ UTR-shortening regulator 2 , represses tumor-suppressor genes such as PHF6 and LARP1 in trans in a miRNA-dependent manner. Together, the results of our analysis suggest a major role of 3′ UTR shortening in repressing tumor-suppressor genes in trans by disrupting ceRNA crosstalk, rather than inducing proto-oncogenes in cis. Main Widespread 3′ UTR shortening (3′US) through alternative polyadenylation (APA) occurs during enhanced cellular proliferation and transformation 1 , 5 , 6 , 7 , 8 . Recently, we reported that NUDT21 -mediated 3′US promotes glioblastoma growth, further underscoring its significance to tumorigenesis 2 . A prevailing hypothesis is that a shortened 3′ UTR results in activation of proto-oncogenes in cis through escaping microRNA (miRNA)-mediated repression. Indeed, several well-characterized oncogenes, such as CCND1 , have been shown to use 3′US to increase their protein levels, but mostly in cell lines 5 . However, in recent PolyA sequencing 7 and our TCGA RNA sequencing (RNA-Seq) APA analysis 1 (5 and 358 tumor/normal pairs, respectively), most oncogenes with 3′US previously identified in vitro 5 displayed almost no changes in their 3′UTR lengths in tumors (Fig. 1a ). For example, we identified 1,346 recurrent (occurrence rate >20%) 3′US genes in 358 tumor/normal pairs 1 . However, CCND1 is not on that list, as its 3′US occurred in only a very small portion (8 out of 358; 2.2%) of tumors (Fig. 1b ). Furthermore, similar to random genes, 3′US genes from all 5 previous APA studies have little overlap with the top 500 ( P < 0.01) high-confidence oncogenes as defined on the basis of distinct somatic mutational patterns of >8,200 tumor/normal pairs 9 (Fig. 1c ). These results challenge the previous hypothesis and suggest a different function of 3′US in tumorigenesis. Fig. 1: 3′US genes are not strongly associated with oncogenes. a , TCGA RNA-Seq data for CCND1 demonstrate no change in 3′UTR usage between tumors (yellow) and matched normal samples (blue). A similar pattern was also observed in PolyA-seq 7 of CCND1 . b , ΔPDUI values for 3′US genes (red) and all genes (gray) in 358 TCGA tumor/normal pairs 1 (upper panel). A negative ΔPDUI represents 3′UTR shortening. The lower panel shows ΔPDUI values for CCND1 across 358 tumor/normal pairs 1 . Significant CCND1 3′ UTR shortening occurred only in a very small portion (8 out of 358; 2.2%) of tumors. c , Overlap P values and the ratios between previously identified 3′US genes and oncogenes. ‘Random ( n = 100)’ represents the averaged P value from 100 random samplings of 100 RefSeq genes. The error bars represent the standard deviation of P values from 100 random trials.
Full size image Aside from regulating its cognate transcript in cis, the 3′UTR has also been implicated in competing-endogenous RNA (ceRNA) regulation in trans 10 . Although the scope is not fully understood, ceRNA is generally thought to form global regulatory networks (ceRNETs) controlling important biological processes 11 . For example, the tumor suppressor PTEN ’s ceRNAs, CNOT6L and VAPA , have been shown to regulate PTEN and phenocopy its tumor-suppressive properties 12 . As the ceRNA regulatory axis is mostly based on miRNA-binding sites on 3′ UTRs, we hypothesize that when genes with shortened 3′ UTRs no longer sequester miRNAs, the released miRNAs would then be directed to repress their ceRNA partners, such as tumor-suppressor genes, in trans, thereby contributing to tumorigenesis. To test this hypothesis, we first used well-established strategies to reconstruct two ceRNETs from 97 TCGA breast tumors and their matched normal tissues, respectively, based on miRNA-binding-site overlap and co-expression 13 , 14 between genes of active ceRNA regulation ( Methods ). In general, transcripts are less correlated with each other in tumors than in normal tissues, partially due to tumor heterogeneity 15 and global reduction of miRNA expression in tumors 16 (Fig. 2a ). As expected, the loss of co-expression results in a much smaller (tenfold reduced) ceRNET for tumors than for normal tissues (Fig. 2b ). Fig. 2: 3′UTR shortening contributes to ceRNET disruption. a , Pearson’s correlation coefficients of 100,000 randomly selected transcript pairs with significant miRNA-binding-site overlap in breast tumors and matched normal tissues. b , The number of ceRNA pairs in the breast tumor and matched normal ceRNETs. The numbers in parentheses are normalized to the number of edges shared between tumor and normal. c , Gene expression of EPS15 (3′US gene) and PTEN (ceRNA partner) in 68 estrogen-receptor-positive (ERP) breast tumors and matched normal samples. The horizontal lines represent the mean expression values of PTEN , which is decreased in tumors (FDR = 2.1 × 10 −10 ). d , The upper heatmap exhibits significant APA events (rows) across 68 ERP tumor/normal pairs (columns), ranked by the number of 3′US genes. The Venn diagrams show the number of ceRNA pairs in the normal and tumor ceRNETs. The numbers in parentheses are normalized to the number of edges shared between tumor and normal tissues. The P value was calculated from a one-tailed Pearson’s chi-squared test. Full size image To investigate the role of 3′US in ceRNET disruption, we focused on estrogen-receptor-positive (ER + ) breast tumors, which comprise the majority (68/97) of TCGA breast tumor samples. We built normal and tumor ceRNETs using the same procedure as above. Using the DaPars algorithm 1 , we identified 427 3′US genes recurring in >20% of tumors. Close inspection indicates that 3′US is associated with ceRNET disruption. For example, we identified PTEN and EPS15 as a ceRNA pair in the normal ceRNET (4 miRNA-binding-site overlap and ρ = 0.63 co-expression). However, since EPS15 underwent 3′US in 23 (33.8%) out of 68 tumors, thereby losing its capability to compete with PTEN for miRNAs, it lost ( ρ = 0.32) the co-expression (and ceRNA) relationship with PTEN in tumors (Fig. 2c ). Globally, the top 100 ceRNAs with the most significant 3′US genes all lost their interactions in tumors, while 12 out of 100 ceRNAs lacking 3′US retained ( P = 0.0002) their interactions.
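To illustrate the ceRNA-calling rule used for these networks (significant miRNA-binding-site overlap plus strong co-expression; the exact thresholds are given in the Methods), here is a minimal Python sketch. The binary gene-by-miRNA binding matrix and the expression matrix are hypothetical inputs, and this is a sketch of the rule, not the authors' pipeline.

```python
# Minimal sketch: call a gene pair a ceRNA pair if their miRNA-binding-site
# overlap is significant (hypergeometric test, BH-corrected p < 0.05) and
# their expression is strongly correlated (Pearson r > 0.6), per Methods.
# `binding` is a hypothetical binary DataFrame (genes x miRNAs); `expr` is
# a log-expression DataFrame (genes x samples).
import pandas as pd
from scipy.stats import hypergeom, pearsonr
from statsmodels.stats.multitest import multipletests

def cerna_pairs(binding: pd.DataFrame, expr: pd.DataFrame,
                min_sites=6, r_cut=0.6):
    genes = [g for g in binding.index if binding.loc[g].sum() >= min_sites]
    n_mirs = binding.shape[1]
    pairs, pvals = [], []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            k = int((binding.loc[g1] & binding.loc[g2]).sum())  # shared miRNAs
            n1, n2 = int(binding.loc[g1].sum()), int(binding.loc[g2].sum())
            # P(overlap >= k) if g2's miRNAs were drawn at random
            pvals.append(hypergeom.sf(k - 1, n_mirs, n1, n2))
            pairs.append((g1, g2))
    adjusted = multipletests(pvals, method="fdr_bh")[1]
    out = []
    for (g1, g2), p in zip(pairs, adjusted):
        if p < 0.05:
            r, _ = pearsonr(expr.loc[g1], expr.loc[g2])
            if r > r_cut:
                out.append((g1, g2, p, r))
    return out
```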
Furthermore, in separate ceRNETs from 30 tumor/normal pairs with the least and greatest amounts of 3′US (upper panel in Fig. 2d ), more 3′US is clearly associated with more ceRNET loss (38.6 versus 16.4 in fold decrease, P < 1 × 10 −16 , lower panel in Fig. 2d ). From these findings, we conclude that 3′US is strongly associated with ceRNA network disruption in tumors. To understand the function of 3′US-mediated ceRNET disruption, we selected 381 3′US genes and 2,131 of their ceRNA partner genes (3′US ceRNAs), including 591 3′US ceRNA hub and 1,540 3′US ceRNA non-hub genes, in the normal ceRNET (Supplementary Table 1 , Methods ). We hypothesized that 3′US genes released their miRNAs to repress their ceRNA partners in trans. Consistent with our hypothesis, expression changes of the 2,131 3′US ceRNA genes in tumors are anti-correlated ( r = −0.21; P = 5 × 10 −24 ) with the degree of 3′US of the associated 3′US genes (Supplementary Fig. 1a ). As a result, among 976 genes in the normal ceRNET downregulated in tumors, 816 (83.6%) are ceRNAs of 3′US genes. Surprisingly, 3′US ceRNA hub genes are enriched in tumor-suppressor genes ( P ~ 1 × 10 −20 ) but not in oncogenes (Fig. 3a ), suggesting that 3′US represses tumor suppressors in trans. For example, 3′US of EPS15 would contribute to downregulating its ceRNA partner PTEN in tumors (Fig. 2c ). Globally, 160 expressed tumor-suppressor genes from 3′US ceRNAs are more likely to be downregulated than 226 control tumor-suppressor genes not in the ceRNET ( P = 8 × 10 −3 , Fig. 3b ), indicating a significant association between 3′US and tumor-suppressor gene repression. Fig. 3: 3′UTR shortening represses tumor-suppressor genes in TCGA breast cancer. a , Functional enrichment of 3′US ceRNA hub genes (red), 3′US ceRNA non-hub genes (blue), 3′US genes (purple) and random RefSeq genes (gray). We randomly sampled each gene category to the same number (381) 100 times; averaged P values with standard deviation are plotted. b , Relative expression (tumor/normal) of tumor-suppressor genes that are 3′US ceRNAs ( n = 160, left box) is lower than for those that are not in the ceRNET ( n = 226, right box) ( P = 8 × 10 −3 ). c , 3′US genes (left) might repress their ceRNA partner PTEN in trans through miRNAs (middle) commonly released through 3′ UTR shortening. d , A heatmap in the top panel showing APA events for the nine 3′US genes (rows). The boxplots in the bottom panel show PTEN expression levels in 10 tumors with the most (left) or least (right) 3′ UTR shortening. e , Western blot analysis of lysates from MCF7 cells treated with control (Con.) or EPS15 -targeting siRNAs. The image is representative of three independent experiments. f , Quantification of luciferase activity from cell lysates derived from MCF7 cells transfected with a luciferase reporter containing the PTEN 3′ UTR and EPS15 -targeting siRNAs. Data are the average luciferase activity ± standard deviation from three independent experiments ( P = 0.011 and P < 0.001, two-sided t -test). g , PTEN 3′ UTR luciferase reporter activity in MCF7 cells transfected with EPS15 - and DICER -targeting siRNAs. Data are the average luciferase activity ± standard deviation from three independent experiments ( P = 0.045, P = 0.003 and P = 0.645, two-sided t -test). h , Indirect immunofluorescence of MCF7 cells transfected with either a heterologous reporter containing a vector-derived 3′ UTR (Con.) or the EPS15 3′ UTR together with a GFP construct. PTEN was detected by anti-PTEN antibody conjugated with Alexa Fluor-594.
The arrows highlight PTEN + transfected cells. A representative image is shown from three independent experiments. Scale bar, 20 µm. i , The number of PTEN-positive cells among cells transfected with either the EPS15 3′ UTR ( n = 335) or the control 3′ UTR ( n = 357) from three images. Full size image Additional analyses of sequence features partially explain why 3′US genes, but not the tumor suppressors among their ceRNA partners, are likely to have alternative proximal polyadenylation sites, leading to 3′US ( Supplementary Note ). We have also analyzed TCGA 450K methylation array data and found that the 3′US-mediated ceRNA repression is independent of promoter hypermethylation ( Supplementary Note ). To better quantify the trans effects of 3′US, we developed a mathematical model (MAT3UTR) based on 3′US gene expression, 3′US level, miRNA-binding sites and miRNA expression ( Methods ). For 1,548 differentially expressed 3′US ceRNAs, MAT3UTR can explain 47.6% of the variation in gene expression (Supplementary Fig. 3c ). In contrast, the MAT3UTR-control model, which considers miRNA expression but not 3′US, explains only 27.2% of the variation (Supplementary Fig. 3d ), consistent with previous reports 17 that miRNA alone has a weak role in regulating gene expression. These results suggest that the trans effects of 3′US play a major role in regulating ceRNA gene expression. MAT3UTR predicts many trans-target genes of 3′US, including PTEN , in ceRNA crosstalk 11 , 12 , 13 (top 1% MAT3UTR score, Supplementary Table 2 ). In the normal ceRNET, PTEN is predicted to be a ceRNA of nine 3′US genes (Fig. 3c ). When we ranked 97 breast tumor/normal pairs by the amount of 3′US across these nine genes (upper panel in Fig. 3d ), tumors with more 3′US showed more downregulation of PTEN ( P = 0.03, lower panel in Fig. 3d ). Furthermore, MAT3UTR can explain 86.9% of the variation in PTEN ’s expression across tumors (Supplementary Fig. 3g), suggesting that the trans effects of 3′US play a major role in downregulating PTEN . To empirically test the hypothesis that 3′US can downregulate PTEN in trans, we focused on EPS15 among the nine 3′US genes ( Methods ). We observed that depletion of EPS15 by siRNA in MCF7 cells reduces PTEN expression (Fig. 3e ). To ascertain whether this effect depends on miRNA-based targeting of the PTEN 3′ UTR, we used a luciferase reporter vector with the PTEN 3′ UTR (pLightSwitch-PTEN 3′ UTR) to test the effect of EPS15 knockdown on its expression. We observed that reduction of EPS15 reduces PTEN 3′ UTR luciferase activity (Fig. 3f ). To further understand whether the crosstalk is miRNA-dependent, we depleted DICER1 to abolish miRNA biogenesis and found that loss of DICER1 can relieve the influence of EPS15 knockdown on PTEN 3′ UTR expression (Fig. 3g ). Finally, overexpression of the EPS15 3′ UTR increased the number of PTEN-positive cells (Fig. 3h,i ). Thus, EPS15 3′US may impact PTEN expression. To gain insights into the global cause-and-effect relationship between 3′US and the repression of tumor-suppressor genes, we revisited our previous data from NUDT21 -knockdown HeLa cells, since NUDT21 is one of the master regulators of 3′US 2 . We identified 1,168 3′US ceRNAs in NUDT21 -knockdown cells solely on the basis of significant miRNA-binding-site overlap with 1,450 3′US genes, since co-expression cannot be effectively estimated from two replicates of our experiments.
With 9,914 expressed RefSeq genes with no significant miRNA-binding-site overlap with 3′US genes as random controls, the tumor-suppressor genes remain strongly enriched in 3′US ceRNAs ( P ~ 1 × 10 −38 , Fig. 4a ). Among 57 tumor-suppressor genes in 3′US ceRNAs, 33 (57.9%) showed repression in NUDT21 -knockdown samples, whereas a smaller portion (44.5%) of 339 control tumor-suppressor genes showed repression ( P ~ 0.03, Fig. 4b ), suggesting that NUDT21 -mediated 3′US represses tumor-suppressor genes in trans. In spite of potentially higher false-positive rates due to the lack of a co-expression filter in ceRNA identification, these results are highly consistent with our observations in TCGA breast cancer. On the basis of these results, we posit that repression of tumor-suppressor ceRNAs would correlate with increased occupancy of AGO2 in the RISC complex. To formally test this hypothesis, we isolated cytoplasmic fractions from control or NUDT21 -knockdown cells and conducted RNA immunoprecipitation (RIP) using anti-AGO2 antibodies. On average, we observed ~200-fold enrichment of ceRNAs in AGO2 RIP complexes relative to control IgG (Supplementary Fig. 4b ). Reduced expression of NUDT21 does not impact AGO2/DICER1 expression or GAPDH messenger RNA binding to AGO2 (Fig. 4c,d and Supplementary Fig. 4b ). Furthermore, we sequenced miRNAs from control and NUDT21 -knockdown cells, and found that miRNAs are equally likely to be upregulated or downregulated (Supplementary Fig. 4d ), ruling out a general effect on miRNA biogenesis. Importantly, we could detect increased association of multiple tumor-suppressor ceRNAs with AGO2 following NUDT21 depletion, ranging from 1.5-fold to nearly 7-fold (Fig. 4d ). These results demonstrate that 3′US can lead to reduction of tumor-suppressor genes through their increased association with repressive AGO2 complexes. Fig. 4: NUDT21 -mediated 3′ UTR shortening causes tumor-suppressor repression in trans. a , Oncogene or tumor-suppressor gene enrichment of 3′US ceRNAs (red), 3′US genes (blue) and RefSeq genes (gray) in the NUDT21 -knockdown (KD) experiment. We randomly sampled each gene category to the same number ( n = 1,168) 100 times, and reported the averaged P values with standard deviation (error bars). b , Expression change of tumor-suppressor genes that are 3′US ceRNAs ( n = 57, left box) or that are not connected to 3′US genes ( n = 339, right box). 3′US ceRNA tumor-suppressor genes showed lower expression in NUDT21 -knockdown samples ( P = 0.03). c , NUDT21 was knocked down in HeLa cells using CRISPR/Cas9, and reduced NUDT21 was detected by western blot analysis in three independent experiments. d , RIP was performed with anti-AGO2 antibody; normal mouse IgG served as a control. The RIP complexes were detected by western blot with a distinct AGO2 antibody from rat (inset). The indicated ceRNAs associated with AGO2 enrichment in NUDT21 -knockdown cytoplasmic lysates versus the control are shown with average fold change ± standard deviation from three independent assays ( P = 0.0002, P = 5.2 × 10 −6 , P = 0.0004, P = 0.0005, P = 0.0006, P = 0.01 and P = 5.47 × 10 −7 , two-sided t -test, ** P < 0.001, * P < 0.01). Full size image To further validate the miRNA-dependent, repressive trans effects of 3′US, we monitored expression of the tumor-suppressor genes PHF6 and LARP1 and their ceRNA partners, YOD1 and LAMC1 (Supplementary Table 3 ).
We consistently observed that PHF6 and LARP1 expression levels were decreased in NUDT21 -knockdown cells while both YOD1 and LAMC1 expression levels were increased (Fig. 5a ). To determine whether the 3′ UTR mediated this effect, we transfected luciferase reporters containing the 3′ UTR of either PHF6 or LARP1 into control or NUDT21 -knockdown cells and measured luciferase activity. We found that both reporters were downregulated after NUDT21 knockdown (Fig. 5b ). Both PHF6 and LARP1 have been shown to be tumor-suppressor genes 9 , 18 , 19 , and downregulation of PHF6 or LARP1 in HeLa cells increases cell growth, confirming their tumor-suppressive activity (Supplementary Fig. 5 ). Fig. 5: NUDT21 -mediated 3′UTR shortening represses the tumor-suppressor genes PHF6 and LARP1 . a , Western blot of 3′US ceRNA tumor-suppressor genes ( PHF6 / LARP1 ) and 3′US genes ( LAMC1 / YOD1 ) in NUDT21 -knockdown cells. A representative image is shown from three independent experiments. b , Activity of the PHF6 3′ UTR and LARP1 3′ UTR luciferase reporter constructs in NUDT21 -knockdown cells relative to control siRNA-transfected cells. The data are the average of luciferase activity ± standard deviation from three independent experiments ( P = 0.037 and 0.05; P = 0.016 and 0.025, two-sided t -test). c , NUDT21 knockdown induces 3′ UTR shortening and upregulation of YOD1 , allowing miR-3187-3p and miR-549 to repress PHF6 . d , Western blot analysis using the indicated antibodies on lysates from HeLa cells transfected with siRNA for NUDT21 (si- NUDT21 -4) and two antagomirs that block miR-549 and miR-3187-3p. The image is representative of two independent experiments. e , Activity of the PHF6 3′ UTR luciferase reporter construct in HeLa cells with the indicated siRNAs, miRNAs or antagomirs. The data are the average of luciferase activity ± standard deviation from three independent experiments ( P = 0.004, P = 0.90, P = 0.018 and P = 0.015, two-sided t -test). f , Western blot analysis of cell lysates from cells transfected with either control siRNA or YOD1 siRNA. In the third lane, the cells were transfected with YOD1 siRNA and then transfected with YOD1 cDNA. The data are representative of three independent experiments. g , Activity of the PHF6 3′ UTR luciferase reporter in cells treated with the same experimental design as in f . The data are the average of luciferase activity ± standard deviation from three independent experiments ( P = 0.016 and P = 0.01, two-sided t -test). h , Activity of the PHF6 luciferase reporter in cells transfected with the indicated siRNAs. The data are the average of luciferase activity ± standard deviation from three independent experiments ( P = 0.025, P = 0.009 and P = 0.99, two-sided t -test). Full size image To further investigate the mechanism of tumor-suppressor ceRNA downregulation, we chose PHF6 on the basis of MAT3UTR analysis and experimental results ( Methods ). We selected two miRNAs targeting PHF6 (Fig. 5c ), which were released by 3′US of YOD1 (miR-3187-3p as the highest and miR-549 as the sixth highest in terms of \({\beta }_{{{\rm{miR}}}_{j}}\) in equation ( 3 ); Methods and Supplementary Table 4 ). Neither of these miRNAs was found to change its expression following NUDT21 knockdown (Supplementary Fig. 4d ). However, PHF6 expression was partially rescued by an antagomir blocking the activity of miR-549 and completely rescued by an antagomir targeting miR-3187-3p (Fig. 5d ).
Moreover, PHF6 3′ UTR-mediated luciferase activity was partially rescued by the miR-3187-3p antagomir or YOD1 siRNA (Fig. 5e ). To understand whether reduced expression of PHF6 depends on YOD1 levels, we transfected YOD1 cDNA into cells depleted of YOD1 and found that re-expression of YOD1 could not restore either the expression of endogenous PHF6 (Fig. 5f ) or the expression of the PHF6 3′ UTR-mediated luciferase (Fig. 5g ), suggesting that the trans effect on PHF6 is due to the 3′ UTR of YOD1 . Finally, to determine whether the crosstalk between PHF6 and YOD1 is miRNA-dependent, we showed that depletion of DICER1 abolishes the crosstalk between PHF6 and YOD1 (Fig. 5h ). Collectively, the data strongly suggest that NUDT21 -mediated 3′US causes tumor-suppressor gene repression in trans in a miRNA-dependent manner. Although analyzing ceRNA crosstalk in light of 3′US has been briefly suggested 20 , 21 , 22 , our MAT3UTR analysis of RNA-Seq data from 97 breast tumor/normal pairs followed by functional validation suggests a widespread causal role of 3′US in repressing tumor-suppressor genes in trans. While the trans effect further emphasizes the importance of APA in tumor progression, it also provides an additional layer of gene regulation and underscores the need for further investigation into other potential mechanisms 23 , 24 that could perturb ceRNA crosstalk, such as RNA editing and competition with RNA-binding proteins. Methods Tumor-suppressor genes and oncogenes The tumor-suppressor genes and oncogenes used in this study were defined by the TUSON algorithm from genome sequencing of >8,200 tumor/normal pairs 9 , namely residue-specific activating mutations for oncogenes and discrete inactivating mutations for tumor-suppressor genes. TUSON is a computational method that analyzes patterns of mutation in tumors and predicts the likelihood that any individual gene functions as a tumor-suppressor gene or oncogene. We ranked genes by their TUSON prediction P values from the most to the least significant and used the top 500 genes ( P < 0.01) as the reference tumor-suppressor genes or oncogenes. After removing 30 genes in common, 470 tumor-suppressor genes and oncogenes were used for the enrichment analysis. Note that there were very few breast tumor-specific tumor-suppressor genes and oncogenes (36 and 3 with breast q -value ≤ 0.5, respectively) and 90% of them were found in the top 500 pan-cancer predictions. Previously identified 3′US genes in cancers Xia et al. identified 1,187 3′US genes across 7 TCGA cancer types 1 . Mayr and Bartel selected 23 3′US genes from 27 cancer cell lines 5 . Fu et al. identified 428 3′US genes in human breast cancer cell lines 6 . Lin et al. reported 120 3′US genes in major cancers and tumor cell lines 7 . Morris et al. found 286 3′US genes in human colorectal tumor samples 8 . The 3′US genes of Xia et al. were randomly sampled to 100 genes for a fair comparison. Selection of miRNA-binding sites Predicted miRNA-binding sites were obtained from TargetScanHuman version 6.2 25 . Only those with a preferentially conserved targeting score (Pct) greater than 0 were used 1 . Experimentally validated miRNA-binding sites were obtained from TarBase version 5.0 26 , miRecords version 4 27 and miRTarBase version 4.5 28 . The binding sites found in indirect studies such as microarray experiments and high-throughput proteomics measurements were filtered out 29 . Another source is the microRNA target atlas composed of public AGO-CLIP data 30 with significant binding sites ( q -value <0.05).
The predicted and validated binding site information was then combined for use in this study. TCGA breast tumor RNA-Seq and miRNA-Seq data Quantified gene expression files (RNASeqV1) for primary breast tumors (TCGA sample code 01) and their matching solid normal samples (TCGA sample code 11) were downloaded from the TCGA Data Portal 31 . We used 97 breast tumor samples that have matched normal tissues. A total of 10,868 expressed RefSeq genes (fragments per kilobase of transcript per million mapped reads (FPKM) ≥ 1 in >80% of all samples) were selected for downstream analyses. To better quantify gene expression in the presence of 3′US, we used only coding regions (CDS) to quantify mRNA expression. Exon and CDS annotation for TCGA data and miRNA expressions (syn1445790) were downloaded from Sage Bionetworks’ Synapse database. CeRNA identification in TCGA breast tumors CeRNAs were identified by miRNA-binding-site overlap and expression correlation 13 , 14 . Only microRNAs with intermediate expression (between 0.01 and 100 in averaged fragments per million mapped fragments (FPM)) were used to capture dynamic interactions 14 . After removing genes with fewer than six such miRNA-binding sites, gene pairs with significant miRNA-binding-site overlap (<0.05 in Benjamini–Hochberg-corrected P value) were selected. Among them, pairs correlated (>0.6 in Pearson’s correlation coefficient) ( P < 1 × 10 −10 ) in gene expression were defined as ceRNAs. To account for mRNAs with variable 3′ UTRs, we used only CDS to quantify mRNA expression. Genes that are connected to >500 ceRNAs were defined as hub genes. Model-based analysis of trans effect of 3′US (MAT3UTR) Suppose transcript x has a constitutive proximal 3′ UTR (pUTR) and a distal 3′ UTR that might be shortened in tumors (dUTR) (Supplementary Fig. 3a ). We define \({\rm{MiRs}}(x,\,{{\rm{miR}}}_{j})\) as the amount of binding sites for miRNA \({{\rm{miR}}}_{j}\) in x . $${\rm{MiRs}}(x,{{\rm{miR}}}_{j})=\left({\rm{pUTR}}(x,{{\rm{miR}}}_{j})+{\rm{dUTR}}(x,{{\rm{miR}}}_{j})\times {\rm{PDUI}}(x)\right)\times {\rm{FPKM}}(x)$$ (1) where \({\rm{pUTR}}(x,\,{{\rm{miR}}}_{j})\) and \({\rm{dUTR}}(x,\,{{\rm{miR}}}_{j})\) are the numbers of \({{\rm{miR}}}_{j}\) binding sites in the pUTR and dUTR of x , and \({\rm{FPKM}}(x)\) is the expression of x . \({\rm{PDUI}}\) indicates the percentage of dUTR usage index 1 . Note that equation ( 1 ) can also be evaluated for genes with no distal 3′ UTR by setting \({\rm{PDUI}}=1\). To estimate the trans effect of 3′US on gene y ′, we define X to be a set of 3′US genes that are ceRNA partners of y ′ (Supplementary Fig. 3b ) and Y to be a set of ceRNA partners to \(x\in X\) , including y ′. Only moderately expressed miRNAs are considered, since they are likely to bind all possible binding sites. Thus, we can roughly use the amount of miRNA-binding sites to represent the miRNA function. The \({{\rm{miR}}}_{j}\) -binding effect on each copy of y ′ can be defined as follows: $${\rm{TransE}}\left(y^{\prime} ,\,{{\rm{miR}}}_{j}\right)=\frac{{\rm{FPM}}\left({{\rm{miR}}}_{j}\right)}{{\sum }_{x\in X}{\rm{MiRs}}\left(x,{{\rm{miR}}}_{j}\right)+{\sum }_{y\in Y}{\rm{MiRs}}\left(y,{{\rm{miR}}}_{j}\right)}$$ (2) where \({\rm{FPM}}\left({{\rm{miR}}}_{j}\right)\) is the \({{\rm{miR}}}_{j}\) expression level. Since a miRNA can bind to any of its binding sites in the genes connected by the ceRNA relationship ( \(X\cup Y\) ), both X and Y need to be considered.
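As a plain-code reading of equations (1) and (2), the short sketch below computes MiRs and TransE from per-gene tables of binding-site counts, PDUI and FPKM. The dictionary-based inputs are hypothetical, and this is a sketch rather than the released MAT3UTR implementation.

```python
# Sketch of equations (1)-(2): the effective miRNA-binding-site dosage of a
# transcript (MiRs) and the per-copy binding pressure a miRNA exerts on the
# trans-target y' (TransE). Inputs are hypothetical dictionaries keyed by
# gene; `putr`/`dutr` map gene -> {miRNA: number of binding sites}.

def mirs(gene, mir, putr, dutr, pdui, fpkm):
    """Equation (1): (pUTR sites + dUTR sites * PDUI) * FPKM.
    For genes with no distal 3' UTR, set PDUI = 1."""
    sites = putr[gene].get(mir, 0) + dutr[gene].get(mir, 0) * pdui[gene]
    return sites * fpkm[gene]

def trans_e(mir, X, Y, fpm, putr, dutr, pdui, fpkm):
    """Equation (2): miRNA expression (FPM) divided by the total
    binding-site dosage across the 3'US genes X and the ceRNA partners Y."""
    total = sum(mirs(g, mir, putr, dutr, pdui, fpkm) for g in X) \
          + sum(mirs(g, mir, putr, dutr, pdui, fpkm) for g in Y)
    return fpm[mir] / total if total > 0 else 0.0
```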
The high-dimensional MAT3UTR input data are often highly correlated with each other (for example, 588 miRNAs in equation ( 2 )). Therefore, MAT3UTR employs ridge regression, which is known to address dimensionality and collinearity 32 , 33 in biological data. Indeed, ridge regression yields markedly higher prediction power than classical linear regression. For example, MAT3UTR has a much smaller mean square error (0.38) than classical linear regression (mean square error = 10.84) (Supplementary Fig. 3f ). $${\rm{MAT3UTR}}(y^{\prime})=\sum _{{{\rm{miR}}}_{j}\in 3^{\prime}{\rm{UTR}}(y^{\prime})}{\beta }_{{{\rm{miR}}}_{j}}\times {\rm{log}}\,\frac{{\rm{TransE}}{(y^{\prime},{{\rm{miR}}}_{j})}_{{\rm{tumor}}}}{{\rm{TransE}}{(y^{\prime},{{\rm{miR}}}_{j})}_{{\rm{normal}}}}+{\epsilon }_{y^{\prime}}$$ (3) subject to \({\sum }_{{{\rm{miR}}}_{j}\in 3^{\prime}{\rm{UTR}}(y^{\prime})}{\beta }_{{{\rm{miR}}}_{j}}^{2}\le t\) , the ridge regression penalty. \({\rm{MAT3UTR}}(y^{\prime})\) is the trans effect of 3′US; \({\beta }_{{{\rm{miR}}}_{j}}\) is the regression coefficient of \({{\rm{miR}}}_{j}\) ; \({\epsilon }_{y^{\prime} }\) is the gene-specific error term. We use \(R^{2}\) to show how much variation in gene expression can be explained by the MAT3UTR model. We also used 10-fold cross-validation (CV) to choose the optimal regularization parameter t , with 75% of the data for training and the remaining 25% for testing. CV error is measured by mean-squared error. Then, to estimate β , we fit the ridge regression with the entire data set using the regularization parameter chosen by CV. As a result, y ′ would be more repressed following 3′US if: y ′ contains more miRNA-binding sites in its 3′ UTR; X and Y contain fewer miRNA-binding sites; and more transcripts in X undergo 3′US. The MAT3UTR-control model, which considers miRNA expression but not 3′US, is defined as: $${\rm{MAT3UTR}}\text{-}{\rm{control}}\,(y^{\prime})=\sum _{{{\rm{miR}}}_{j}\in 3^{\prime}{\rm{UTR}}(y^{\prime})}{\beta }_{{{\rm{miR}}}_{j}}\times {\rm{log}}\,\frac{{\rm{FPM}}{({{\rm{miR}}}_{j})}_{{\rm{tumor}}}}{{\rm{FPM}}{({{\rm{miR}}}_{j})}_{{\rm{normal}}}}+{\epsilon }_{y^{\prime}}$$ (4) where \({\rm{FPM}}\left({{\rm{miR}}}_{j}\right)\) is the \({{\rm{miR}}}_{j}\) expression level. For model comparison between MAT3UTR and MAT3UTR-control, we randomly selected 75% of the data for training and the remaining 25% for testing. We performed this random division 100 times to evaluate the performance of the MAT3UTR and MAT3UTR-control models, where 10-fold CV also confirms that MAT3UTR has a 2-fold higher prediction power on gene expression variation than the MAT3UTR-control model (Supplementary Fig. 3e ). Selecting genes for experimental validations To test the trans-repressive effect of 3′US on PTEN, we chose EPS15 on three grounds. First, its expression is easily detected in MCF-7 cells; second, analysis of RNA-Seq from MCF-7 cells 34 indicates distal polyA site usage of the EPS15 transcript; third, the EPS15 3′ UTR contains four microRNA target sites that compete with the PTEN 3′ UTR.
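Rounding off the MAT3UTR description, a minimal sketch of the cross-validated ridge fit in equation (3) follows, assuming the per-miRNA log tumor/normal TransE ratios from the previous sketch have been assembled into a targets-by-miRNAs feature matrix. scikit-learn's RidgeCV stands in for the CV-based choice of the penalty here; the released MAT3UTR program may differ in its details.

```python
# Sketch of the MAT3UTR ridge regression (equation (3)): predict the log
# tumor/normal expression change of each trans-target y' from the log
# TransE ratios of the miRNAs binding its 3' UTR. X_feat rows are targets,
# columns are miRNAs; y_obs is the observed log expression change.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def fit_mat3utr(X_feat: np.ndarray, y_obs: np.ndarray, seed=0):
    # 75/25 split as in the Methods; alphas form the ridge penalty grid,
    # selected by 10-fold cross-validation on the training portion.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_feat, y_obs, test_size=0.25, random_state=seed)
    model = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=10).fit(X_tr, y_tr)
    r2 = model.score(X_te, y_te)  # variance explained on held-out data
    return model, r2
```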
To investigate the tumor-suppressor ceRNA downregulation mechanism, we chose PHF6 , because among 57 tumor-suppressor genes in 3′US ceRNAs, PHF6 was predicted as a strong (sixth highest in MAT3UTR score, Supplementary Table 3 ) trans-target of 3′US, was significantly downregulated (second highest in gene expression) and was the most enriched in AGO2 RIP complexes of the ceRNAs tested (Fig. 4d ). Statistical analyses Differential expression analyses were carried out by edgeR (version 3.8.6) 35 (tumor samples versus normal samples) with false discovery rate (FDR) control at 0.05. The significance of observed values for a particular class compared to its control is calculated from a one-tailed Pearson’s χ 2 test. Each variable follows either a binomial or multinomial distribution and each case consists of at least five counts, which meets the assumption of Pearson’s χ 2 test. To test whether there is a significant enrichment of tumor-suppressor genes or oncogenes among a gene list of interest, we conducted hypergeometric tests with normalized overlap counts, since assessing overlap between sets meets all criteria for hypergeometric tests, including trials without replacement. To compare means of two groups that have different variances, we used Welch’s t -test, which does not assume equal population variance. To check the normality assumption for the t -test, we conducted a Shapiro-Wilk normality test for small samples ( n < 50). All statistical computations were performed in the Python scipy stats package (version 0.15.1) or R (version 3.1.1). RNA-Seq for NUDT21 depletion experiment We previously sequenced two control and two NUDT21 depletion samples of HeLa cells by HiSeq 2000 (LC Sciences) 2 . After trimming adaptors using Trim Galore (version 0.4.1), 101-base-pair paired-end RNA-Seq reads were used to reconstruct the transcriptome in the Tuxedo protocol 36 (TopHat 2.0.6 and Cufflinks 2.1.1). The resulting FPKM values were normalized for comparison using Cuffdiff 2.2.0. Further analyses are based on 10,681 expressed (FPKM ≥ 1 in >3 samples) RefSeq genes. We sequenced miRNAs from control and NUDT21 -knockdown cells to utilize only miRNAs with intermediate expression in ceRNA identification. CeRNA identification in the NUDT21 -knockdown experiment in the HeLa cell line Due to the small sample size (two each for the wild-type and NUDT21 -knockdown conditions), ceRNAs were identified solely on the basis of miRNA-binding-site overlap. We considered only binding sites for miRNAs with intermediate expression (between 0.01 and 100 in averaged FPM). A total of 1,450 3′US genes identified by DaPars had significant miRNA-binding-site overlap with 1,168 ceRNA genes (3′US ceRNA partners). MiRNA-Seq for the NUDT21 depletion experiment HeLa cells were transfected with control or NUDT21 siRNA. NUDT21 depletion was validated as previously described 2 . Small RNA libraries were generated from one control and one NUDT21 depletion sample using the Illumina Truseq Small RNA Preparation kit, and sequenced on Illumina GAIIx. Raw sequencing reads (40 nucleotides) were obtained using Illumina’s Sequencing Control Studio software following image analysis and base-calling by Illumina’s Real-Time Analysis (v 1.8.70). Then the script ACGT101-miR v 4.2 (LC Sciences) was used for data analysis, where reads are mapped to the reference database (miRBase). The script also normalizes the counts by a library size parameter for comparison.
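For the enrichment and group-comparison statistics described above, a minimal scipy sketch is shown below; the function arguments are hypothetical gene-set sizes and expression vectors, not values from the study.

```python
# Sketch of two tests from the Statistical analyses section: a
# hypergeometric enrichment test (e.g., tumor-suppressor genes among
# 3'US ceRNAs) and Welch's t-test for groups with unequal variance.
from scipy.stats import hypergeom, ttest_ind, shapiro

def enrichment_p(hits, set_size, annotated, universe):
    """P(X >= hits) when drawing `set_size` genes from a universe of
    `universe` genes, of which `annotated` carry the label of interest."""
    return hypergeom.sf(hits - 1, universe, annotated, set_size)

def compare_groups(a, b):
    """Welch's t-test (equal_var=False drops the equal-variance
    assumption); Shapiro-Wilk normality is checked first for small n."""
    if min(len(a), len(b)) < 50:
        _ = shapiro(a), shapiro(b)  # normality check, as in the Methods
    return ttest_ind(a, b, equal_var=False)
```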
CeRNA tumor-suppressor repression in HeLa cells with NUDT21 knockdown Parental HeLa cells were purchased from ATCC (cat. no. CCL-2) and maintained in Eagle’s minimum essential medium (Lonza, cat. no. 12-604F) with 10% fetal bovine serum. The cells were made mycoplasma-free by incubating with Plasmocin (InvivoGen, cat. no. ant-MPT) for two weeks before transfection with three different siRNAs for NUDT21 (Sigma Aldrich, ID: SASI_Hs01_00146875~77) and negative control siRNA (Sigma Aldrich, ID:SIC002) using previously established approaches 2 . Western blotting was also performed as described in our previous work 2 using antibodies raised against: PHF6 (Santa Cruz, cat. no. sc-271767), YOD1 (abcam, ab178979), NUDT21 (Proteintechlab, cat. no. 10322-1-AP) and GAPDH (Sigma, G9545). To block miRNA function, we selected two miRNAs with a strong trans effect targeting PHF6 (miR-3187-3p and miR-549), and HeLa cells were co-transfected with siRNA for NUDT21 and two antagomirs blocking the predicted miR-549 and miR-3187-3p sites in the PHF6 3′ UTR. The two antagomirs were designed 37 and synthesized by Sigma-Genosys: Antagomir-3187-3p: 5′-[mU]s[mU]s[mG][mG][mC][mC][mA][mU][mG][mG][mG][mG][mC][mU][mG][mC][mG]s[mC]s[mG]s[mG]s-chol-3′; and Antagomir-549: 5′-[mU]s[mG]s[mA][mC][mA][mA][mC][mU][mA][mU][mG][mG][mA][mU][mG][mA][mG][mC]s[mU]s[mC]s[mU]s-chol-3′. PHF6 and YOD1 expression were detected by western blotting and quantified by Image Lab software (version 5.2.1) from Bio-Rad. Detection of ceRNA tumor-suppressor gene enrichment by RIP with quantitative PCR HeLa cells were seeded in a 6-well plate at 4 × 10 5 cells per well and transfected with a Cas9 and single-guide RNA (sgRNA) plasmid targeting NUDT21 or with Cas9 and GFP as a control. sgRNAs for NUDT21 (top, ccggccgcccaatcgctcgcagac; bottom, aaacgtctgcgagcgattgggcgg) were synthesized (Sigma), and the annealed double-stranded DNA was cloned into pGL3-U6-sgRNA-PGK-puromycin. The transfected cells from three wells were combined and then selected with 10 µg ml −1 blasticidin for three days. NUDT21 -knockdown efficiency was determined by western blot with NUDT21 antibody. RIP was performed with anti-AGO2 antibody, and AGO2-associated RNAs were purified and measured by quantitative real-time PCR 38 . Briefly, the cells were harvested and lysed with 100 µl polysome lysis buffer (100 mM KCl, 5 mM MgCl 2 , 10 mM Hepes pH 7.0, 0.5% NP50, 1 mM DTT and 1×PI cocktail). The cell lysate was centrifuged at 10,000 g for 15 min and added to magnetic beads (A+G) with 5 µg anti-AGO2 antibody or normal mouse IgG suspended in 900 µl of NET2 buffer (50 mM Tris-Cl pH 7.4, 150 mM NaCl, 1 mM MgCl 2 , 0.05% NP-40, 17.5 mM EDTA pH 8.0, 1 mM DTT and 100 units ml −1 RNaseOUT). The beads were washed six times with NT2 buffer (50 mM Tris-Cl pH 7.4, 150 mM NaCl, 1 mM MgCl 2 , 0.05% NP-40). Beads were resuspended in 150 µl proteinase K buffer (50 mM Tris-Cl pH 7.4, 150 mM NaCl, 1 mM MgCl 2 , 0.05% NP-40 and 1% SDS) with 9 µl proteinase K. Samples were incubated at 55 °C for 30 min, and total RNA was isolated with 150 µl phenol–chloroform. The total RNA was reverse transcribed and the candidate ceRNAs were determined by quantitative real-time PCR using primers described in Supplementary Table 5 (Bio-Rad real-time PCR system). LightSwitch luciferase reporter assay with PTEN , PHF6 and LARP1 3′ UTR LightSwitch luciferase reporter constructs with PTEN , PHF6 and LARP1 3′ UTR were purchased from SWITCHGEAR genomics.
Briefly, HeLa cells were seeded in a 96-well white TC plate in 100 µl total volume to yield ≥80% confluence at the time of transfection. For each transfection, the following reagents were combined: 50 nM siRNA and/or miRNAs and/or antagomir RNA, 3.33 µl of individual GoClone reporter (30 ng µl −1 ) and 1 ng of Rluc reporter. Lipofectamine 2000 was diluted in OPTI-MEM medium at 1:10, incubated at room temperature for 5 min and then added to each tube. Following a 20-min incubation at room temperature, 80 µl of pre-warmed (37 °C) OPTI-MEM medium per replicate was added for a total of 100 µl per replicate transfection. All 100 µl of the transfection mixture was added to each well and incubated overnight. The luciferase reporter assays were performed according to the manufacturer’s protocol (Invitrogen). Immunofluorescence staining for PTEN in MCF7 cells with EPS15 3′ UTR The pLightSwitch- EPS15 3′ UTR construct was purchased from SWITCHGEAR genomics and transfected into MCF7 cells. PTEN expression was detected by immunofluorescence staining with anti-PTEN antibody from Cell Signaling. Briefly, 1 × 10 5 MCF7 cells were seeded in 4-well chamber slides overnight, and transfected with pLightSwitch- EPS15 3′UTR/GFP constructs at 10:1 or pLightSwitch-3′ UTR/GFP constructs as a control. One day after transfection, the cells were fixed with 90% cold methanol at −20 °C overnight. The next day, 0.5% Triton X-100 in PBS was added and incubated at room temperature for 30 min. Samples were blocked in 3% BSA in PBS at room temperature for 1 h. PTEN antibody was used at 1:200 dilution in 3% BSA/PBS, and 200 μl per well was added to the chamber slides and incubated for 1 h at room temperature. After washing three times, the cells were incubated with Alexa-594-conjugated secondary antibody in 3% BSA/PBS for 1 h at room temperature, in the dark. The cells were rinsed three times with PBS, with the third wash containing DAPI. The coverslips were mounted in anti-fade mounting medium and detected by immunofluorescence microscopy. Both PTEN - and GFP -positive cells were counted in EPS15 3′ UTR/GFP cells and pLightSwitch-3′ UTR/GFP control cells. Reporting Summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Code availability The open source MAT3UTR program (version 0.9.2) is freely available at with necessary example data for this analysis. Data availability Raw and processed miRNA-Seq data for the NUDT21 -depletion experiment have been deposited to GEO under the accession number GSE78198 .
A collaborative research team has uncovered new information that more accurately explains how cancerous tumors grow within the body. This study is currently available in Nature Genetics. Researchers led by scientists at The University of Texas Medical Branch at Galveston and Baylor College of Medicine found that losing a section of messenger RNA that was previously thought to transform normal cells into cancerous ones actually acts by blocking the body's ability to suppress the formation of tumors. The finding could completely alter the way that medical science approaches the formation of tumors. In molecules throughout the body, the three-prime untranslated region, or 3'UTR, is a section of messenger RNA that can alter gene expression. It's known that shortening this RNA section promotes cancerous tumor growth. "Researchers have historically thought that this was because 3'UTR shortening induces the expression of proto-oncogenes, normal genes that, when altered by mutation or expressed too highly, become oncogenes that can transform a normal cell into a cancer cell," said Eric Wagner, UTMB associate professor in the department of biochemistry and molecular biology. "However, using a combination of computational approaches and cancer cell models, we found that 3'UTR shortening in tumors actually causes tumor-suppressing genes to be turned off." In the study, the researchers used "big data" analyses to reconstruct the RNA thought to form global regulatory networks within breast tumor cells and their matched normal tissues. This approach identified the fact that 3'UTRs are vital in regulating these global regulatory networks. Using this new information, they then disrupted these networks within breast cancer cells to test the effects on tumor growth.
10.1038/s41588-018-0118-8
Nano
Generating ultra-violet lasers with near-infrared light through 'domino upconversion' of nanoparticles
Tianying Sun et al, Ultralarge anti-Stokes lasing through tandem upconversion, Nature Communications (2022). DOI: 10.1038/s41467-022-28701-1 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-28701-1
https://phys.org/news/2022-05-ultra-violet-lasers-near-infrared-domino-upconversion.html
Abstract Coherent ultraviolet light is important for applications in environmental and life sciences. However, direct ultraviolet lasing is constrained by fabrication challenges and operating costs. Herein, we present a strategy for the indirect generation of deep-ultraviolet lasing through a tandem upconversion process. A core–shell–shell nanoparticle is developed to achieve deep-ultraviolet emission at 290 nm by excitation in the telecommunication wavelength range at 1550 nm. The ultralarge anti-Stokes shift of 1260 nm (~3.5 eV) stems from a tandem combination of distinct upconversion processes that are integrated into separate layers of the core–shell–shell structure. By incorporating the core–shell–shell nanoparticles as gain media into a toroid microcavity, single-mode lasing at 289.2 nm is realized by pumping at 1550 nm. As various optical components are readily available in the mature telecommunication industry, our findings provide a viable solution for constructing miniaturized short-wavelength lasers that are suitable for device applications. Introduction Luminescent materials that convert excitation photons into prescribed emissions are at the core of many photonics technologies such as varicolored displays and programmable photoactivation 1 , 2 . Amongst various luminescence processes, photon upconversion, characterized by high-energy emission upon excitation with lower-energy photons, is of exceptional interest. Upconversion primarily takes advantage of lanthanide-doped materials, in which stepwise excitation through the energy levels of the lanthanide activators results in visible and ultraviolet emissions by successive absorption of multiple near-infrared photons 3 , 4 , 5 , 6 , 7 . The unique upconversion process has enabled a diversity of applications ranging from bioimaging to solar energy conversion and optical storage 8 , 9 , 10 , 11 , 12 , 13 . In particular, upconversion is considered a promising solution for generating short-wavelength lasing by pumping with longer-wavelength light sources that are more readily acquired 14 , 15 . Frequency upconversion holds potential for cost-effective construction of miniaturized deep-ultraviolet (UV) emission devices that find enormous medical and industrial applications, such as microbial sterilization and biomedical instrumentation 16 , 17 , 18 , 19 . However, the implementation of such a technique has been constrained by the limited spectral tunability of upconversion, which occurs in special lanthanide ions comprising fixed sets of energy levels. For example, one important class of light sources is lasers operating at telecommunication wavelengths (1260 to 1675 nm) 20 , 21 , which are extensively used in fiber-optic communication and photonic circuits because of minimal optical attenuation, ready accessibility in various forms, and low cost of device fabrication. In addition, these wavelengths fall in the second near-infrared window (NIR-II), which is favorable for high-resolution in vivo bioimaging owing to maximal tissue transparency and minimal autofluorescence 22 , 23 . However, only a small number of Er 3+ -sensitized materials are capable of upconverting excitation light in this wavelength range, and these display predominantly Er 3+ emissions within a limited spectral range 24 , 25 , 26 , 27 , 28 . It remains a daunting challenge to achieve deep-UV emission by excitation at the telecommunication wavelengths.
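For orientation, the ~3.5 eV figure quoted above for the anti-Stokes shift follows directly from the photon energies at the two wavelengths; a short check (using hc ≈ 1239.84 eV nm) is given below.

```python
# Back-of-envelope check of the anti-Stokes shift: photon energy E = hc/lambda.
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

e_pump = HC_EV_NM / 1550  # ~0.80 eV at the telecom excitation wavelength
e_emit = HC_EV_NM / 290   # ~4.28 eV at the deep-UV emission wavelength
print(f"anti-Stokes shift: {e_emit - e_pump:.2f} eV "
      f"over {1550 - 290} nm")  # ~3.47 eV over 1260 nm, matching ~3.5 eV
```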
To expand the spectral tunability of upconversion, herein we propose a domino upconversion (DU) scheme, in which energy amassed in one upconversion course triggers another succeeding upconversion process (Fig. 1 ). By a tandem combination of Er 3+ - and Tm 3+ -based upconversion in a core–shell–shell nanostructure, deep-ultraviolet emission is realized by excitation at 1550 nm with an ultralarge anti-Stokes shift of up to 1260 nm. We systematically investigate the energy cascade processes in the core–shell–shell nanostructures and demonstrate deep-ultraviolet lasing at 289.2 nm through the DU scheme by excitation at the telecommunication wavelength. Fig. 1: Comparison of the conventional energy transfer upconversion (ETU) and the proposed domino upconversion (DU) processes. a In an ETU process, the excitation energy is only amassed in one type of lanthanide upconverting ion. b In a DU process, the excitation energy amassed in one upconverting ion triggers energy amassment in a second upconverting ion, leading to an ultralarge anti-Stokes shift. Full size image Results Synthesis and characterization As a proof-of-concept experiment, we constructed a NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 core−shell−shell nanoparticle with the Tm 3+ - and Er 3+ -based upconversion processes separately incorporated into the core and interlayer of the nanoparticle, respectively (Fig. 2a ). The outermost shell of NaYF 4 was designed to protect the nanoparticle against surface quenching 29 , 30 . The spatial separation of the dopant ions was intended to minimize the cross-talk between different upconversion processes 31 , 32 , which were independently optimized in their respective doping domains. To facilitate the DU process through interfacial energy transfer, we also employed high concentrations of Er 3+ and Yb 3+ dopants that are highly resistant to concentration quenching (Fig. 2a ) 25 , 33 , 34 . Fig. 2: Ultralarge anti-Stokes emission through DU in core–shell–shell nanoparticles. a Schematic design of a NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 core–shell–shell nanoparticle for DU (left panel) and the proposed energy transfer mechanism in the nanoparticle. b HAADF-STEM image of the NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles highlighting the layered structure. c Digitally processed high-resolution TEM image of a NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticle showing its single-crystalline nature. d An enlarged view of the selected area in c , indicated by a white box, showing the hexagonal structure of the lattice in accord with the NaYF 4 crystal (right panel). e Schematic illustration of the waveguide circuit for excitation of upconversion nanoparticles. Due to the convergence of the laser beam, the power density in the waveguide circuit ( I W ) was amplified relative to that in the incident fiber ( I F ). f Emission spectrum of the NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles by excitation of the waveguide circuit at 1550 nm with a high power density of 2073 kW cm −2 . Full size image The nanoparticles were synthesized by a layer-by-layer epitaxial growth protocol 35 , which involved the preparation of the NaYF 4 :Yb/Tm core nanoparticles followed by the epitaxial growth of the NaErF 4 :Ce interlayer and the NaYF 4 shell (Supplementary Fig. 1a ).
Figure 2b shows the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) image of the sample, revealing the highly uniform size and morphology of the nanoparticles, with a distinct Z -contrast between the NaErF 4 :Ce (30%) interlayer and the outermost NaYF 4 layer. The high-resolution transmission electron microscopy (HR-TEM) and powder X-ray diffraction (XRD) measurements further confirmed the high crystallinity of the nanoparticles with a single hexagonal phase (Fig. 2c, d and Supplementary Figs. 1b , 2). Ultralarge anti-Stokes emission through DU We next measured the emission spectrum of the nanoparticles under 1550 nm excitation. The nanoparticles were deposited on top of a waveguide structure that functioned as the excitation source (Fig. 2e ). The waveguide circuit was semi-buried in a SiO 2 substrate, with the top surface exposed to contact the nanoparticles (Supplementary Fig. 3a, b ). Owing to its small dimensions, the waveguide structure spatially confines the incident light and thus enhances the power density of the excitation field 36 . In a specific case, we estimated a high excitation power density of 2073 kW cm −2 at an input power of 311 mW (Supplementary Fig. 3c ). The emission spectrum consists of characteristic emission peaks of Tm 3+ that can be assigned to the 1 I 6 → 3 H 6 and 3 F 4 (290 and 347 nm), 1 D 2 → 3 H 6 and 3 F 4 (362 and 453 nm), 1 G 4 → 3 H 6 and 3 F 4 (478 and 649 nm), and 3 H 4 → 3 H 6 (803 nm) transitions, respectively (Fig. 2f ). The observation of strong upconversion emission in the short-ultraviolet wavelength region suggests an efficient Tm 3+ upconversion sensitized by Er 3+ . Note that the Yb/Tm-doped upconversion layer alone does not respond to the 1550 nm excitation (Supplementary Fig. 4 ). It is worth noting that the inclusion of Ce 3+ dopants in the NaErF 4 layer is essential for achieving the DU process. Figure 3a compares the emission spectra of nanoparticles without and with Ce 3+ dopants in the interlayer, revealing substantial attenuation of Tm 3+ emissions in the absence of Ce 3+ . The Ce 3+ ions contributed to the DU by inhibiting high-order upconversion in Er 3+ ions through cross-relaxation (Supplementary Fig. 5 ), which resulted in a preferential population of the 4 I 11/2 state 28 . A large 4 I 11/2 population facilitated energy transfer to Yb 3+ ions and subsequent upconversion in the Tm 3+ ions (Fig. 3b, c and Supplementary Fig. 6a ). Without the Ce 3+ dopants, the Er 3+ ions were directly excited to the higher-lying excited states, followed by radiative transitions to the ground state that gave rise to the dominant emission of Er 3+ ions (Supplementary Fig. 6b ). Fig. 3: Mechanistic investigation of Ce 3+ -induced cross-relaxation. a Emission spectra of the NaYF 4 :Yb/Tm@NaYF 4 :Ce/Er@NaYF 4 nanoparticles under excitation at 1550 nm and 2073 kW cm −2 as a function of Ce 3+ doping concentration in the interlayer. b , c Proposed energy transfer pathways in the NaYF 4 :Yb/Tm@NaYF 4 :(Ce/)Er@NaYF 4 nanoparticles without and with Ce 3+ dopants, respectively. d Emission spectra of the NaYF 4 @NaYF 4 :(Ce/)Er@NaYF 4 nanoparticles without and with Ce 3+ dopants under 1550 nm excitation at high (2073 kW cm −2 ) and low (5 kW cm −2 ) powers, respectively. e Emission intensity at 346 nm (Tm 3+ ) as a function of excitation power density in the NaYF 4 :Yb/Tm@NaYF 4 :(Ce/)Er@NaYF 4 nanoparticles without and with Ce 3+ dopants, respectively.
f Schematic illustrations of excitation power-dependent preferential population of energy levels in Er 3+ ions through cross-relaxation with Ce 3+ dopants. g Simulated populations of the 4 I 11/2 energy level as a function of excitation power density in NaYF 4 :(Ce/)Er without and with Ce 3+ dopants, respectively. Mechanistic calculations were performed by formulating rate equations as described in the Supplementary Methods. Full size image The DU process is strongly affected by the content of Ce 3+ ions. By correlating the emission intensity with Ce 3+ concentration in the interlayer, the optimal Ce 3+ doping concentration was determined to be 30% (Supplementary Fig. 7b ). The reduction of upconversion emission at excessively high Ce 3+ concentrations (>30%) is partly attributed to the large lattice mismatch between the core/shell components, which resulted in nonuniform epitaxial growth (Supplementary Fig. 7a ) 37 , 38 , 39 . To substantiate the role of Ce 3+ ions in the selective quenching of Er 3+ ions, we compared the visible and NIR emissions of NaYF 4 @NaErF 4 @NaYF 4 nanoparticles with and without Ce 3+ dopants (Supplementary Fig. 8 ). Yb 3+ and Tm 3+ ions were removed to avoid disturbance to the Er 3+ emission. As anticipated, we observed enhancement of the NIR emission of the Er 3+ ions at the expense of the visible emissions upon inclusion of Ce 3+ dopants (Fig. 3d ). Furthermore, the decay times of the 4 S 3/2 and 4 F 9/2 states of the Er 3+ ions were shortened by the Ce 3+ dopants, indicating a nonradiative energy transfer from Er 3+ to Ce 3+ ions (Supplementary Fig. 9 ). It is worth noting that a high excitation power density is also essential for achieving the DU process 40 , 41 . We observed that the Er 3+ emission at around 980 nm ( 4 I 11/2 → 4 I 15/2 ) was quenched by Ce 3+ dopants at low excitation powers (Fig. 3d , bottom panel). Correspondingly, DU emission in Tm 3+ ions was also quenched by Ce 3+ under low-power excitation (Fig. 3e and Supplementary Fig. 10 ). These results were ascribed to the 4 I 11/2 → 4 I 13/2 cross-relaxation in Er 3+ ions induced by Ce 3+ . A high excitation power promotes the 4 I 13/2 → 4 I 9/2 excitation process, which subsequently enhances the population in the 4 I 11/2 state through the 4 I 9/2 → 4 I 11/2 cross-relaxation (Fig. 3f ). Numerical simulations based on rate equations confirmed that Ce 3+ dopants increase the population in the 4 I 11/2 state of Er 3+ only at high excitation powers (Fig. 3g ). In a further set of experiments, we demonstrated the critical role of Yb 3+ in mediating the energy transfer from the Er 3+ to the Tm 3+ ions across the core/shell interface. When the Yb 3+ ions in the core layer were replaced by optically inert Lu 3+ ions (Supplementary Fig. 11 ), the Tm 3+ emission was hardly detectable even at a high excitation power density of 2073 kW cm −2 (Fig. 4a ). This observation was ascribed to the large physical separation between the Er 3+ and Tm 3+ ions. Owing to the low dopant concentration of Tm 3+ (1%), their average distance from the core/shell interface was too large for energy transfer to proceed. The introduction of a high concentration (40%) of Yb 3+ ions created an energy conduit to the Tm 3+ ions by forming a network of the Yb 3+ lattice, which permits fast energy migration over a long distance 42 , 43 . Fig. 4: Mechanistic investigation of energy transfer in the DU process.
a Emission spectra of NaYF 4 :Yb(Lu)/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles under excitation at 1550 nm and 2073 kW cm −2 , demonstrating the necessity of Yb 3+ ions for mediating energy transfer across the core/shell interface. b Emission spectra of NaYF 4 :Yb/Tm@NaYF 4 ( d )@NaErF 4 :Ce@NaYF 4 nanoparticles under 1550 nm excitation, verifying the involvement of interlayer energy transfer in the DU process. c Power-density-dependent emission spectra of NaYF 4 :Yb/Tm/Er/Ce@NaYF 4 nanoparticles under 1550 nm excitation, demonstrating the necessity of the core–shell–shell design for obtaining high-efficiency DU. d Schematic of uncontrollable energy exchange interactions in the homogeneously doped nanoparticle. Full size image Yb 3+ ions also facilitated energy extraction from the interlayer owing to their relatively large absorption cross-sections (~10 −20 cm 2 ) and their energy level resonant with the Er 3+ donors 44 , which results in a long critical distance for energy transfer. Our control experiments revealed that the Yb 3+ -mediated energy transfer can still proceed when the Er/Ce shell was isolated from the Yb/Tm core by a NaYF 4 spacing layer of 2.5 nm (Fig. 4b and Supplementary Fig. 12 ). This energy transfer distance is appreciably larger than that observed for other ionic systems such as Gd 3+ and Tb 3+ (~1.1 nm) 45 . The core−shell−shell structure is also essential for achieving the DU process. When we homogeneously doped all the lanthanide ions into the core layer of a NaYF 4 :Yb/Tm/Er/Ce@NaYF 4 core−shell nanoparticle (Fig. 4c, d and Supplementary Fig. 13 ), the overall emission was rather weak and the Tm 3+ emission could hardly be detected. This result was ascribed to extensive and uncontrollable energy exchange interactions among the Yb 3+ , Tm 3+ , Ce 3+ , and Er 3+ ions, which resulted in significant dissipation of the excitation energy. Note that the quenching processes in the quadruply-doped system were too strong to be alleviated by high-power excitation in our experiments (Fig. 4c ). Deep-UV lasing through DU The laser characteristics of the NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles were examined under free-space excitation by a 1550 nm pulsed laser with 6 ns pulse duration and 10 Hz repetition rate. The as-synthesized nanoparticles were incorporated into a toroidal microresonator as the laser cavity (Fig. 5a ), which supports whispering gallery modes (WGMs) at the internal boundaries of the nanoparticle-doped microtoroidal resonator (Fig. 5b ) 46 , 47 . Partly owing to the high processability and small size of the upconversion nanoparticles, the composite microcavity displayed a uniform size and smooth surface (Fig. 5c ). Correspondingly, a high quality factor ( Q -factor) of about 2 × 10 5 was determined by assessing the transmission characteristic of a 1550 nm band diode laser (Santec, TSL-710) that was coupled to the microresonator through a tapered fiber (Fig. 5d ). Fig. 5: Deep-UV lasing in microresonator-incorporated DU nanoparticles. a Schematic setup of the microtoroidal resonator platform for upconversion lasing. b Simulation of the excited WGMs (given in 2D cross-sectional geometry) on the surface of a microcavity with a diameter of 4 µm and ring width of 0.2 µm, respectively. c SEM image of a typical UCNPs-doped microresonator. d Transmission spectrum of a UCNPs-doped microresonator, revealing a Q -factor of around 2 × 10 5 . Inset: schematic of the measurement setup (left) and top-view photograph of the system under measurement (right).
e Emission spectra of a microresonator with D m = 17 μm at different excitation powers. f Logarithmic plot of output intensity versus excitation power for the microresonator. g Lasing spectra of microresonators with different D m . h Plots of measured mode spacing (Δ λ ) and threshold pump power ( P th ) of the microresonators as a function of D m . Data points for threshold pump powers represent mean ± standard deviation (SD, n = 3). Error bars indicate SD. Full size image To examine the lasing action, the emission spectra of a typical microresonator with a 17-μm diameter were recorded as a function of pump power. As shown in Fig. 5e , a sharp emission peak (linewidth < 0.05 nm) centered at 289.2 nm emerged from the emission spectrum as the excitation power increased above the threshold pump power ( P th , around 0.28 J cm −2 ). Moreover, the dependence of output intensity on the excitation power exhibited an “S” shape with three distinct regions (Fig. 5f ), representing the transition from spontaneous emission through amplified spontaneous emission to gain saturation 48 , 49 . These results together confirm the onset of single-mode upconversion lasing. We also demonstrated that lasing features such as mode spacing, mode number, and threshold power can be precisely controlled by tuning the size of the microresonator (Supplementary Fig. S14 ). As the diameter of the microresonator increased, the number of lasing modes increased due to a decrease in the mode spacing (Fig. 5g, h ). The observed mode spacing was well correlated with the parameters of the microresonators according to the following equation 49 , 50 : $$\Delta \lambda = {\lambda_0}^{2} / (n_{\mathrm{eff}} L)$$ (1) where Δ λ is the mode spacing, λ 0 is the center peak wavelength, n eff (=1.52) is the effective refractive index and L is the perimeter length of the microresonator. All of the above results confirm that, upon 1550 nm pumping, the toroidal microresonators were sufficient to create population inversion of a higher-lying excited state of Tm 3+ ions through DU for ultraviolet lasing. It is worth mentioning that multi-wavelength lasing action can be recorded at different emission peaks of the Tm 3+ and Er 3+ dopants (Supplementary Fig. S15 ). The remarkable tunability of lasing emission in the judiciously designed NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles certainly expands the possibilities for future studies. The 289 nm lasing from the UCNPs-doped microresonator with an ultralarge anti-Stokes shift is sensitive to the Q -factor of the cavity, which enables sensitive detection of small biological species by monitoring the P th shift. As a proof of principle, we used a polystyrene (PS, 300 nm in diameter) sphere as a simulant of cancer-cell secretions to conduct the sensing measurement. As anticipated, P th values of the 290 nm lasing increased considerably from 0.13 to 2.34 J cm −2 upon attaching a single PS sphere to the microresonator, due to the reduction of the Q -factor from 2 × 10 5 to around 4 × 10 4 (Supplementary Fig. S16 ). These results demonstrate that our device, integrating an upconversion gain medium with a high- Q microresonator structure, is promising for designing high-quality sensing platforms. Discussion In summary, we have established a DU scheme that allows upconversion of excitation light at the telecommunication wavelength into deep-ultraviolet emission with an ultralarge anti-Stokes shift of 1260 nm (~3.5 eV).
The DU was realized by the cooperation of two distinct upconversion processes that are integrated into a single NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticle through Yb-mediated energy transfer at the core/shell interface. By using the DU nanoparticles as gain media, we further developed a novel toroid microcavity laser that manifested single-mode lasing at 289.2 nm. Our findings establish an effective strategy for obtaining upconversion lasers operating in the deep-ultraviolet regime by excitation at the telecommunication wavelength, which minimizes optical attenuation in SiO 2 -based photonic circuits. Moreover, the tandem combination of different upconversion processes through heavy lanthanide doping raises new possibilities for constructing upconversion nanocrystals with highly tunable excitation and emission spectra for advanced biological and photonic applications. Methods Nanoparticle synthesis The multilayered NaYF 4 :Yb/Tm@NaErF 4 :Ce@NaYF 4 nanoparticles were synthesized according to the method in ref. 35 . Additional experimental details are provided in the Supplementary Information. Fabrication of toroid microcavity comprising upconversion nanoparticles A sol-gel silica film doped with upconversion nanoparticles was first prepared via an acid-catalyzed hydrolysis–condensation reaction. Next, toroid microcavities were prepared from the as-synthesized upconversion sol-gel silica using a sequence of photolithography, etching, and laser-induced reflow. Additional experimental details are provided in the Supplementary Information. Theoretical modeling The electric field in the waveguide structure was simulated by the three-dimensional finite-difference time-domain (3D-FDTD) method. The upconversion process in the Er 3+ -Ce 3+ system was simulated by rate equations accounting for direct excitation and interionic cross-relaxation. Physical measurement HAADF-STEM and HR-TEM images were acquired with an FEI Tecnai G2 F30 microscope operated at 300 kV. The upconversion emission spectra were recorded with Ocean Optics USB 2000 and Maya 2000 PRO spectrometers. The lasing emission was measured by a monochromator (iHR-320) coupled with a photomultiplier tube. All measurements were performed at room temperature. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data generated and analyzed during this study are available from the corresponding author upon reasonable request.
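As a companion to the results above (an editorial sketch, not part of the original methods), the headline cavity and excitation numbers can be reproduced with back-of-envelope arithmetic. In the block below, the effective mode area and the resonance linewidth are assumptions back-solved from the reported 2073 kW cm−2 and Q ≈ 2 × 10 5; only the wavelengths, the 311 mW input power, n eff = 1.52 and D m = 17 μm come from the text.

```python
# Back-of-envelope checks of the reported figures. A_EFF_UM2 and FWHM_PM are
# assumed values back-solved from the paper's quoted results; they are not
# reported measurements.
import math

# 1) Excitation power density in the waveguide: I = P / A_eff
P_IN = 311e-3                      # input power (W), quoted in the text
A_EFF_UM2 = 15.0                   # assumed effective mode area (um^2)
i_wg = P_IN / (A_EFF_UM2 * 1e-8)   # W/cm^2 (1 um^2 = 1e-8 cm^2)
print(f"power density ~ {i_wg / 1e3:.0f} kW/cm^2")         # ~2073 kW/cm^2

# 2) Loaded Q-factor from the transmission dip: Q = lambda / FWHM
LAMBDA_PROBE_NM = 1550.0           # probe wavelength (nm), from the text
FWHM_PM = 7.75                     # assumed resonance linewidth (pm)
q = LAMBDA_PROBE_NM / (FWHM_PM * 1e-3)
print(f"Q ~ {q:.1e}")                                      # ~2.0e5

# 3) Mode spacing from Eq. (1): d_lambda = lambda0^2 / (n_eff * pi * D_m)
LAMBDA0 = 289.2e-9                 # lasing wavelength (m), from the text
N_EFF = 1.52                       # effective refractive index, from the text
for d_um in (17, 30, 50):          # 17 um from text; 30/50 um illustrative
    d_lambda = LAMBDA0**2 / (N_EFF * math.pi * d_um * 1e-6)
    print(f"D_m = {d_um} um -> mode spacing = {d_lambda * 1e9:.2f} nm")
```

The third loop reproduces the trend of Fig. 5g, h: larger resonators have a longer perimeter, hence smaller mode spacing and more modes under the gain envelope. The power-dependent role of Ce 3+ (Fig. 3e–g) can likewise be illustrated with a deliberately simplified rate-equation toy model. This is not the authors' Supplementary Methods model; all rates are illustrative placeholders chosen only to reproduce the qualitative behaviour described in the text, namely that the Ce 3+ -induced 4 I 11/2 → 4 I 13/2 cross-relaxation drains the 4 I 11/2 reservoir at low pump rates, while the Ce 3+ -opened 4 I 9/2 → 4 I 11/2 channel enriches it at high pump rates.

```python
# Toy steady-state rate-equation model for the Er3+/Ce3+ system. Levels:
# n1 = 4I15/2 (ground), n2 = 4I13/2, n3 = 4I11/2, n4 = 4I9/2,
# n5 = lumped higher levels. All rates (s^-1) are illustrative placeholders.
from scipy.integrate import solve_ivp

TAU2_INV = 2e2    # 4I13/2 -> ground decay
TAU3_INV = 1e3    # 4I11/2 -> ground decay
TAU4_INV = 1e5    # 4I9/2  -> ground decay
A43      = 5e4    # multiphonon 4I9/2 -> 4I11/2 (present with or without Ce)
GAMMA5   = 1e3    # higher levels -> ground

def steady_state(pump, ce=True):
    """Integrate to steady state for a given pump rate (s^-1)."""
    w1 = 5e3 if ce else 0.0   # Ce-induced 4I11/2 -> 4I13/2 cross-relaxation
    w2 = 5e5 if ce else 0.0   # Ce-induced 4I9/2  -> 4I11/2 cross-relaxation

    def rhs(t, n):
        n1, n2, n3, n4, n5 = n
        dn2 = pump*n1 - pump*n2 - TAU2_INV*n2 + w1*n3        # GSA in, ESA out
        dn3 = (A43 + w2)*n4 - (w1 + TAU3_INV + pump)*n3      # fed from 4I9/2
        dn4 = pump*n2 - (A43 + w2 + TAU4_INV + pump)*n4      # ESA from 4I13/2
        dn5 = pump*n3 + pump*n4 - GAMMA5*n5                  # further ESA
        dn1 = -(dn2 + dn3 + dn4 + dn5)                       # populations sum to 1
        return [dn1, dn2, dn3, dn4, dn5]

    sol = solve_ivp(rhs, (0.0, 1.0), [1, 0, 0, 0, 0], method="BDF")
    return sol.y[:, -1]

print(f"{'pump (s^-1)':>12} {'n3 with Ce':>12} {'n3 no Ce':>12}")
for pump in (1e1, 1e2, 1e3, 1e4, 1e5):
    print(f"{pump:12.0e} {steady_state(pump, True)[2]:12.3e} "
          f"{steady_state(pump, False)[2]:12.3e}")
```

Running the script shows the steady-state 4 I 11/2 population with Ce 3+ falling below the Ce-free case at low pump rates and overtaking it at high pump rates, mirroring the crossover of Fig. 3g.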
Strong and coherent ultraviolet light emission devices have enormous medical and industrial application potential, but generating ultraviolet light emission in an effective way has been challenging. Recently, a collaborative research team co-led by researchers from City University of Hong Kong (CityU) developed a new approach to generate deep-ultraviolet lasing through a "domino upconversion" process in nanoparticles using near-infrared light, which is commonly used in telecommunication devices. The findings provide a solution for constructing miniaturized high-energy lasers for bio-detection and photonic devices. In the world of nanomaterials, "photon upconversion" means that when a nanomaterial is excited by light or photons with a long wavelength and low energy, it emits light with a shorter wavelength and higher energy, such as ultraviolet light. Challenge in achieving photon upconversion Photon upconversion, characterized by high-energy emission upon excitation by lower-energy photons, is of exceptional interest among scientists. This is because it holds potential for cost-effective construction of miniaturized deep-ultraviolet emission devices, which have enormous medical and industrial application potential, such as microbial sterilization and biomedical instrumentation. However, the photon upconversion process has limited flexibility, as it occurs mainly in special lanthanide ions comprising fixed sets of energy levels. A research team co-led by Professor Wang Feng, from the Department of Materials Science and Engineering, and Professor Chu Sai-tak, from the Department of Physics at CityU, together with Dr. Jin Limin from the Harbin Institute of Technology (Shenzhen), overcame the obstacle by introducing a "domino upconversion" tactic. Special structural design of nanoparticles Domino upconversion is like a chain reaction, in which energy amassed in one upconversion course triggers another succeeding upconversion process. By using a doughnut-shaped microresonator incorporating specially designed "upconversion nanoparticles," the team successfully generated high-energy, deep-ultraviolet light emission at 290 nm by excitation with low-energy infrared photons at 1550 nm. "As the excitation wavelength was in the telecommunication wavelength range, the nanoparticles can be readily used and integrated into existing fiber-optic communication and photonic circuits without complicated modification or adaptation," said Professor Wang. The findings were published in the journal Nature Communications, titled "Ultralarge anti-Stokes lasing through tandem upconversion." The idea of constructing "domino upconversion" was inspired by a previous study of energy transfer in core-shell nanoparticles by Professors Wang and Chu. The core-shell structure design of the nanoparticle allows the multiphoton luminescence process in erbium (Er3+) ions. By adapting a similar synthetic protocol, the team successfully constructed "core-shell-shell" nanoparticles through a wet-chemistry method to explore the energy-transfer mechanism of lanthanide ions, including thulium (Tm3+) ions. Doughnut-shaped microresonator Through the careful design of doping composition and concentration in different layers or shells of the upconversion nanoparticles, the team successfully achieved a tandem combination of Er3+- and Tm3+-based upconversion processes (domino upconversion).
In the experiment, the Er3+ ions contained in the interlayer (the middle shell) responded to 1550 nm near-infrared photon excitation, a wavelength located in the telecommunication range. By incorporating the nanoparticles into a doughnut-shaped microresonator cavity, the team further generated a high-quality ultraviolet microlaser, demonstrating lasing action at 289 nm under 1550 nm excitation. "The upconversion nanoparticles act as 'wavelength converters' to multiply the energy of incident infrared photons," explained Professor Wang. He expects the findings to pave the way for the construction of miniaturized short-wavelength lasers and says they may inspire new ideas for designing photonic circuits. He added that the miniaturized ultraviolet laser using this domino upconversion technology can provide a platform for sensitive bio-detection, such as the detection of cancer-cell secretions, by monitoring the lasing intensity and threshold, which offers great biomedical application potential in the future.
10.1038/s41467-022-28701-1
Biology
Mountaineering ants use body heat to warm nests
K. M. Baudier et al., Structure and thermal biology of subterranean army ant bivouacs in tropical montane forests, Insectes Sociaux (2016). DOI: 10.1007/s00040-016-0490-2
http://dx.doi.org/10.1007/s00040-016-0490-2
https://phys.org/news/2016-06-mountaineering-ants-body.html
Abstract Active brood-warming in army ant nests (bivouacs) is well documented for surface-dwelling Eciton burchellii and E. hamatum colonies in lowland tropical forests. However, little is known about thermoregulation by the below-ground bivouacking army ants that comprise all other species in subfamily Dorylinae. Here we report the first observations of subterranean Labidus praedator bivouacs in tropical montane and premontane conditions (Monteverde, Costa Rica), and present the first evidence for active nest warming in underground bivouacs. We measured bivouac temperatures at depth increments of 10 cm through the center of a 1565 m elevation bivouac and compared these to simultaneous measurements at the same soil depths 1 m outside the bivouac. The bivouac was actively heated to over 6 °C higher than the adjacent soil. Another bivouac showed warming of up to 3.7 °C above surface ambient. We measured critical thermal maxima (CT max ) and minima (CT min ) of L. praedator workers of a range of body sizes including callows, as well as thermal tolerances of inquiline millipedes from the bivouac. CT max varied positively with worker body size. CT min was lower for mature than for callow workers. Symbiotic millipedes had lower CT max and higher CT min than ant workers. Temperatures below the thermal tolerance ranges of symbiotic millipedes and near the bottom thermal tolerance range for callow workers were recorded in the bivouac periphery and in adjacent soil, suggesting active bivouac warming protects some members of L. praedator bivouac communities from cold-limitation at high elevations in the tropics. Introduction The typically soft-bodied, altricial brood of many social insects are more sensitive to thermal variation than adult nest mates (Nalepa 2011 ). Social insect nests are often thermally homeostatic with temperature control achieved by passive and/or active thermoregulation (Seeley and Heinrich 1981 ; Jones and Oldroyd 2007 ). Passive thermoregulation involves behavioral responses to environmental thermal gradients (Jones and Oldroyd 2007 ). Examples of passive thermoregulation among social insects include foraging site and nest site selection, nest construction and orientation, and relocation of brood within nests (Coenen-Stass, Schaarschmidt and Lamprecht 1980 ; Frouz 2000 ; Penick and Tschinkel 2008 ; McGlynn et al. 2010 ; Jílková and Frouz 2014 ). Actively thermoregulating organisms use physiology to modify internal or ambient temperatures via metabolic heating or physical activity (Seeley and Heinrich 1981 ; Jones and Oldroyd 2007 ). Examples of active thermoregulation among social insects include worker clustering, flight muscle twitching to generate heat, and wing-fanning to promote evaporative cooling (Heinrich 1993 ; Anderson, Theraulaz and Deneubourg 2002 ; Weiner et al. 2010 ). Most ant species rely on passive thermoregulation to modify nest temperatures because workers are wingless, preventing active thermoregulation via fanning or flight muscle shivering (Hölldobler and Wilson 1990 ). However, some Neotropical army ants in the subfamily Dorylinae create homeostatic thermal conditions within their temporary nests (bivouacs) via active thermoregulation (Jones and Oldroyd 2007 ). In two above-ground active species, Eciton burchellii and E. hamatum , bivouacs are composed of hundreds of thousands of clustering worker bodies that surround and insulate the brood and queen.
These bivouacs are actively warmed via collective metabolic heat, and bivouac temperature variation is lower than ambient (Schneirla, Brown and Brown 1954 ; Jackson 1957 ; Coenen-Stass, Schaarschmidt and Lamprecht 1980 ; Franks 1985 , 1989 ; Anderson, Theraulaz and Deneubourg 2002 ; Jones and Oldroyd 2007 ; Kadochová and Frouz 2014 ). All other army ant species bivouac below ground (Rettenmeyer 1963 ). Here, we present the first measurements of subterranean bivouac thermal physiology and brood developmental synchrony in the army ant Labidus praedator (Smith 1858). Previous studies of army ant thermoregulation have primarily focused on above-ground E. burchellii and E. hamatum bivouacs (Schneirla, Brown and Brown 1954 ; Jackson 1957 ; Rettenmeyer 1963 ). Lowland E. burchellii bivouacs thermoregulated their brood at relatively stable elevated temperatures of 28 ± 1 °C (average 2 °C higher than ambient, maximum 6 °C higher than nocturnal low temperatures) (Schneirla, Brown and Brown 1954 ; Franks 1989 ), while lowland bivouacs of E. hamatum were on average 1 °C higher and less thermally variable than ambient conditions of 22–29 °C (Jackson 1957 ). Active bivouac thermoregulation and homeostasis is thought to be tightly linked to the synchronous brood development cycles that characterize the dichotomous statary (egg/pupal) and nomadic (larval) phases of Eciton colony activity (Schneirla, Brown and Brown 1954 ; Jackson 1957 ; Franks 1989 ). Whether active or passive bivouac thermoregulation and brood synchrony occur in other species of Dorylinae is unknown. Eciton species are not representative of Dorylinae, in part because they raid and frequently bivouac above ground (Schneirla 1933 ; Schneirla, Brown and Brown 1954 ; Rettenmeyer 1963 ; Gotwald Jr 1995 ). In contrast, most Neotropical army ant species are at least partly subterranean, bivouacking underground and raiding partly or entirely below ground (Rettenmeyer 1963 ; Gotwald Jr 1995 ). These differences in foraging and bivouacking soil microhabitat correspond to differences in sensory investment and thermal tolerance (Baudier et al. 2015 ; Bulova et al. 2016 ). Due to the challenge of tracking underground mobile nests, the subterranean bivouacs of many common army ants remain undescribed (Rettenmeyer 1963 ; Berghoff et al. 2002 ; Dunn 2003 ). Above-ground bivouacking in Eciton is a derived state among army ants (Brady 2003 ; Brady et al. 2014 ). Therefore, data on bivouacking behavior in other doryline genera can provide evidence of how bivouac thermoregulation evolved in the above-ground species. Labidus praedator is an abundant subterranean-bivouacking army ant that raids both on the surface of the forest floor and underground (Rettenmeyer 1963 ; Kaspari and O’Donnell 2003 ; O’Donnell et al. 2007 ). Labidus praedator ranges from the southern United States (N30°50′, W93°45′) to Argentina and southern Brazil (S30°2′, W51°12′). Labidus praedator occurs in the lowlands near sea level, though this species is most abundant from 1000 to 1600 masl in Costa Rica (Schneirla 1949 ; Watkins 1976 ; Kaspari and O’Donnell 2003 ; Longino 2010 ; O’Donnell et al. 2011 ; Economo and Guénard 2016 ). Brood development in L. praedator is apparently synchronous with statary and nomadic colony phases observed in Neotropical lowland wet forests of Panama (N09°09′, W79°51′) as well as in the tropical dry forest of southern Mexico (N18°27′, W96°12′) (Rettenmeyer 1963 ; Sudd 1972 ).
Possible evidence of seasonal asynchrony in brood development has been reported at the southernmost extent of this species’ range in Paraguay (S25°20′, W57°32′) during extended winter statary phases with air temperatures reaching below 12.5 °C at night (Fowler 1979 ). Fowler ( 1979 ) suggested temperatures within a Paraguay L. praedator bivouac were less variable than surface air temperatures but bivouac temperatures were not elevated. However, Fowler ( 1979 ) measured temperature at a single bivouac point and did not measure adjacent soil temperatures, making interpretation of the data problematic. We asked whether tropical L. praedator colonies warm and/or buffer temperatures within their bivouacs when exposed to relatively low ambient temperatures in montane and premontane forest. Air temperatures are relatively low year-round in high-elevation tropical sites, which can select for localized low-temperature adaptations (Janzen 1967 ; Ghalambor et al. 2006 ). However, army ant colonies are nomadic. For example, colonies of the highly epigaeic army ant E. burchellii parvispinum move nomadically across elevations in premontane and montane forests of Monteverde, potentially experiencing a variety of mean annual temperatures at different elevations (Soare et al. 2014 ). Eciton burchellii bivouacs are more likely to be located in sheltered refuges in montane forest than in lowland forest, suggesting army ant nest site selection is a behavioral mechanism for dealing with thermal challenges (Soare et al. 2011 ). Underground environments have reduced daily and seasonal thermal variation, but closely match the local annual mean air temperature (Harkness and Wehner 1977 ; Parton and Logan 1981 ; Tschinkel 1987 ). Subterranean-bivouacking L. praedator therefore likely experiences less temporally variable temperatures, but similar geographic (elevational) variation in mean temperature, compared to surface-bivouacking army ants. To date, there are no published records of bivouacking behavior or thermal biology for montane or premontane subterranean army ants. Here, we report observations of the structure and thermal properties of subterranean bivouacs of L. praedator from montane forests (1500–1565 masl) and premontane forest (950 masl) near the center of this species’ latitudinal range in Costa Rica (approximately 10°N) (Watkins 1976 ). We compared bivouac temperature conditions with thermal tolerances of the ants and their symbionts, and addressed two questions regarding bivouac thermal properties: (1) Do subterranean army ants maintain elevated bivouac temperatures? (2) Do surface or sub-surface conditions exceed thermal tolerance limits of army ant workers or symbionts at high elevations, and do bivouacs buffer against these conditions? In tropical premontane wet forests of Monteverde, Costa Rica, mean annual temperature at 1460 masl is 18.8 °C (Nadkarni and Wheelwright 2000 ). We recorded average air temperatures of 15.9 °C while recording bivouac temperatures at 1565 masl in March of 2015; March is a relatively cool dry-season month in Monteverde (Nadkarni and Wheelwright 2000 ). This temperature is lower than the optimum for brood development in most tropical ant species (Franks 1989 ; Abril, Oliveras and Gómez 2010 ; Kipyatkov and Lopatina 2015 ). We therefore predicted active metabolic warming would be used to elevate L. praedator bivouac temperatures at this high-elevation site. Army ant colony members may not be the sole beneficiaries of a climatically moderated bivouac.
Army ant colonies are host to the most species-rich array of animal associates known to science (Rettenmeyer et al. 2011 ). Many of these nest associates are arthropod species that live within the bivouac (Rettenmeyer 1962a ; Eickwort 1990 ; Beeren, Maruyama and Kronauer 2016 ; Parker 2016 ). Myrmecophiles that live within the nest of their hosts are referred to as inquilines (Rettenmeyer 1962a ). Little is known about potential thermal benefits of inquilinism, but the thermal biology of army ant bivouacs is potentially relevant to the climate niche and responses of these ants and of associated symbionts to climate change. Brood, callow (newly eclosed) workers, and myrmecophilic inquilines are seldom seen outside bivouacs except during colony emigrations, suggesting bivouacs could buffer them from thermal extremes (Schneirla, Brown and Brown 1954 ; Rettenmeyer 1962a , 1963 ; Rettenmeyer et al. 2011 ). We tested whether surface or sub-surface conditions exceeded the thermal tolerance limits of inquiline Calymmodesmus sp. millipedes collected from a subject L. praedator bivouac. Methods Nest structure Two L. praedator bivouacs were observed in montane forest in July 2014, March 2015, and June 2015 on the Pacific slope of the continental divide near Monteverde, Costa Rica. Another bivouac was observed in premontane forest in April 2016 in the Children’s Eternal Rainforest near San Gerardo Research Station on the Atlantic slope. The taxon referred to here as L. praedator was morphologically consistent with ‘matte-face’ Labidus sp. Cac 1 as described by Barth, Moritz and Kraus ( 2015 ). External nest structure and ant activity were observed over multiple days for two active bivouacs (bivouacs A and B) as follows: Bivouac A (N10°18.113′ W84°48.109′, 1500 masl) was observed active 11–22 July 2014. During this time we noted external nest structure, lightly probing the bivouac with a machete and spade to assess ant presence and activity in the vicinity of the surface. We encountered bivouac A on 11 July 2014, and checked the bivouac site four additional times on 20 July 2014, 22 July 2014, 27 July 2014, and nearly 1 year later on 18 June 2015 (after the ants had departed). The ants had emigrated sometime between 22 July and 27 July 2014, the last two observations being of the evacuated bivouac site. We did not excavate this bivouac to observe brood developmental stage. However, the absence of discarded pupal cases prior to and after the colony’s emigration from this site suggests the colony’s brood were larvae. Larval brood are associated with the nomadic phase in Eciton (Schneirla, Brown and Brown 1954 ; Jackson 1957 ). Bivouac B (N10°17.816′ W84°47.951′, 1565 masl) was an active statary-phase bivouac observed 19–26 March 2015. Observations of surface nest structure, colony presence and surface activity were made during this time in the same manner as for bivouac A. Bivouac B was first seen on 19 March 2015. We checked the bivouac site seven additional times on 20, 21, 22, 24, 25 and 26 March 2015, and 18 June 2015. Our March field season ended before the emigration of this colony, but the ants were absent from this site on 18 June 2015. After 5 days of temperature data collection (below), bivouac B was excavated twice, followed by observations and photographing of internal structure. The two excavations of bivouac B took place 24 h apart (25 and 26 March 2015), and yielded consistent results, the ants having reformed the bivouac structure overnight.
Bivouac C (N10°22.375′ W84°46.532′, 950 masl) was an active statary-phase bivouac observed on 22 April 2016. Due to a mat of thick buttress roots associated with the bivouac, a full excavation could not be completed, but we performed a partial excavation and nest surface description. Temperature and humidity measurements Temperature and relative humidity of bivouac B were recorded using iButton hygrochron (measuring humidity and temperature) and thermochron (measuring only temperature) data loggers placed alternately every 10 cm along a thermally inert (wooden) vertical probe from the soil surface to 40 cm depth in the bivouac (iButton: Maxim Integrated™, San Jose, CA, USA) (Fig. 1 ). A reference probe with an identical configuration was placed in soil 1 m away from the bivouac (Fig. 1 ). Data were collected every 5 min for 5 days. Temperature accuracy of all iButtons was confirmed to be within ±0.5 °C (the manufacturer-reported instrument error for thermochrons) via hot and cold water baths (42 and 0 °C, respectively) using a certified glass thermometer. Relative humidity accuracy of hygrochrons was confirmed to be within ±1 % at 0 and 100 % by using desiccants and suspension over an enclosed water bath at 25 °C. Additional temperature readings were taken at various depths during excavation using a hand-held infrared (IR) thermometer (BAFX Products, Milwaukee, WI, USA). IR thermometer accuracy was confirmed using a calibrated thermocouple. Temperatures were recorded with the IR thermometer for the bivouac C surface, the soil surface 1 m away from the bivouac, and in the upper portions of the underground bivouac cavities at 10 and 15 cm (within a gallery underneath one of the roots). Fig. 1 Structure and probe placement for temperature and humidity measurements of bivouac B Full size image Thermal tolerance Critical thermal maximum (CT max ) and minimum (CT min ) were measured for callow and non-callow workers of different body sizes, as well as for inquiline millipedes found within bivouac B. CT max and CT min were measured using standard dynamic methods (Lutterschmidt and Hutchison 1997 ; Diamond et al. 2012 ; Oberg, Toro and Pelini 2012 ) with thermal ramping at a rate of 1 °C every 10 min. Individuals showing immobility for a duration of 10 s were considered to have surpassed their CT max or CT min , respectively. Half of the ants were allowed to acclimate to lab conditions (ranging from 19 to 26 °C over the course of the day) for 24 h prior to thermal assays, while half were run in thermal assays 30 min after collection. The thermal tolerances of these groups were compared to test for effects of acclimation. Statistical analyses All analyses were performed in R statistical software. We used standard linear multifactor analyses of variance to identify significant predictors of bivouac temperatures, environmental temperatures and thermal tolerances (Quinn and Keough 2002 ). We used the ANOVA function to test for significance of predictors by comparing the fit of linear models with and without the inclusion of each predictor variable. Starting with all measured predictors, the Akaike Information Criterion (via the drop1 function) was used to select the order of predictor variable testing and elimination from the model. If the predictor had a significant effect on the model’s fit, it was included in the full model for subsequent analyses. Mean, maximum and minimum daily temperatures and relative humidity were compared across depths and treatments (bivouac versus soil reference probes).
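The analyses just described were run in R (lm, ANOVA, drop1). As a rough Python analogue of one of the reported tests (whether head width predicts CT max), the sketch below compares nested linear models; the data frame and all values in it are hypothetical stand-ins, not the published measurements.

```python
# Nested-model comparison analogous to R's anova(reduced, full), applied to a
# HYPOTHETICAL thermal-tolerance table (values invented for illustration).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical assay table: one row per assayed ant
df = pd.DataFrame({
    "head_width_mm": [0.6, 0.8, 1.1, 1.4, 1.9, 2.3, 0.7, 1.0, 1.6, 2.1],
    "callow":        [0, 0, 0, 1, 0, 0, 1, 1, 0, 0],
    "ctmax_c":       [38.9, 39.4, 40.1, 39.8, 41.0, 41.6, 39.0, 39.7, 40.6, 41.2],
})

reduced = smf.ols("ctmax_c ~ callow", data=df).fit()
full    = smf.ols("ctmax_c ~ callow + head_width_mm", data=df).fit()

# F-test for the added predictor; the AIC comparison mirrors the
# drop1-style screening used to order predictor testing.
print(sm.stats.anova_lm(reduced, full))
print(f"AIC reduced = {reduced.aic:.1f}, full = {full.aic:.1f}")
```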
Linear regressions of soil depth versus daily mean were also performed within each location. Predictors of CT max and CT min included head width (as a proxy for body size), whether the ant was from the acclimated or non-acclimated group, and whether ants were callow. Student’s t tests were run to compare millipedes to all ant size classes for both CT max and CT min . Results Nest surface structure Bivouac A was found active at the base of saplings and a small tree trunk at 1500 masl on 11 July 2014 (Fig. 2 , Supplementary Fig. 1). Bivouac B was found alongside and below the root mass of a fallen tree on 19 March 2015 at 1565 masl (Fig. 1 , Supplementary Fig. 1). Bivouac C was found between and beneath the intertwining roots of two live trees at the top of a ridge on 22 April 2016 at 950 masl (Supplementary Fig. 1). The surface structure of all three bivouacs consisted of low, wide mounds of loose excavated soil (in fine particles) intermixed with colony refuse. In all cases, the excavated soil was distinctive and easily visible against nearby leaf litter. Fig. 2 Surface nest structure of bivouac A on 11 July 2014, showing the loose soil mound covered in fine bits of colony refuse; the cleared bivouac surface appeared prominent and easily distinguished from the surrounding leaf litter Full size image The surface mound of bivouac A was approximately circular and larger in area than the other two mounds (length: 124 cm, width: 141 cm). The surface mound of bivouac B was crescent-shaped around the base of the root mass of a fallen tree (length: 59 cm, width: 32 cm). Mid-elevation bivouac C was visible on the surface as a series of small mounds between root buttresses of two mature trees (entire area length: 81 cm, width: 76 cm). Unlike the other two observed bivouac mound surfaces, that of bivouac B was punctuated with craters approximately 1 cm in diameter. All three bivouacs were covered in a mixture of excavated soil and colony refuse. For bivouac B, this refuse consisted largely of discarded pupal cases, isopod tergites and other arthropod body parts (Supplementary Fig. 2). The surface of bivouac C was covered in what appeared to be fine bits of excavated decomposing wood in addition to excavated soil and colony refuse (cockroach tergites, isopod tergites, and discarded ant pupal cases). The lack of discarded pupal cases on the surface of bivouac A suggests the colony was nomadic at the time. We also noted two possible abandoned bivouac sites within a 500 m radius of bivouac A with similar loose soil and broad circular shape, also located at the base of saplings. Bivouac B was in the late statary phase, with the majority of the pupae having eclosed by the time our observations in March 2015 were complete. Bivouac C was likely in the late statary phase as well, having large numbers of discarded army ant pupal cases among colony refuse. In June of 2015, the former sites of bivouac A and bivouac B were revisited and found to be devoid of any army ant activity. At this time, the soil on the surface of both abandoned nest mounds appeared to have sunk in the absence of L. praedator (27 cm subsidence for bivouac A, 17 cm subsidence for bivouac B) (Supplementary Fig. 3). Internal nest structure and colony strata The initial excavation of bivouac B was performed at 5 pm. At that time, some ant activity was observed on the soil surface.
Excavating the top 10 cm of soil produced considerable defensive activity with the arrival of several hundred soldiers (also observed in bivouac A the previous year and bivouac C the following year). Upon excavation, bivouac B contained mature workers, callow workers of various size castes and hundreds of pupae, indicating synchronous brood development and a colony in the late statary phase; no army ant larvae were observed. The depth at which the first callow workers were observed was 17 cm; however, callow workers were at highest density at depths of 20–35 cm. These callow workers were interspersed with pupae found from 27 to 35 cm depth (Fig. 1 ). Bivouac B did not appear to occupy a large central cavity, but rather consisted of many small tunnels (<1 cm) and chambers within loose soil. Soil adjacent to the bivouac was more compacted and less porous than soil within the bivouac, particularly around roots of the fallen tree. Some tunnels were close to small (0–1 cm diameter) roots of this fallen tree, though the majority of the bivouac structure consisted of small interconnected chambers and tunnels independent of the tree’s root system. On the second day of excavation, bivouac structure was similar, the bivouac having been reconstructed overnight. There was one exception: fewer pupae were present than the previous day and callow workers were relatively more abundant. This is likely due to overnight eclosion of workers. We did not observe the emigration of bivouac B. Excavating bivouac C within 10 cm of the surface of the bivouac revealed large galleries (approximately 2 cm in diameter) beneath some of the most superficial, large (>10 cm diameter) roots. Nest-associated arthropods Numerous white inquiline millipedes (genus Calymmodesmus ) were found within bivouac B (Loomis 1959 ; Rettenmeyer 1962b ). Calymmodesmus were found at high density in the vicinity of the pupae and extended from 20 cm to 43 cm depth in the bivouac (Figs. 1 , 3 ). Other inquilines included one other species of millipede, two morpho-species of Acari, two morpho-species of wingless phorids ( Ecitomyia spp.) found roaming the mound surface, five species of staphylinid beetles (one individual limuloid beetle of genus Vatesus was collected in refuse atop the bivouac), and one species of Scydmaenidae beetle. Other notable associates encountered included one species of winged Phoridae that arrived en masse when we excavated high ant-density portions of the nest, as well as two morphospecies of Collembola (1 sp. Entomobryidae, 1 sp. Poduromorpha). Only Calymmodesmus millipedes were used in thermal tolerance assays as they were the only myrmecophile collected in sufficient numbers to enable within-species replication of critical thermal measures. Photographs of each collected myrmecophile morphospecies are included in the online supplementary materials. Fig. 3 Thermal tolerances of three types of assayed subjects extracted from the bivouac; the black background area represents the bivouac temperatures recorded within the range of depths where each category was collected; the grey background area represents reference probe temperatures at equivalent depths Full size image Temperature and humidity Temperatures at bivouac B as measured by IR thermometer during excavation were 14–15 °C on the surface of the soil 1 m away from the bivouac, with 23.1 °C measured at the brood center of the bivouac during excavation. This was slightly higher than the maximum temperature of 22.6 °C recorded by the 40 cm depth iButton probe in the bivouac.
Worker ants may have moved among bivouac depths in response to variation in surface temperature. At 7:15 am on a cold morning (13.6 °C measured by iButton on the bivouac surface), adult worker ants were not observed at depths less than 13 cm, while on a warm day (surface iButton measured 19.1 °C at 9:49 am) ant activity was observed within 1 cm of the surface of the bivouac. Mean, maximum and minimum daily temperatures (recorded by iButtons) were always equal to or higher in the bivouac than at depth-matched reference points in the soil nearby (Figs. 4 , 5 ; mean F 1,46 = 81.75, p < 0.001; maximum F 1,46 = 166.55, p < 0.001; minimum F 1,46 = 59.09, p < 0.001). Mean daily temperature increased with depth from the surface within the bivouac ( R 2 = 0.91, F 1,23 = 245, p < 0.001); the mean temperature at 40 cm depth was 6.2 °C higher than at the depth-matched reference point in soil (Fig. 4 ). Mean temperatures (±standard deviation) experienced by pupae at depths of 30–40 cm were 21.8 ± 0.4 °C. Daily mean belowground temperatures were only slightly higher than at the surface (by 0.8 °C at 40 cm depth) in soil with no ants ( R 2 = 0.75, F 1,23 = 72, p < 0.001) (Fig. 4 ). Time of day was a significant predictor of temperature ( F 1,286 = 212.46, p < 0.001), accounting for much of the thermal variability at the surface and at 10 and 20 cm depths in both the bivouac and depth-matched reference points (Fig. 5 ). Daily fluctuation in temperature decreased similarly with soil depth in both the reference and bivouac samples (Fig. 5 ). Standard deviation of bivouac temperatures at each depth was equal to or slightly greater than that in the nearby soil (Fig. 4 ). The bivouac therefore experienced reduced variation in temperature relative to surface air temperature, but did not experience reduced variation in temperature compared to nearby soil. Relative humidity remained near 100 % as recorded across all surface and sub-surface probes for both bivouac and soil reference transects (Fig. 4 ). Daily maximum humidity was 100 % for all depths regardless of ant presence. Minimum relative humidity across 5 days was 88.1 % on the surface, with a minimum bivouac humidity of 96.7 %. Fig. 4 a Bivouac and uninhabited (reference) soil temperatures at varying depths from the surface ± standard deviation; dashed lines represent maximum and minimum recorded temperatures across all 5 days; b bivouac and reference percent relative humidity at varying depths from the surface ± standard deviation Full size image Fig. 5 Bivouac ( a ) and reference ( b ) temperatures across times of day, recorded every 5 min and averaged across 5 days of recording; different lines denote probe depth in soil Full size image Ambient and bivouac surface temperatures at premontane bivouac C were higher (as measured using the IR thermometer) than corresponding temperatures at montane bivouac B. At 13:52 on 22 April 2016, surface soil temperatures 1 m from bivouac C ranged from 21.6 to 22.0 °C, while the surface of the bivouac was 22.0–23.6 °C. Ten centimeters below the bivouac surface, temperatures ranged from 24.0 to 24.8 °C, and within a gallery beneath a root approximately 15 cm below the surface, temperatures ranged from 24.5 to 25.3 °C. Thermal tolerances CT max ( F 1,10 = 0.001, p = 0.919) and CT min ( F 1,11 = 0.011, p = 0.973) did not differ with 24 h of acclimation. Acclimation treatments were therefore pooled in subsequent analyses. Larger ants had higher CT max ( F = 12.53, df = 14, p = 0.004), but CT min did not covary with body size ( F 1,14 = 2.540, p = 0.135).
Callow ants had higher CT min than non-callow ants ( F 1,13 = 106.66, p < 0.001), while CT max did not differ significantly between callow and non-callow ants ( F 1,12 = 0.489, p = 0.498) (Fig. 3 ). CT max was higher for mature worker ants than for inquiline millipedes ( F = 19.119, p < 0.001). Labidus praedator workers also had lower CT min than millipedes ( F = 28.619, p < 0.001). Low temperatures were not below CT min for any of the assayed individuals at the bivouac depths where they occurred (Fig. 3 ). However, minimum recorded surface temperatures were colder than tolerable by the most sensitive millipedes, and were only 1.1 °C warmer than tolerated by the most sensitive callow workers (Fig. 3 ; Supplementary Table 1). Discussion Bivouac site selection All observed active and apparently abandoned montane and premontane L. praedator bivouac sites occurred at the base of live or dead trees (Supplementary Fig. 1), suggesting (together with previous accounts) that L. praedator selects bivouac sites in loose soil or cavities created by live tree roots, fallen trees, or other similar structures (Sumichrast and Norton 1868 ; Rettenmeyer 1963 ; Sudd 1972 ; Monteiro, Sujii and Morais 2008 ). However, both the high volumes of loose soil on the surface and the soil subsidence after ant departure indicate excavation by the ants (Supplementary Fig. 2). Bivouac B was able to rebuild overnight, suggesting rapid excavation of several liters of soil. Bivouac thermal and humidity conditions Above-ground bivouacking army ants achieve nest homeostasis via active metabolic warming and active thermal buffering using the interlocked bodies of the ants themselves (Schneirla, Brown and Brown 1954 ; Franks 1989 ; Jones and Oldroyd 2007 ). Our data show that two high-elevation L. praedator colonies at a tropical latitude (N10°18′) actively warmed their bivouacs via collective metabolic heating, while possibly relying on passive thermal buffering effects of soil to reduce diel fluctuations in temperature. Nest heating via the combined metabolic activity of ant workers in close spatial proximity is an active thermoregulation mechanism (Jones and Oldroyd 2007 ), while bivouac moderation of widely varying air temperatures could not be distinguished from passive soil buffering. The L. praedator placement of immobile pupae and callow workers within bivouac B along the thermal gradient, and adult worker movements in response to solar warming, are passive thermoregulatory mechanisms common among many ant species (Jones and Oldroyd 2007 ; Penick and Tschinkel 2008 ). The warmest regions of the statary L. praedator bivouac B were the lower strata containing the brood (pupae) and millipedes. Thermal probes showed these regions were on average 21.8 ± 0.4 °C, with infrared recordings of up to 23.1 °C at the brood center. This is not as high as previously suggested thermoregulatory target windows of E. burchellii (28 ± 1 °C) and E. hamatum (26.6 ± 1.1 °C) in the lowlands (Jackson 1957 ; Franks 1989 ). At 40 cm depth from the surface, the L. praedator bivouac sustained mean daily temperatures 6.2 °C higher than at the same depth in surrounding soil. This is a greater and longer-sustained warming effect than previously reported for any army ant bivouac (Jackson 1957 ; Franks 1989 ). We also recorded higher bivouac surface temperatures relative to soil surface temperatures 1 m away, and increasing temperatures with bivouac depth, within the superficial portions of mid-elevation bivouac C (950 masl).
Both observations suggest that bivouac metabolic warming is not restricted to montane bivouacs. However, the fact that even superficial portions of mid-elevation bivouac C were warmer than the warmest portions of montane bivouac B (1565 masl) suggests that internal bivouac temperature is not uniform across elevations, and that high-elevation bivouacs may struggle to raise bivouac temperatures to those optimal for brood. Temperatures in above-ground bivouacs at high elevations have yet to be measured, but such measurements will likely shed further light on this interplay between microhabitat and elevational thermal effects on bivouac warming. Ants in general, and particularly soft-bodied ant larvae, are susceptible to desiccation in a wide variety of environments (Hölldobler and Wilson 1990 ). For bivouac B, although our humidity and temperature measurements were taken during the dry season, sub-surface and surface relative humidities near 100 % were recorded regardless of ant presence (Fig. 4 ). This suggests L. praedator bivouacs are not limited by moisture availability in the lower montane wet forest life zone. Future studies of how L. praedator bivouacs respond to dry season conditions in seasonally dry forests (such as in Guanacaste) may shed light on adaptations for dealing with low humidity. The presence of discarded tergites in the refuse of L. praedator confirms previous observations that this species feeds largely on isopods in the premontane life zones of Monteverde (Supplementary Fig. 2) (J. T. Longino, pers. comm.; Longino 2010 ), with evidence of small cockroaches and other insects consumed as well. We observed L. praedator feeding on terrestrial amphipods at raid fronts in Monteverde and San Gerardo in June 2015 and April 2016, respectively. Although we observed refuse atop all montane and premontane bivouacs, there are no accounts of either loose-dirt mound construction or refuse-topping in low-elevation L. praedator bivouacs (Sumichrast and Norton 1868 ; Rettenmeyer 1963 ; Fowler 1979 ). Whether this refuse topping serves an adaptive thermal warming function remains to be tested. Thermal sensitivities of bivouac occupants Inquiline millipedes ( Calymmodesmus sp.) were more thermally sensitive to both heat and cold than mature or callow ants, while callow worker ants differed from mature workers in their ability to function at low temperatures. The higher CT min of callow workers may be related to differences in cuticular thermal resistance (Galushko et al. 2005 ). The impact of cold on callow movement may, at least in part, explain the common observation of callow workers being carried in emigrations immediately following the statary phase (Rettenmeyer 1962b , 1963 ). In the case of Calymmodesmus millipedes, bivouac surface temperatures were within 1.1 °C of the mean CT min for all millipedes, and were lower than what could be tolerated by the most sensitive individuals. Choice of location within the bivouac corresponded to these sensitivities, with Calymmodesmus being found deep within the nest where temperatures were farthest from their CT min . These findings show millipede thermal specialization to bivouac homeostatic conditions, suggesting an obligate relationship between host and inquiline. This more thermally stable region of the bivouac was also where the pupae were housed, indicating a narrow thermal tolerance range for L. praedator pupae.
Callow workers were less sensitive than millipedes and correspondingly were encountered at shallower depths than inquiline millipedes or pupae, where temperatures are more variable over the course of the day. Mature workers showed the widest distribution of activity within the bivouac, occupying depths from 0 cm to beyond 45 cm at low densities. The cold sensitivity of callow adult workers, combined with low ambient temperatures, suggests that active warming in L. praedator bivouacs is an adaptive response to cold limitation at this high-elevation site.
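The group comparisons above are one-way ANOVAs on critical thermal limits. As a minimal illustration of how such a comparison is computed (the CTmin values below are invented placeholders, not the study's data, chosen only so the group sizes reproduce the F1,13 degrees of freedom reported above):

```python
# Minimal sketch of the one-way ANOVA used to compare critical thermal
# limits between worker groups. The CTmin values are hypothetical.
from scipy import stats

ctmin_callow = [7.9, 8.3, 8.1, 8.6, 7.8, 8.2, 8.4]       # hypothetical callow workers (deg C)
ctmin_mature = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 4.3, 3.7]  # hypothetical mature workers (deg C)

f_stat, p_value = stats.f_oneway(ctmin_callow, ctmin_mature)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# With 7 + 8 = 15 individuals in two groups, the test has (1, 13) degrees
# of freedom; with two groups, a one-way ANOVA is equivalent to a
# two-sample t-test (F = t**2).
```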
For their colonies to survive at high altitudes, army ants keep their underground nests as much as 13 degrees F warmer than surface temperatures, according to a new study by Drexel University scientists. Although they're a nomadic species—which is relatively rare for ants—Labidus praedator create underground nests (called bivouacs) that harbor their eggs and young offspring (brood). How hot or cold that bivouac gets may be critical for the ability of the ants to stay mobile and raise their young. "As is the case for most insects, army ant brood temperature is a key determiner of the time required for each hatched egg to reach adulthood," said Kaitlin Baudier, a graduate student in Drexel's College of Arts and Sciences, who teamed with Drexel professor Sean O'Donnell, PhD, to publish their findings in Insectes Sociaux. L. praedator carefully time the lifecycle of their young. Efficiency is key. When their young are larvae (freshly hatched offspring), the colony can remain on the move. But when those larvae become pupae (similar to a chrysalis in butterflies, the stage just before the ants become adults), the colony is tied down to one bivouac for weeks. Since there is an ideal temperature range that best facilitates offspring growth, it's important for the ants to keep their nest nice and toasty. And when those bivouacs occur at higher elevations that becomes especially vital. "At high elevations, bivouac heating may be even more important than at low elevations because ambient air temperatures are further below optimum growth temperatures," Baudier explained. Studying army ant colonies in Costa Rica, Baudier and O'Donnell tracked three different bivouacs, the lowest constructed at 950 meters above sea level and the highest at 1,565. While Drexel's researchers measured surface and soil temperatures that frequently dropped too low for the ants' young, the bivouacs were consistently kept warm enough to remain in their ideal temperature range. In fact, the warmest temperatures were recorded lower in the nest, about 40 centimeters down, where the youngest offspring resided. Underground areas near the bivouac were only slightly warmer than above-ground—just about 1 degree F. Meanwhile, the bivouacs' mean temperature was 13 degrees F above the surface. Higher temperatures didn't just result from residual heat energy in the ground—the ants' bodies were warming it. While previous research focused on above-ground army ants who make their bivouacs at lower (and warmer) elevations, Baudier and O'Donnell's research showed the resiliency of army ants when confronting colder mountain environments. "This study lifts the roof on what we thought army ants were capable of in terms of warming their young in the face of the more extreme cold and wet conditions at high elevations," Baudier said. Still, that doesn't mean that the ants are ready to climb Mount Everest. Army ants do a good job of warming their nests, but there might be a ceiling to their capabilities. In the lower bivouac the researchers studied (constructed at 950 meters above sea level) even the coolest portions were consistently warmer than the highest temperatures recorded in the bivouac at 1,565 meters above sea level. "The record high elevation for an army ant in Costa Rica was a specimen of Labidus coecus—a close relative, though more subterranean than L. praedator—that was found at 3,000 meters above sea level. However, in the case of L. praedator, the highest I'm aware of is about 1,750 meters above sea level," Baudier said. 
"I do suspect that cold temperatures are a major factor in setting these upper elevational ranges. The highest bivouacs seem to struggle to keep warm in wet, cold soil." "We suspect ants in the mountains have to expend a lot of energy to keep their nests warm," O'Donnell added.
10.1007/s00040-016-0490-2
Computer
A space-time coding metasurface antenna for efficient and secure communications
Geng-Bo Wu et al, Sideband-free space–time-coding metasurface antennas, Nature Electronics (2022). DOI: 10.1038/s41928-022-00857-0 Journal information: Nature Electronics
https://dx.doi.org/10.1038/s41928-022-00857-0
https://techxplore.com/news/2022-11-space-time-coding-metasurface-antenna-efficient.html
Abstract Applications such as microwave wireless communications, optical light fidelity, and light detection and ranging systems require advanced interfaces that can couple guided waves from in-plane sources into free space and manipulate the extracted free-space waves. Spatiotemporally modulated metasurfaces can control electromagnetic waves, but such systems are typically limited to free-space-only and waveguide-only platforms. Here we report a 1-bit space–time-coding metasurface antenna that can extract and mould guided waves into any desired free-space waves in both space and frequency domains. The waveguide-integrated metasurface antenna also provides a self-filtering phenomenon that overcomes the issue of sideband pollution found in traditional spatiotemporally modulated metasurfaces. To illustrate the capabilities of the approach, we use the metasurface antenna for high-efficiency frequency conversion, fundamental-frequency continuous beam scanning and independent control of multiple harmonics. Main Metamaterials are artificial materials that typically consist of subwavelength structures arranged to achieve macroscopic electromagnetic (EM) properties 1, 2. Metasurfaces—the two-dimensional (2D) counterpart of metamaterials—can offer distinct advantages over metamaterials such as low insertion loss, easy fabrication and conformability 3, 4, 5. Metasurfaces can manipulate EM waves—including their amplitude, phase and polarization—and can dynamically control EM waves by integrating functional materials. In particular, spatial gradient metasurfaces can tailor the momentum of EM waves, enabling a wide variety of functions including abnormal deflection 5, 6, orbital angular momentum generation 7, 8, holography 9, 10 and cloaking 11, 12. By loading tunable components into metasurfaces, the wavefront of EM waves can be switched on demand 13, 14, 15. Recently, time-varying metasurfaces with space modulation—that is, spatiotemporally modulated metasurfaces (STMMs)—have been developed. STMMs have one or more spatially and temporally variant parameters, such as the reflection/transmission phase, amplitude, surface impedance and conductivity of the constitutive material 16, 17, 18, 19, 20, 21. Compared with conventional gradient metasurfaces, STMMs add an additional dimension—time—into the metasurface design, enabling new physical phenomena and a new level of EM wave manipulation in the momentum and frequency spaces. STMMs can be divided into three categories according to whether their excitation and output are guided waves or free-space waves. The first type controls the waves inside waveguides. Waveguide-based STMMs can break time-reversal symmetry and Lorentz reciprocity by cascading two standing-wave modulators with a relative phase shift 22, 23 or through the unidirectionally propagating permittivity 19, 20 /conductivity 17, 18 of the constitutive medium. Non-reciprocal STMMs can behave as optical isolators or circulators, potentially useful for on-chip integrated photonics 22. In addition, space–time modulation allows the spectra of guided waves to be controlled by forming photonic gauge potentials, leading to diverse phenomena such as frequency comb generation, negative refraction, perfect focusing and Bloch oscillations in the synthetic frequency dimensions 24, 25, 26. In these cases, both input and output are guided modes propagating inside the waveguides.
The second type of STMM is excited by external free-space waves and offers novel physical phenomena such as the ability to overcome Lorentz reciprocity constraints 27, 28, 29, 30, Doppler cloaks 31, 32, 33, harmonic generation 16, 34, 35, 36, frequency conversion 37, 38, 39, 40 and direct information modulation 41, 42, 43. In these situations, both input and output are free-space waves, making further on-chip integration difficult. The third type of STMM aims to bridge the gap between free-space and guided modes 18, 44, 45. We term this type a 'metasurface antenna' to distinguish it from the other two types and to emphasize its unique property of linking guided waves in transmission lines and free space. Such STMMs have received limited attention to date, and only the breakdown of Lorentz reciprocity has been experimentally demonstrated 44. Other wave-based spatiotemporal manipulations have yet to be explored. Such metasurface antennas, however, have a range of potential applications including microwave wireless communications, optical light fidelity, and light detection and ranging systems. These systems require advanced interfaces that can couple guided waves from in-plane sources into free space and manipulate the extracted free-space waves on demand. Phased arrays 46, 47 are well explored at microwave frequencies to implement beamforming and steering, but they are costly and power hungry. Alternatively, edge couplers 48 and surface gratings 49 used for optical on-chip guided-to-space coupling have limited functionalities in terms of light control. Metasurfaces offer powerful opportunities to bridge the gap between waveguides and free space, but most metasurface antennas have space-only modulation and do not exploit the time dimension 50, 51, 52, 53. Sideband pollution is a key bottleneck in the broad application of STMMs. In particular, periodic temporal modulation in STMMs and time-modulated arrays 54, 55, 56 produces unwanted harmonic radiation, severely interfering with the useful signals. In this Article, we report a self-filtering phenomenon for waveguide-integrated metasurface antennas. Our space–time-coding (STC) metasurface antennas can achieve versatile and complex guided-wave-to-free-space functions in both spectral and spatial domains and are free of sideband pollution. We also theoretically predict and experimentally demonstrate that a 1-bit coding scheme—the simplest digital version of metasurfaces—can achieve full control of the frequency and momentum contents of EM waves, which typically requires continuous or multibit spatiotemporal modulation for conventional free-space-only STMMs. We illustrate the flexibility in frequency and space manipulation of our STC metasurface antenna by using it for high-efficiency frequency conversion, fundamental-frequency continuous beam scanning, and multiharmonic independent control. Our STC metasurface antenna could be of use in wireless communications, cognitive radar and integrated photonics. Theoretical formulation A conceptual illustration of the waveguide-integrated STC metasurface antenna, consisting of a one-dimensional array of meta-atoms, is shown in Fig. 1. The propagating guided waves inside the waveguide can be extracted and moulded into desired out-of-plane free-space waves in both spatial and frequency domains.
Positive–intrinsic–negative (PIN) diodes are utilized as the active components embedded into each meta-atom to independently and periodically switch the element between the coupling and non-coupling states (Methods provides the detailed configurations of the meta-atom). This corresponds to a 1-bit STC digital scheme, in which '1' and '0' represent the radiating and non-radiating states of the unit cell, respectively. The PIN diodes are controlled by a field-programmable gate array (FPGA) such that the coupling state of the meta-atoms can be dynamically programmed in a predesigned STC sequence. Fig. 1: Conceptual illustration of the STC metasurface antenna. The propagating guided waves inside the waveguide can be converted and moulded into any desired out-of-plane free-space waves in both frequency and momentum domains. PIN diodes are incorporated into each meta-atom to switch the element between the coupling ('1') and non-coupling ('0') states. The excitation states of the meta-atoms are periodically switched according to the applied 1-bit digital '0/1' STC matrix. In this illustrated example, the guided mode is converted into free-space modes at three different harmonic frequencies, whose radiation beams can be independently and precisely controlled. The meta-atom extracts energy from the waveguide and can be viewed as a waveguide-fed magnetic dipole that radiates EM waves into free space. Assuming that the lattice size of the meta-atom is much smaller than the wavelength, the meta-atoms can be viewed as sampling the feeding guided waves propagating inside the waveguide at each element position. Provided that the time-modulation period T is much larger than the radio-frequency (RF) period, the coupled magnetic field just above the metasurface aperture can be written as $$\vec{H}(x,t) = \hat{y}\,\mathrm{e}^{\mathrm{j}\omega_0 t} H_0\, C(x,t)\, \mathrm{e}^{-\mathrm{j}\xi_{\mathrm{gw}} x}\, \varPi(x),$$ (1) where ω0 is the angular frequency of the injected RF signal, and H0 and ξgw are the constant magnitude of the magnetic field and the wavenumber inside the waveguide, respectively. Also, Π(x) is the rectangular function, accounting for the finite length of the metasurface aperture. Assume that the excitation source is located at x = 0 and that the power is delivered along the waveguide of length L in the x direction; then \(\varPi(x) = \begin{cases} 1, & 0 < x < L \\ 0, & \text{else.} \end{cases}\) In equation (1), C(x,t) is the coupling coefficient at different positions and instants of time. If the coupling coefficient is space and time invariant, that is, C(x,t) = C0, the wave coupled by the meta-atom is a slow wave confined in the waveguide. This is because ξgw is larger than the free-space wavenumber k0 = ω0/c, where c is the speed of light in free space. Now, if C(x,t) is a periodic function of time with period T, the coupled field above the waveguide also changes periodically with time. The periodic coupling coefficient C(x,t) can be decomposed into a Fourier series in the frequency domain as $$C(x,t) = \sum_{m=-\infty}^{\infty} c(x,\omega_0 + m\Delta\omega)\, \mathrm{e}^{\mathrm{j} m\Delta\omega t},$$ (2) where Δω is the modulation angular frequency.
The Fourier coefficients c(x, ω0 + mΔω) can be calculated by $$c(x,\omega_0 + m\Delta\omega) = \frac{1}{T}\int_0^T C(x,t)\, \mathrm{e}^{-\mathrm{j} m\Delta\omega t}\, \mathrm{d}t,$$ (3) where c(x, ω0 + mΔω) represents the equivalent complex coupling coefficient at the mth-order harmonic frequency at coordinate x. Substituting equation (2) into equation (1), we obtain the coupled magnetic field as $$\vec{H}(x,\omega) = \hat{y} H_0 \sum_{m=-\infty}^{\infty} \mathrm{e}^{\mathrm{j}(\omega_0 + m\Delta\omega)t}\, c(x,\omega_0 + m\Delta\omega)\, \mathrm{e}^{-\mathrm{j}\xi_{\mathrm{gw}} x}\, \varPi(x).$$ (4) We can observe from equation (4) that the temporal modulation of the coupling coefficient results in a frequency comb centred at ω0 with frequency interval Δω between the spectral lines. Most importantly, the modulation adds an equivalent complex amplitude distribution c(x, ω0 + mΔω) to the original guided wave at each harmonic frequency. In particular, the field at the fundamental frequency (input frequency) reads $$\vec{H}(x,\omega_0) = \hat{y} H_0\, \mathrm{e}^{\mathrm{j}\omega_0 t}\, c(x,\omega_0)\, \mathrm{e}^{-\mathrm{j}\xi_{\mathrm{gw}} x}\, \varPi(x),$$ (5) where $$c(x,\omega_0) = \frac{1}{T}\int_0^T C(x,t)\, \mathrm{d}t.$$ (6) From equation (6), it is evident that the equivalent coupling coefficient at the fundamental frequency is the time average of the time-varying coupling coefficient, independent of the spatiotemporal modulation strategy adopted for C(x,t). Intuitively, periodically changing the instantaneous energy coupled from the waveguide has a time-averaging effect on the coupling coefficient at the fundamental frequency. This property is beneficial for achieving fundamental-frequency beam scanning and sideband radiation suppression in the following sections. Now we consider the simplest type of STC modulation: ON–OFF switching of the PIN diodes, corresponding to switching each meta-atom between the coupling (coding element '1') and non-coupling (coding element '0') states. The coupling coefficient based on this 1-bit coding scheme in one time period T can be written as $$C(x,t) = C_0 \begin{cases} 1, & t_\mathrm{s}(x) - \tau(x)/2 \le t/T \le t_\mathrm{s}(x) + \tau(x)/2 \\ 0, & \text{else,} \end{cases}$$ (7) where ts(x) and τ(x) are the normalized time shift and duty cycle at position x, respectively, subject to 0 ≤ ts(x), τ(x) ≤ 1. The form of C(x,t) is shown in Supplementary Fig. 5. The ON–OFF switching between coupling and non-coupling energy from the waveguide is a 1-bit amplitude modulation (AM) scheme.
In this case, the equivalent complex coupling coefficient in equation (3) can be represented as (Supplementary Note 5 provides detailed derivations) $$c(x,\omega_0 + m\Delta\omega) = C_0\, \tau(x)\, \mathrm{sinc}[\pi m \tau(x)]\, \mathrm{e}^{-\mathrm{j} 2\pi m t_\mathrm{s}(x)}.$$ (8) From equation (8), the amplitude of the equivalent coupling coefficient is determined by the normalized duty cycle τ, whereas the phase shift imparted by the STC scheme depends mainly on the normalized time shift ts. The aperture field above the waveguide at the mth-order harmonic frequency can be written as $$\vec{H}(x,\omega_0 + m\Delta\omega) = \hat{y} H_0 C_0\, \tau(x)\, \mathrm{sinc}[\pi m \tau(x)]\, \mathrm{e}^{-\mathrm{j}[\xi_{\mathrm{gw}} x + 2\pi m t_\mathrm{s}(x)]}\, \varPi(x).$$ (9) The corresponding far-field scattering pattern of the metasurface antenna at the mth-order harmonic frequency ω0 + mΔω can be obtained by taking the spatial Fourier transform of equation (9): $$H_{\mathrm{rad}}(\theta, \omega_0 + m\Delta\omega) = H_0 C_0\, \mathrm{FT}\{\tau(x)\, \mathrm{sinc}[\pi m \tau(x)]\, \mathrm{e}^{-\mathrm{j}[\xi_{\mathrm{gw}} x + 2\pi m t_\mathrm{s}(x)]}\, \varPi(x)\}.$$ (10) In the above theoretical modelling, the mutual coupling between the meta-atoms is ignored, considering that the slot elements operate in the off-resonance state. On the other hand, only the fundamental frequency (input frequency) of the guided wave is considered in this model for the following reasons. First, the ON–OFF switching of the PIN diodes perturbs the guided wave only weakly (Supplementary Fig. 3), and therefore the modulation effect on the guided wave is negligible. More importantly, the higher-order harmonic frequencies cannot satisfy the phase-matching condition and are suppressed inside the waveguide, as revealed in the following sections. According to equation (8), the STC scheme introduces an additional momentum \(k_m = 2\pi m\,\frac{\mathrm{d}t_\mathrm{s}(x)}{\mathrm{d}x}\) along the x axis to the coupled field. Accordingly, equation (9) can be rewritten as $$\vec{H}(x,\omega_0 + m\Delta\omega) = \hat{y} H_0 C_0\, \varPi(x) \begin{cases} \tau(x)\, \mathrm{e}^{-\mathrm{j}\xi_{\mathrm{gw}} x}, & m = 0 \\ \tau(x)\, \mathrm{sinc}[\pi m \tau(x)]\, \mathrm{e}^{-\mathrm{j}(\xi_{\mathrm{gw}} + k_m) x}, & m = \pm 1, \pm 2, \ldots \end{cases}$$ (11) At first glance, the 1-bit STC metasurface antenna appears similar to 1-bit free-space-only STMMs, which have very limited spatial and frequency manipulation flexibility. However, 1-bit STC metasurface antennas possess powerful EM wave controllability in both spatial and spectral domains by fully leveraging their unique guided-wave-driven nature. Our illustrative examples of this include high-efficiency frequency conversion, fundamental-frequency continuous beam scanning and multiharmonic independent control—none of which would be possible with conventional 1-bit free-space-only STMMs.
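Equation (8) compactly links the 1-bit ON–OFF timing to the complex coupling coefficient at each harmonic. As a quick numerical sanity check (a sketch of ours, not code from the paper), the following evaluates the Fourier integral of equation (3) for the rectangular pulse of equation (7) and compares it with the closed form of equation (8):

```python
# Numerical check of equation (8): the Fourier coefficients of a 1-bit
# ON-OFF coupling waveform equal C0 * tau * sinc(pi*m*tau) * exp(-j*2*pi*m*ts).
# Illustrative sketch; parameter values are arbitrary.
import numpy as np

C0, tau, ts = 1.0, 0.5, 0.3    # coupling amplitude, duty cycle, normalized time shift
N = 100_000                    # samples of normalized time t/T over one period
t = np.arange(N) / N

# Rectangular pulse of width tau centred at ts, periodic in t/T (equation (7))
C = C0 * (np.mod(t - ts + tau / 2, 1.0) < tau).astype(float)

for m in range(-3, 4):
    # Equation (3): c_m = (1/T) * integral over one period of C(t) e^{-j m dw t}
    c_numeric = np.mean(C * np.exp(-1j * 2 * np.pi * m * t))
    # np.sinc(z) = sin(pi z)/(pi z), so np.sinc(m*tau) is sinc(pi*m*tau) above
    c_closed = C0 * tau * np.sinc(m * tau) * np.exp(-1j * 2 * np.pi * m * ts)
    print(m, abs(c_numeric - c_closed) < 1e-3)  # True for every m
```

The m = 0 coefficient reduces to C0τ, the time average of equation (6), consistent with the fundamental-frequency property used later for beam scanning.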
Frequency conversion and beam steering We first consider frequency conversion—a process that translates the input waveguide signal at one frequency into another in free space (Fig. 2a). The dispersion diagram (Fig. 2b) qualitatively explains the operating principle. The input signal with frequency ω0 is injected into the waveguide and propagates forward along the +x direction in the form of guided waves, corresponding to the red dot located on the guided-mode dispersion curve (Fig. 2b). According to equation (11), the space–time modulation results in new spectrum generation with frequency interval Δω and an additional tangential momentum km at each harmonic frequency. We use this momentum to push the target conversion frequency (the +1 harmonic in this case) into the light cone such that it can radiate as leaky waves with a specified radiation angle θr. Accordingly, each higher-order harmonic is also imparted with a tangential momentum, m times that of the +1 harmonic. We can observe from Fig. 2b that the other higher-order harmonics lie outside the light cone; therefore, they cannot be radiated into free space owing to momentum mismatch. Most importantly, these unwanted higher-order harmonics are prohibited inside the waveguide because they cannot fulfil the phase-matching conditions. As such, only the target +1 harmonic frequency is generated and extracted into free space, whereas all the other unwanted harmonics are suppressed. We term this selective radiation of harmonics into free space the 'self-filtering' property of the waveguide-integrated metasurface antenna. Fig. 2: High-efficiency frequency conversion and beam steering. a, Schematic of the STC metasurface antenna for high-efficiency frequency conversion and beam scanning. The input guided waves at frequency ω0 are translated into free-space waves at the target harmonic frequency, whereas other undesired harmonics are highly suppressed. The output angle of the target harmonic can be controlled at will according to the applied digital '0/1' STC matrix. b, Dispersion diagram for frequency conversion. STC modulation introduces tangential momentum km into the target translated harmonic, pushing it into the light cone. Other undesired higher-order harmonics are in the forbidden regions and filtered out by the phase-matching conditions. c–e, The '0/1' digital STC matrices for +1 harmonic frequency translation with output angles of −50° (c), −10° (d) and +30° (e). f–h, Measured power distributions at different harmonics corresponding to c–e, respectively. i–k, Theoretically calculated and measured radiation patterns at the +1 harmonic frequency corresponding to c–e, respectively. To allow the target harmonic frequency to radiate in direction θr, the compensating momentum imparted by the STC modulation should satisfy $$\xi_{\mathrm{gw}} + k_m = \xi_m \sin\theta_\mathrm{r},$$ (12) where ξm is the free-space wavenumber at the target mth-order harmonic frequency. The required normalized time shift ts(x) in the 1-bit STC scheme can be resolved as $$t_\mathrm{s}(x) = \frac{\xi_m \sin\theta_\mathrm{r} - \xi_{\mathrm{gw}}}{2m\pi}\, x.$$ (13) Here we demonstrate upward conversion to the +1 harmonic frequency and beam-direction manipulation. To avoid the +2 harmonic entering the light cone as the +1 harmonic scans from the backward to the forward end-fire direction, the normalized duty cycle τ is set to 0.5; a numerical sketch of this steering design is given below.
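To make the design flow concrete, here is an illustrative sketch (ours, not the authors' code) that computes ts(x) from equation (13), samples the ON–OFF sequence into a '0/1' matrix, and predicts the +1-harmonic far-field pattern as a discrete form of equation (10). The element count, pitch and 27 GHz carrier follow the prototype described later; the guided wavenumber ξgw is an assumed placeholder value.

```python
# Illustrative sketch: steer the +1 harmonic to theta_r via equation (13),
# build the '0/1' STC matrix, and predict the far field (equation (10)).
# xi_gw is an assumed slow-wave value, not a value given in the paper.
import numpy as np

c_light = 3e8
f0 = 27e9                            # input frequency (from the paper)
n_elem, pitch = 82, 1e-3             # 82 meta-atoms, 1 mm period (from the paper)
k0 = 2 * np.pi * f0 / c_light
xi_gw = 1.4 * k0                     # assumed guided wavenumber (> k0, slow wave)
m, tau = 1, 0.5                      # target harmonic; tau = 0.5 nulls the +2 harmonic
theta_r = np.deg2rad(-50.0)          # desired output angle of the +1 harmonic

x = np.arange(n_elem) * pitch
ts = (k0 * np.sin(theta_r) - xi_gw) * x / (2 * np.pi * m)   # equation (13), xi_m ~ k0

# '0/1' STC matrix: rows are meta-atoms, columns are time slots in one period
n_slots = 100
t = np.arange(n_slots) / n_slots
stc = (np.mod(t[None, :] - ts[:, None] + tau / 2, 1.0) < tau).astype(int)

# Equivalent coupling coefficient of each element at harmonic m (equation (3))
c_m = (stc * np.exp(-1j * 2 * np.pi * m * t)[None, :]).mean(axis=1)

# Discrete version of equation (10): spatial Fourier sum over the aperture
theta = np.deg2rad(np.linspace(-89, 89, 1781))
pattern = np.exp(1j * (k0 * np.sin(theta)[:, None] - xi_gw) * x[None, :]) @ c_m
print("beam peak near", round(np.rad2deg(theta[np.argmax(np.abs(pattern))])), "deg")
```

Because the modulation frequency is many orders of magnitude below ω0, ξm ≈ k0 is used for the +1 harmonic; with the assumed ξgw the predicted beam peaks at approximately the requested −50°.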
With τ = 0.5, the amplitude of the equivalent coupling coefficient for the +2 harmonic is zero. The radiation direction of the extracted beam can be flexibly tuned by changing the time shift ts(x) according to equation (13). To validate the above concept and design, we create an STC metasurface antenna based on a substrate-integrated waveguide (SIW) operating at 27 GHz. The STC metasurface antenna has 82 meta-atoms with a total length of 7.38λ0, where λ0 is the free-space wavelength at 27 GHz. The detailed metasurface prototype design, modelling and characterization are given in Methods. As illustrative examples, we consider scanning the output angle of the translated +1 harmonic to −50°, −10° and +30°. The corresponding required time shifts ts(x) are given in Supplementary Fig. 6. The ON–OFF sequence can be obtained once ts(x) and τ are determined. The STC matrix for each beam direction is obtained by sampling the ON–OFF sequence at each meta-atom position, and the results are shown in Fig. 2c–e. The corresponding measured power distributions at different harmonics are given in Fig. 2f–h (Supplementary Table 2 lists the specific power distributions and calculations). We observe that the target +1 harmonic is dominant, with a conversion efficiency higher than 80%. The corresponding measured radiation patterns at the +1 harmonic are illustrated in Fig. 2i–k, agreeing well with the predicted results calculated by equation (10). Moreover, the scattering patterns in Fig. 2i–k show that the converted waves at the +1 harmonic frequency are correctly scanned to the intended directions. Furthermore, we can freely choose which target harmonic frequency is converted and radiated into free space by altering the STC matrix. To this end, we compress the above STC matrix for the +1 harmonic radiation to 1/m of its original duration in the time domain and repeat the new matrix m times in one modulation period T. The synthesized STC matrices for the −1, −2 and −3 harmonic conversions are given in Fig. 3a–c. The corresponding measured harmonic power distributions for the three conversion cases are illustrated in Fig. 3d–f. The measured scattering patterns at different harmonics are shown in Fig. 3g–i. Again, we observe that the desired −1, −2 and −3 harmonics dominate the output free-space waves, with conversion efficiencies larger than 82% (Supplementary Table 3 lists the specific measured power distributions). Moreover, the STC metasurface antenna launches a high-directivity beam at the target harmonic frequency, whereas all the other undesired harmonics are highly suppressed. Fig. 3: Arbitrary harmonic frequency conversion. a–c, Required '0/1' digital STC matrices for translating the guided waves at ω0 into free-space waves at the −1 (a), −2 (b) and −3 (c) harmonics. d–f, Measured power distributions at different harmonics for the m = −1, −2 and −3 harmonic frequency conversions corresponding to a–c, respectively. g–i, Measured radiation patterns at different harmonic frequencies corresponding to a–c, respectively. In a physical sense, the STC metasurface antenna can be interpreted as a heterodyne transmitter implementing frequency mixing, filtering, phase shifting and radiation (Supplementary Fig. 7 shows the configuration of a traditional heterodyne transmitter). The temporal modulation in the STC metasurface antenna allows the mixing of the input RF and modulation signals to generate new harmonics.
The stringent momentum-matching conditions act as a perfect band-pass filter, passing the target harmonic frequency and filtering out unwanted ones. The time shift at different positions, ts(x), leads to an equivalent phase shifter attached to each meta-atom. Each meta-atom behaves as a magnetic dipole antenna element that radiates EM waves into free space. As a result, the functionalities of various active and passive components in traditional RF heterodyne transmitters, including mixers, filters, power dividers, phase shifters and antenna arrays, can be implemented by and integrated into our single STC metasurface antenna. This creates advantages in wireless device design, including simpler architecture, higher integration, higher signal-to-noise ratio and lower power consumption. Most of the reported free-space-only STMMs 36, 37, 38, 40, 41, 42, 43 produce undesired harmonics, also known as sidebands, causing severe spectrum pollution. Nevertheless, as demonstrated in another work 28, the phase-mismatch method can also be used to achieve unidirectional up-/downconversion and hence suppress the sidebands for free-space-only STMMs. On the other hand, using discrete phase modulation reduces the conversion efficiency of free-space-only STMMs. For instance, a 1-bit free-space-only STC metasurface theoretically has a maximum power conversion efficiency of around 40% (ref. 38), with the rest contributing to unwanted harmonic radiation. By contrast, our 1-bit STC metasurface antennas feature a substantially higher conversion efficiency, above 80%. Fundamental-frequency continuous beam scanning In the above design, beam steering at harmonic frequencies is achieved by varying the momentum imparted by the STC scheme. However, the same mechanism cannot be applied at the fundamental frequency. According to equation (11), the STC scheme only introduces an equivalent amplitude, without producing tangential momentum, at ω0. Here we use this equivalent amplitude to perform spatial AM for fundamental-frequency beam scanning (Fig. 4a). Spatial AM 57 —an analogy to the well-known AM radio technology but in the spatial domain—allows the n = −1 space harmonic to become fast and radiate into free space. A sinusoidal amplitude distribution along the waveguide aperture should be formed to achieve high-directivity radiation 57. To this end, the normalized duty cycle should be $$\tau(x) = \bar{\tau}\left[1 + M\cos\left(\frac{2\pi}{\varLambda}\, x\right)\right],$$ (14) where \(\bar{\tau}\) is the average duty cycle, M is the modulation depth and Λ is the spatial modulation period. Detailed effects of these parameters on the radiation performance are provided elsewhere 57. The STC matrix for radiation direction θr = 20° and its corresponding radiation pattern calculated by equation (10) are given in Supplementary Fig. 8b,d, respectively. Although the STC metasurface antenna launches a high-directivity beam at ω0, strong radiation is also observed in other undesired sidebands, caused by the higher-order space harmonics at these higher-order frequency harmonics. Fig. 4: Fundamental-frequency beam scanning. a, Schematic of the STC metasurface antenna for fundamental-frequency beam scanning. The output angle of the radiation beam at the fundamental frequency can be continuously scanned from the backward end-fire direction towards the forward end-fire direction according to the applied '0/1' STC matrix.
All the undesired higher-order harmonics are highly suppressed. b, Brillouin diagram for the STC metasurface antenna for beam steering at the fundamental frequency. The n = −1 space harmonic is fast and radiates; its output angle can be controlled on demand by varying the spatial modulation period Λ. c, Equivalent amplitude distributions, or normalized duty cycles, along the metasurface aperture for output angles of −20°, +20°, +40° and +50°. d, Theoretically calculated (dashed line) and measured (dots) output angles versus the spatial modulation period Λ. e–h, The '0/1' STC digital matrices for output angles of −20° (e), +20° (f), +40° (g) and +50° (h). i–l, Measured radiation patterns at different harmonic frequencies corresponding to e–h, respectively. In the 'Theoretical formulation' section, equation (6) shows that the equivalent amplitude at the fundamental frequency is the time average of the radiating state of the meta-atom, independent of the adopted STC strategy. To suppress the undesired sidebands, we therefore randomize the original STC matrix while keeping the time-averaged radiation state in one modulation period T fixed. As such, we maintain an equivalent sinusoidal amplitude distribution at ω0 yet introduce random equivalent phase modulations at the higher-order harmonic frequencies, which are filtered out by the phase-matching conditions of the waveguide. The new STC matrix and its calculated radiation patterns are depicted in Supplementary Fig. 8c,e, respectively. We observe that a pencil-like beam is generated at the fundamental frequency ω0, whereas all the undesired sidebands are highly suppressed, at least 28 dB below the fundamental-frequency level. Furthermore, we can continuously tune the output angle of the extracted wave from the backward end-fire direction towards the forward end-fire direction by varying the spatial modulation period Λ, as illustrated in the Brillouin diagram (Fig. 4b). The radiation direction of the n = −1 space harmonic is determined by $$\theta_\mathrm{r} = \sin^{-1}\left(\frac{\xi_{\mathrm{gw}}}{\xi_0} - \frac{2\pi}{\varLambda\, \xi_0}\right).$$ (15) Figure 4d depicts the calculated and measured relationship between the output angle θr and the spatial modulation period Λ. As illustrative examples, we consider output angles of −20°, +20°, +40° and +50°, whose corresponding required equivalent amplitude distributions, or normalized duty cycles, along the length of the metasurface aperture are depicted in Fig. 4c. We observe that the spatial modulation period increases for forward beam scanning. The corresponding STC matrices after randomization are depicted in Fig. 4e–h, and the corresponding measured scattering patterns at different harmonic frequencies are given in Fig. 4i–l. We observe that the radiation direction at the fundamental frequency can be precisely and dynamically controlled. Moreover, all the undesired sidebands are highly suppressed, with measured spectrum purities as high as 95% in all the scanning cases (Supplementary Table 4). The measured radiation patterns with a 20° increment at ω0 are given in Supplementary Fig. 10a. The scanning range of the STC metasurface antenna is from −60° to 60°, with a field of view of 120°. The measured peak gain of the metasurface antenna is 13.5 dBi. The STC metasurface antenna also exhibits good radiation performance as the input frequency ranges from 25.5 to 27.5 GHz (Supplementary Fig. 11). A minimal sketch of this duty-cycle design and slot randomization is given below.
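The sketch is ours, not the authors' code; ξgw, the mean duty cycle and the modulation depth are assumed values. It builds the sinusoidal duty-cycle profile of equation (14) for a beam angle chosen via equation (15), then randomizes each element's ON slots while preserving its duty cycle, which fixes the ω0 aperture amplitude (equation (6)) while scrambling the higher-harmonic phases:

```python
# Illustrative sketch: fundamental-frequency beam scanning via a sinusoidal
# duty-cycle profile (equations (14)-(15)) with per-element slot
# randomization that preserves each duty cycle. Assumed parameter values.
import numpy as np

rng = np.random.default_rng(0)
c_light, f0 = 3e8, 27e9
n_elem, pitch, n_slots = 82, 1e-3, 100
k0 = 2 * np.pi * f0 / c_light
xi_gw = 1.4 * k0                              # assumed guided wavenumber (> k0)

theta_r = np.deg2rad(20.0)                    # desired beam angle at omega_0
Lam = 2 * np.pi / (xi_gw - k0 * np.sin(theta_r))   # equation (15) solved for Lambda

x = np.arange(n_elem) * pitch
tau_bar, M = 0.25, 0.9                        # assumed mean duty cycle, modulation depth
tau = tau_bar * (1 + M * np.cos(2 * np.pi * x / Lam))    # equation (14)

# Randomized STC matrix: each element is ON in a random set of slots whose
# count matches its duty cycle, so the omega_0 amplitude (the time average,
# equation (6)) is unchanged while the higher-harmonic phases are scrambled
sizes = np.rint(tau * n_slots).astype(int)
stc = np.zeros((n_elem, n_slots), dtype=int)
for n in range(n_elem):
    stc[n, rng.choice(n_slots, size=sizes[n], replace=False)] = 1

print(np.allclose(stc.mean(axis=1), sizes / n_slots))   # True: duty cycles preserved
```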
The measurements above verify that the metasurface antenna achieves continuous beam scanning at the fundamental frequency based on the 1-bit STC scheme. Multiharmonic independent control The previous examples manipulate only one specific harmonic frequency and suppress the other undesired harmonics. In some scenarios, such as multiuser or multitarget wireless communications 58, independent and simultaneous control of multiple harmonics is critical. Here we use the Nyquist sampling theorem, a fundamental bridge between the continuous and discrete worlds, to achieve multiharmonic independent control. The Nyquist sampling theorem states that a signal can be perfectly reconstructed if the waveform is sampled at more than twice its highest frequency component. In the spatial version, the requirement translates to a spatial sampling interval smaller than half the operating wavelength across the metasurface antenna aperture. This theorem opens the possibility for the digital STC metasurface to perform multiharmonic conversion and independent control. As shown in Fig. 5a, meta-atoms of the same colour couple and convert the energy from the waveguide into free-space waves at the same intended harmonic frequency. The neighbouring meta-atoms of different colours form a super unit cell that can simultaneously launch free-space waves at multiple harmonic frequencies. The metasurface is free of high-order diffraction for all the desired harmonic frequencies as long as the spatial sampling of the super unit cell satisfies the Nyquist criterion, that is, the lattice of the super unit cell is smaller than half the free-space wavelength. Fig. 5: Multiharmonic independent control. a, Schematic of the STC metasurface antenna for multiharmonic independent control. The input guided waves at frequency ω0 are translated into free-space waves at multiple target harmonic frequencies, whose output angles can be dynamically and independently controlled by the applied digital '0/1' STC matrix. Meta-atoms in different colours are responsible for different harmonic radiations in the corresponding colours. Every three neighbouring meta-atoms form a super unit cell for multiharmonic radiation. b, Design process of the '0/1' STC matrix for multiharmonic independent control. The continuous STC matrices for the m = −1, +2 and +3 separate harmonic conversions are alternately sampled and subsequently combined to form the final STC matrix. c, Dispersion diagram for multiharmonic independent control. STC modulation introduces different tangential momenta to the target harmonics to push them into the light cone for radiation. d–f, Measured radiation patterns when the output beams of the m = −1, +2 and +3 harmonics are scanned to (0°, +30°, +15°) (d), (0°, 0°, 0°) (e) and (0°, −15°, +15°) (f). Without loss of generality, here we demonstrate independent control of three harmonics, that is, m = −1, +2 and +3 with output angles of 0°, −15° and +15°, respectively. The design process for multiharmonic independent control is shown in Fig. 5b. Following the design process in the 'Frequency conversion and beam steering' section, we first obtain the required continuous space–time-modulation sequences for the three separate frequency conversion cases, shown in different colours (Fig. 5b). The digital metasurface alternately samples the three continuous matrices at each meta-atom position and obtains the discrete STC matrices for the three frequency conversion cases. As illustrated in the dispersion diagram (Fig.
5c), each discrete STC matrix is responsible for introducing an independent tangential momentum km that pushes its corresponding harmonic frequency into the light cone and radiates the wave into the intended direction while highly suppressing the other harmonics. The final STC matrix combines the three individual discrete STC matrices (Fig. 5b). Owing to the deep-subwavelength nature of the designed meta-atom, the spatial sampling of the super unit cell is around 0.33λ0 in this design, satisfying the Nyquist sampling condition. As such, the 1-bit STC metasurface antenna can simultaneously and independently control the tri-harmonic radiations as well as their beam output angles. The measured radiation patterns at the three harmonics are illustrated in Fig. 5f, from which we observe that the output angles are located at the desired radiation directions of 0°, −15° and +15° for the −1, +2 and +3 harmonics, respectively. Following the same strategy, we further manipulate the radiation directions of the three harmonics by changing the STC matrix. For instance, all the output angles of the three harmonics can be steered to the broadside direction (θr = 0°). The required STC matrices are given in Supplementary Fig. 12a, and the corresponding measured scattering patterns are shown in Fig. 5e. As another illustrative example, the beam directions are directed to 0°, +30° and +15° for the −1, +2 and +3 harmonics, respectively. Supplementary Fig. 12b displays the STC matrix, and Fig. 5d illustrates the corresponding measured radiation patterns. Again, we see that the output beams at the three harmonics are correctly radiated into the intended directions. These results illustrate the flexibility of our 1-bit STC metasurface antenna for simultaneous and independent multiharmonic control. Conclusions We have reported a waveguide-integrated STC metasurface antenna that can extract and mould guided waves into any desired free-space waves in both momentum and frequency domains. The complex EM wave manipulation is achieved by 1-bit switching between the coupling and non-coupling states of each meta-atom in a predesigned space–time sequence. As proof-of-concept examples, we used the 1-bit STC metasurface antenna to achieve high-efficiency frequency conversion, fundamental-frequency continuous beam scanning and independent control of multiple harmonics. Our 1-bit STC metasurface can offer all the functionalities of a traditional complex heterodyne transmitter, which generally consists of mixers, filters, power dividers, phase shifters and antenna arrays. Compared with free-space-only STMMs, our STC metasurface antenna is free of sideband pollution and features a much simpler coding strategy (1 bit) without degrading its functionality. Moreover, our STC metasurface antenna is directly driven by guided waves, allowing it to be seamlessly integrated with in-plane sources. As well as beam steering, the STC metasurface antenna can also achieve other wave manipulations, including focusing (Supplementary Note 11). Furthermore, the STC metasurface antenna could be extended to a 2D aperture by periodically repeating the one-dimensional metasurface along the y axis, fed by a power-dividing network. The 2D STC metasurface antenna could provide more sophisticated wave manipulations, such as holographic projection 59, orbital angular momentum beam generation 53 and direct information modulation 41.
The concept can also be extended to the terahertz and optical spectra, as well as to acoustic waves, by using alternative materials and active elements such as graphene and electro-optic and photo-acoustic media. Our technology expands the capabilities of metasurfaces, from free space only and waveguide only to their mutual transformation. It also equips integrated devices with free-space EM wave controllability in both the spectral and spatial domains. The technology could be used in applications where integrated devices need agile access to free space, such as next-generation mobile communications, terahertz security screening, optical light fidelity, and light detection and ranging systems. Methods Prototype design The STC metasurface antenna operates at microwave frequencies and adopts the SIW as the waveguide. As shown in Fig. 6b, the SIW uses the upper and lower metal layers along with two rows of metallized vias to form a rectangular dielectric waveguide. The coding meta-atom consists of a rectangular loop slot etched on the top surface of the waveguide to couple energy from the waveguide into free space (Fig. 6b). Two PIN diodes (MACOM MADP-000907-14020x) biased in the same state are mounted across the gaps of the loop slot along the x direction. When the two PIN diodes are OFF with a biasing voltage of 0 V, the slot meta-atom is in the coupling state (corresponding to the coding state '1') and leaks energy into free space. Conversely, the element is in the non-coupling mode (corresponding to state '0') when the PIN diodes are ON with a forward biasing voltage of 1.33 V. The switching capability of the meta-atom between the coupling and non-coupling states is enabled by the conductive and capacitive equivalence of the PIN diodes in the forward-biased and unbiased states, respectively (Supplementary Fig. 3a shows the equivalent circuits of the PIN diode in the ON and OFF states). Fig. 6: Prototype design, modelling and characterization. a, Photograph of the fabricated STC metasurface antenna prototype. b, Configuration of the SIW-based meta-atom. PIN diodes, controlled by the FPGA, are incorporated into each meta-atom to switch the element between the coupling and non-coupling states. c, Simulated radiation patterns of the meta-atom in the coupling ('1') and non-coupling ('0') states. d, Measurement setup of the STC metasurface antenna in a microwave anechoic chamber. We use the metal vias that already exist in the SIW as the common electrode for biasing the two PIN diodes, so that the perturbation of the d.c. bias network on the guided wave is minimized. The bias metal via passes from the edge of the slot meta-atom down through the substrate and is finally connected to an open-ended radial stub designed to choke the RF leakage (Supplementary Notes 1 and 2 provide the detailed d.c. bias network design and parameters). The slot meta-atom was designed to couple energy weakly from the waveguide in the coupling state, so that the meta-atoms do not seriously perturb the waveguide mode. More importantly, downstream meta-atoms can then also receive enough excitation, giving the metasurface a large radiating aperture. To this end, the geometry parameters of the loop slot were carefully tuned to be slightly off resonance at the operating frequency by using the commercial software package CST Studio Suite 2022. The simulated radiation patterns of the meta-atom in the coupling and non-coupling states at 27 GHz are illustrated in Fig. 6c.
The gain of the radiating meta-atom is 12.5 dB higher than that in the non-coupling state. Consequently, the radiating state of each meta-atom can be independently controlled by the ON–OFF switching of the embedded PIN diodes in a predetermined coding sequence. More simulated scattering parameters of the meta-atom in the coupling and non-coupling states are given in Supplementary Fig. 3. To reduce the period of the meta-atom, we use a staggered configuration in which the metasurface consists of two rows of staggered rectangular slots located on the top surface of the waveguide, as shown in the fabricated metasurface prototype (Fig. 6a). The period is p = 1 mm along the x axis, corresponding to 0.09λ0. The metasurface antenna consists of 82 meta-atoms with an overall size of 11.4 mm × 82.0 mm (1.02λ0 × 7.38λ0). The detailed configurations of the metasurface and d.c. control lines are given in Supplementary Note 1. A low-cost FPGA control board (ALTERA Cyclone IV) is adopted to feed the dynamic voltage signals into the meta-atoms. A program recording the STC matrix is preloaded into the FPGA to generate 82 independent control signals. In the experiments, the update time of the control signals from the FPGA is set to 0.5 μs, and the modulation period of the time-coding sequences T is 50.0 μs. The corresponding diode-switching rate is 2 MHz, and the system modulation frequency is 20 kHz in this design. Therefore, the minimum beam-switching time of the current STC metasurface antenna is on the order of 50 μs. A higher scanning rate can be achieved if a high-performance FPGA is adopted, considering that the switching time of the PIN diode (2–3 ns) is much shorter than the update time of the control signals from the FPGA. Prototype simulation To study the scattering and radiation characteristics of the meta-atom, we modelled and simulated a single slot opening fed by the SIW (Fig. 6b) using the commercially available CST Studio Suite 2022 numerical simulator. In the simulation, the PIN diode was modelled as a series resistance R = 8 Ω and inductance L = 30 pH for the ON state, and as a series capacitance C = 0.052 pF and inductance L = 30 pH for the OFF state (Supplementary Fig. 3a). Two wave ports are used to excite the fundamental transverse-electric TE10 mode of the SIW and feed the slot meta-atom. The simulated S parameters are shown in Supplementary Fig. 3b–d. Prototype fabrication Two commercially available Taconic TLY substrates (relative dielectric constant εr = 2.2; loss tangent tan δ = 0.0009) with thicknesses of 1.52 and 0.76 mm are used as the substrates of the SIW and the d.c. bias network, respectively. The two TLY substrates are bonded by a Rogers RO4450F prepreg with εr = 3.52, tan δ = 0.004 and a thickness of 0.101 mm. The SIW waveguide and slot openings were fabricated using a commercial printed circuit board manufacturing process. Afterwards, the PIN diodes (MACOM MADP-000907-14020x) were mounted on top of each loop slot by reflow soldering. Prototype measurements The fabricated metasurface was characterized inside a microwave anechoic chamber (Fig. 6d). A signal generator (Agilent E8267D) launches a monochromatic signal at 27 GHz and connects to port 1 of the STC metasurface antenna. Port 2 of the metasurface antenna is connected to a matched load to absorb the residual power.
A linearly polarized diagonal horn is employed as the receiving (Rx) antenna connected to a vector network analyser (Agilent E8363C) to detect the harmonic signals generated by the STC metasurface. The transmitting and receiving antennas are aligned using lasers. The distance between the transmitting and receiving antennas is 2.5 m, satisfying the far-field condition of the STC metasurface antenna. A three-dimensionally printed fixture is fabricated to mount the STC metasurface and FPGA control board on the computer-controlled rotary stage. The radiated powers at different harmonic frequencies can be separated and obtained by directly reading the peak values at the corresponding frequency lines in the vector network analyser (Supplementary Fig. 14 ). The radiation pattern measurement of the STC metasurface antenna is carried out by automatically rotating the receiver mounted on a rotary stage in the horizontal plane with an angular step of 1°. The radiation patterns shown in Fig. 5d–f are the measured power distributions into free space as a function of direction at different harmonic frequencies. Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request. Code availability The codes that support the theoretical modelling of the metasurface antennas are available from the corresponding authors upon reasonable request.
Antennas that can couple guided waves from in-plane sources into free space and manipulate the extracted waves are crucial to the development of numerous technologies, including wireless communication, optical communication and ranging systems. Metasurfaces, thin films made up of many subwavelength elements, have shown promise for developing these types of antennas. Researchers at City University of Hong Kong and Southeast University have recently created a new metasurface antenna that can extract and manipulate guided waves to create desired free-space waves in both the spatial and frequency domains. This antenna, introduced in a paper published in Nature Electronics, combines two different research advancements, namely amplitude-modulated (AM) leaky-wave antennas and space-time coding techniques. "Our paper is based on the invention of the amplitude-modulated leaky-wave antenna reported in the Ph.D. thesis of Dr. Gengbo Wu at the City University of Hong Kong (CityU) and the groundbreaking research on space-time-coding metasurfaces led by Academician Tie Jun Cui at Southeast University," Prof. Chi Hou Chan, one of the researchers who carried out the study, told TechXplore. "The AM leaky-wave antennas are inspired by the amplitude modulation in classical communications theory." A key advantage of AM leaky-wave antennas is that their radiation pattern can easily be synthesized or tailored for specific uses by changing the antenna's shape and structure. Once an antenna is fabricated, however, its radiation characteristics will typically remain fixed. "Around the same time when the AM leaky-wave antennas were presented in 2020, Dr. Jun Yan Dai joined the CityU research group from Southeast University for a postdoctoral research fellow appointment," Prof. Chan said. "He came from the research group led by Academician Tie Jun Cui and Professor Qiang Cheng and brought us the use of space-time coding, or software control, to dynamically reconfigure the antenna performance." Measurement setup of the space-time-coding metasurface antenna in a microwave anechoic chamber. Credit: Wu et al. Space-time coding, or space-time block coding, is a technique that allows engineers to transmit multiple copies of a data stream across several antennas, using the different versions of the original data to transfer data more reliably. Prof. Chan and his colleagues combined space-time coding techniques with the AM leaky-wave antenna design to enable the modulation of emitted radiation patterns. "By combining the two approaches, we achieve a designated radiation characteristic by controlling the on-off sequences and durations of the switches on the leaky-wave antenna through a space-time matrix that can be generated via analytical means," Prof. Chan explained. "Due to the guided-wave nature of the leaky-wave antenna, unwanted harmonics generated by the switches can be filtered out by the waveguide, leading to sideband suppression. The reported research results are generated with synergistic efforts and complementary expertise by the two research teams at City University of Hong Kong and Southeast University." The new sideband-free space-time-coding metasurface antennas introduced by the researchers ultimately allow users to attain different radiation characteristics through software control, without requiring physical alterations to the antenna's structure. This means that the unwanted sidebands or harmonics that hinder the performance of conventional antennas based on reconfigurable metasurfaces can be eradicated.
"For example, the same frequency beam steering is achievable with our antenna," Prof. Chan said. "As the space-time matrix is linear, different beams radiating at different frequencies can be generated simultaneously by the superposition of different space-time matrices. As the space-time matrix is linear, different beams radiating at different frequencies can be generated simultaneously by the superposition of different space-time matrices." Prof. Chan and his colleagues assessed the effectiveness of their antenna design in a series of tests and found that it enabled high-efficiency frequency conversion, fundamental-frequency continuous beam scanning and the independent control of multiple harmonics. In the future, their antenna could be combined with different base-band modulation schemes to achieve highly efficient and secure communications. "In our next studies, we plan to modify the antenna's structure to achieve more control over the radiation characteristics, including polarization of the radiated beams," Prof. Chan added. "One unique feature of the antennas is that we can control the amplitude and phase distributions on the antenna aperture. Therefore, we can create all kinds of radiation characteristics, including linear sweeping of a focused spot along the length or perpendicular to the antenna surface."
10.1038/s41928-022-00857-0
Physics
Using physics to make better GDP estimates
A. Tacchella et al. A dynamical systems approach to gross domestic product forecasting, Nature Physics (2018). DOI: 10.1038/s41567-018-0204-y Journal information: Nature Physics
http://dx.doi.org/10.1038/s41567-018-0204-y
https://phys.org/news/2018-07-physics-gdp.html
Abstract Models developed for gross domestic product (GDP) growth forecasting tend to be extremely complex, relying on a large number of variables and parameters. Such complexity is not always to the benefit of the accuracy of the forecast. Economic complexity constitutes a framework that builds on methods developed for the study of complex systems to construct approaches that are less demanding than standard macroeconomic ones in terms of data requirements, but whose accuracy remains to be systematically benchmarked. Here we develop a forecasting scheme that is shown to outperform the accuracy of the five-year forecast issued by the International Monetary Fund (IMF) by more than 25% on the available data. The model is based on effectively representing economic growth as a two-dimensional dynamical system, defined by GDP per capita and ‘fitness’, a variable computed using only publicly available product-level export data. We show that forecasting errors produced by the method are generally predictable and are also uncorrelated to IMF errors, suggesting that our method is extracting information that is complementary to standard approaches. We believe that our findings are of a very general nature and we plan to extend our validations on larger datasets in future works. Main In recent years a new approach to macroeconomic analysis and forecasting has been developed in the context of complex systems. This new framework, which goes under the name of economic complexity (EC), provides a concise description of global macroeconomic relations, technological trends and growth dynamics. By contrast with standard macroeconomics, EC is parsimonious in terms of data requirements, both in terms of quantity and diversity. EC leverages the ability of network algorithms to extract information from reliable and standardized datasets on global trade to build a metrics of national industrial competitiveness, called economic fitness (EF). In this work we show how EF can be combined with techniques from dynamical systems to provide a novel approach to gross domestic product growth forecasting that is concise and reproducible, surpasses the accuracy of mainstream institutions such as the IMF, and provides a clearer understanding and interpretation of the dynamics of growth. Besides providing extensive validation, we show how this new approach is complementary to standard ones, and how its combination with mainstream models further improves accuracy. These results imply that further effort should be made in order to integrate EC in the standard framework, and the fact that such accuracy arises from the dynamics of low-dimensional systems can help improve and rethink our vision of the forces that drive long-term growth. Less data is more Models developed for GDP growth forecasting are very complex. These typically make use of a large number of variables (up to hundreds of them), and for each of those there is at least one parameter to be fitted to data, or assigned by assumptions. Such variables range from socio-economic indicators (labour productivity, employment, schooling, population age, and so on) to financial indicators (fiscal policies, interest rates, public debt) and global trade indicators (raw material prices, trade openness, exchange rates) among others 1 , 2 , 3 . To have such complex models, with so many variables, offers the illusion to be able to capture all the components that drive economic growth and therefore to deliver the best possible forecasting. 
However, this is in general not the case, for two reasons: first, it is in practice very hard to find the right ‘recipe’ to mix all these variables (that is, the right functional form and the right parameters) in order to have an accurate forecast of growth. There is in fact scarcely a principle that can drive the choice of how to combine schooling with raw material prices for instance. Second, increasing the number of input variables exponentially increases the dimensionality of the space over which one would like to fit a function (the model) that predicts economic growth 4 : this implies that the ability to systematically sample observations from this space is very limited. In many cases, not only in economics, theoretical modelling and forecasting are not tightly related 5 . Most of the modelling efforts are in the direction of oversimplified representations that aim only at understanding the potential effects of a single variable, or of a limited set of them, in a controlled setting. Although these models may help grasp many isolated implications, they are rarely used directly to predict precise dynamics of complex systems such as, in the case of economics, countries’ growth or crisis. On the other hand, the approaches explicitly developed for forecasting often depart from rigorous theoretical models, and are typically grounded in econometric or statistical techniques. From a physicist’s perspective, such a situation can be mapped to the more general problem of forecasting the evolution of a dynamical system of which we do not know the actual laws of motion. The state of the system, in the case of growth forecasting, is the GDP of the country, together with all the other socio-economic variables that are used in the model, and we suppose the dynamics to be ruled by some kind of coupling among such variables. The only knowledge we have about the system is a collection of previously observed states, together with their evolution after a fixed time delay (possibly only along the GDP direction, if we do not know the evolution of the other indicators). In such a situation, to perform a prediction for the evolution of a new and previously unseen state, the most basic approach would be to look for the most similar state of which we know the evolution from past data (an analogue) and use that evolution as a prediction. This approach goes under the name of ‘method of analogues’, and has a long history of successful and failed applications 6 , 7 , 8 . This model requires only a rule to choose the analogues and a procedure to extract the information from them. In light of basic considerations about dynamical systems, such as those discussed in ref. 4 , it is easy to see that we can only hope that such an approach succeeds—that is, that we are able to identify ‘close enough’ analogues—if the effective dimensionality of the phase space of the dynamics we want to describe is very low: in other words, if the system is likely to be found in a small volume of the phase space, an attractor, so that we can effectively sample it. In this perspective, the addition of more data to a forecasting problem is often detrimental to the actual quality of the prediction, as it diminishes our ability to find relevant analogues. Economic growth as a low-dimensional dynamical system In this work we demonstrate how the dimensionality problem can be solved in the context of GDP forecasting, by completely rethinking the process of data selection.
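The method of analogues is simple enough to sketch in a few lines. The snippet below is a deliberately minimal, generic version on synthetic data (plain nearest-neighbour averaging); the paper's actual scheme, SPSb, instead bootstraps analogues with a Gaussian kernel, as detailed in the Methods.

```python
import numpy as np

def analogue_forecast(x_new, past_states, past_shifts, n_neighbours=5):
    """Forecast the evolution of state x_new as the mean observed evolution
    of its closest analogues among previously seen states."""
    dist = np.linalg.norm(past_states - x_new, axis=1)
    nearest = np.argsort(dist)[:n_neighbours]
    return past_shifts[nearest].mean(axis=0)

# Synthetic example: 2-D states (think log fitness, log GDP per capita)
# whose 5-year shifts follow some unknown smooth dynamics plus noise.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 2))
shifts = 0.1 * np.sin(states) + rng.normal(scale=0.02, size=(500, 2))
print(analogue_forecast(np.array([0.3, -0.4]), states, shifts))
```

As the text notes, such a scheme can only work if close analogues exist, which is why keeping the state space low-dimensional matters more than adding variables.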
In particular we expand on a series of recent works in the field of economic complexity (EC) 9 , 10 , 11 , 12 , 13 , 14 , 15 , and bring this conceptual framework to the level of the state of the art of GDP forecasting, obtaining a substantial improvement in accuracy over the current International Monetary Fund (IMF) five-year forecasting 3 . Such results are obtained over the largest publicly available historical dataset released by the IMF, which covers three windows of five years from 2008–2013 to 2010–2015 (details on the exact specification of the data used are reported in the Methods). We forecast the GDP per capita at purchasing power parity (PPP) in constant dollars. The EF metrics, or fitness, introduced in 2012 10 , provides an effective and extremely synthetic way of quantifying the competitiveness of a country’s economy, and does so by considering only export data. EF is not a simple statistic of the export data but extracts information on how complex the productive structure of a country is, accounting for the complexity of the produced goods in an algorithmic way (see Methods for details), using revealed comparative advantage 16 as a proxy for competitiveness. Export data at the national level is publicly available through the COMTRADE database, which provides bilateral declarations of export flows of single products, categorized in the Harmonized System international classification, and aggregated on a yearly basis, for more than 20 years. This data offers three main advantages. The first is syntheticity. Exported products are in principle a very good proxy of competitiveness. To export a product a country has to compete in the global market, and being a relevant player is a much stronger signal of competitiveness than internal demand. In other words, if an industry is able to compete only in its own country’s internal market, there is no explicit sign of global competitiveness that we can use to infer the presence of a competitive mix of endowments. Moreover, looking at product-level variables gives a perspective that is much closer to the actual ability of the country to produce wealth. Instead of inferring this ability through more indirect indicators (the endowments), looking at products allows one to get direct information on how such endowments interact to produce material wealth. The second is standardization. The product classification is global, and for most of the trade flows two declarations of the total values are available: one from the importer and one from the exporter. This allows a great reduction in fluctuations and increases data quality. The third is homogeneity. Export data does not suffer many of the most problematic limitations of other macroeconomic observables. Being collected at an international level, there is a strong homogeneity in units, gathering methodology, frequency and data availability, both geographically and through time. This allows for much easier and less obscure standardization and regularization procedures on raw data. We leverage the advantages of export data in two ways: first we develop a state-of-the-art workflow for data sanitation and regularization (see Methods section); then on this high-quality dataset, we compute the countries’ fitness, which is a very effective scalar indicator that synthetically describes the industrial competitiveness of a country.
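The fitness computation referred to here is compact. Below is a minimal sketch of the fitness–complexity iteration as defined by equations (1) and (2) in the Methods, applied to a toy binary export matrix; the real pipeline adds the data sanitation described above, and a fixed iteration count stands in for the convergence analysis cited in the Methods.

```python
import numpy as np

def fitness_complexity(M, n_iter=200):
    """Iterate the fitness-complexity map (equations (1) and (2) in the
    Methods) on a binary country x product matrix M, where M[c, p] = 1 if
    country c is a competitive exporter of product p."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(n_iter):
        F_new = M @ Q                    # sum of exported-product complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))  # bounded by the least fit exporter
        F = F_new / F_new.mean()         # normalization step, equation (2)
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy nested export matrix: 4 countries x 5 products
M = np.array([[1, 1, 1, 1, 1],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
print(np.round(F, 3), np.round(Q, 3))  # diversified countries come out fitter
```

The nonlinearity in the product update is deliberate: a product exported even by low-fitness countries cannot be complex, whatever else exports it.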
As has been shown in previous works 1 , 15 , fitness and GDP per capita define a bidimensional space where the dynamics of countries exhibits high levels of regularity. Data sanitation greatly expands the volume of this space where the dynamics is regular (see Supplementary Information). Given the low dimensionality of this space, and the absence of an effective theoretical framework to describe the dynamics of economic growth in the GDP–fitness plane, the situation is ideal for developing a forecasting scheme based on the method of analogues. We produce probabilistic five-year forecasts by repeatedly sampling analogues in the GDP pc –fitness space with a Gaussian kernel, centred on the present state of a country (we call this approach SPSb, see Methods section for details). This results in a distribution of possible outcomes (Fig. 1a). To further refine our forecast, and take into consideration the strong self-correlation of GDP growth 17 , we combine the forecast obtained from the SPSb distribution with the forecast that would be obtained by assuming a growth exactly equal to that of the past five years, as shown in Fig. 1b (see Methods for details). The resulting distribution is the velocity-SPS forecast. Fig. 1: Graphic scheme of the SPS method and its combination with the past growth. a, The Bootstrap Selective Predictability Scheme. To predict the evolution of a point (red) in the GDP pc –fitness plane we perform a bootstrap of previously observed evolutions, weighted by the distance of the analogues’ starting points (black) from the country. b, In the velocity-SPS approach we perform a weighted average of the forecast given by SPSb (black arrow) with the forecast that corresponds to a perfectly autocorrelated dynamics (red arrow). To backtest these approaches we build 482 GDP growth forecasts on three five-year windows: 2008–2013, 2009–2014 and 2010–2015. The forecasts are built with a rigorous out-of-sample approach, namely all the data sanitation is performed using data only up to the beginning of the forecasting window, and the analogues are sampled among transitions observed up to the beginning of the forecasting window. We forecast the growth of all the countries for which we have fitness and GDP data in the corresponding time range and with at least one analogue in a radius of one σ (see definition in the Methods section). As a benchmark of the state of the art, we compare our results with the historical forecasts released by the IMF for exactly the same set of countries and time windows 3 . The quality of the IMF forecast is debated in the economic literature 18 , 19 , but it is nevertheless often used as a valid reference term 20 . These forecasts are the most authoritative publicly available global historical forecast data on a five-year horizon, and this motivates our choice to use them as benchmarks. The choice of a five-year time horizon is a trade-off between the medium–long term, where we expect the GDP–fitness dynamics to be meaningful and stronger than fluctuations, and the availability of benchmark forecasts from the IMF. The 482 data points that we use to test the forecasting accuracy correspond to the maximum possible intersection between the forecasts released by the IMF and the 169 countries for which reliable export data is available on COMTRADE. The results are summarized in Table 1.
We consider two error metrics over the forecasted percentage compound annual growth rate (%CAGR) of the countries: the mean absolute error (MAE) and the root mean square error (RMSE). The error is defined as $$E = \% {\mathrm{CAGR}} - \% {\mathrm{CAGR}}_{\mathrm{forecasted}}$$ An error of 1% means that if the real compound annual growth was 3%, the forecast was 2%. Table 1: Summary of the results. Table 2: Top 20 MAE enhancements combining IMF forecasting with velocity-SPS. We consider three sets of countries (all, predictable and unpredictable), emphasizing how the fitness of a country is a strong predictor of the ability to forecast growth, and how this is a general feature that holds not only for SPS forecasts, but for the IMF as well. The predictable and unpredictable classifications are based uniquely on the fitness of the country at the beginning of the forecasting window. A country c is classified as predictable if log( F c ) > −1.5 and unpredictable otherwise. This distinction is taken from ref. 15 , where the predictable and unpredictable regimes were identified. We also show the performance of a combination of the two SPS methods presented here with the IMF forecasts—that is, what is the error of a forecast that is the simple linear average of the two combinations of forecasted %CAGR: IMF–SPSb and IMF–velocity-SPS. That is, we do not average the errors, but rather compute the error made by an averaged prediction. We comment on the main findings reported in Table 1. On average SPSb performs as well as IMF models, despite being much simpler and easier to interpret, being built out of only two variables. Velocity-SPS represents an improvement in terms of MAE over state-of-the-art IMF forecasts. All the methods perform significantly better in the predictable regime than in the unpredictable regime. This consideration surprisingly holds much more strongly for IMF forecasts, which are fitness-unaware. Velocity-SPS is by far the best approach to forecast the growth of low-fitness countries, with a striking improvement over IMF forecasts in terms of MAE. In general, the IMF and SPS approaches seem to provide information that is to some extent orthogonal, and can be fruitfully combined. Globally the combination of velocity-SPS and IMF brings an improvement in MAE over IMF alone. Similar results hold in the predictable regime as well, but fail in the unpredictable regime: possibly due to the very large IMF error, here velocity-SPS alone is the best-performing strategy in terms of MAE and RMSE. The combination of velocity-SPS and IMF could be easily improved by adding the confidence interval of the IMF’s projections. The SPS approach not only provides state-of-the-art GDP forecasting within a minimal framework, but it is also able to provide a consistent estimate of the forecasting error for each specific forecast. To demonstrate the consistency of the error estimates, we first approximate all the forecast GDP probability distributions as Gaussian distributions, by simply computing the mean and standard deviation of the centres of mass of the bootstrapped samples (see Methods). We then standardize the corresponding realized growth G c , t by computing $$\tilde G_{c,t} = \frac{{E\left( {G_{c,t}} \right) - G_{c,t}}}{{\sigma _{c,t}}}$$ where E ( G c , t ) is the expected growth of country c in time window t + Δ t under the Gaussian approximation and σ c , t is the standard deviation of that Gaussian.
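For concreteness, here is a minimal sketch of the two error metrics and of the standardization just defined; the variable names and the synthetic 482-point check are ours, not the paper's code.

```python
import numpy as np

def forecast_errors(cagr_true, cagr_pred):
    """MAE and RMSE over the forecast errors E = %CAGR - %CAGR_forecasted."""
    e = np.asarray(cagr_true) - np.asarray(cagr_pred)
    return np.abs(e).mean(), np.sqrt((e ** 2).mean())

def standardized_growth(expected, realized, sigma):
    """Standardized growth (E(G) - G) / sigma; should look ~N(0, 1) if the
    Gaussian forecast distributions are consistent with reality."""
    return (np.asarray(expected) - np.asarray(realized)) / np.asarray(sigma)

# Synthetic consistency check: draw 'realized' growth from the forecast
# Gaussians themselves; the standardized values are then standard-normal.
rng = np.random.default_rng(1)
mu = rng.normal(3.0, 1.0, size=482)      # forecast means (%CAGR)
sigma = np.full(482, 0.8)                # forecast standard deviations
realized = rng.normal(mu, sigma)
g = standardized_growth(mu, realized, sigma)
print(forecast_errors(realized, mu))     # (MAE, RMSE)
print(g.mean(), g.std())                 # close to 0 and 1
```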
If the G c , t are distributed as predicted by the SPS Gaussians, then we expect \(\tilde G_{c,t}\sim N(0,1)\) . In Fig. 2a we show a comparison of N (0, 1) with the empirical distribution of \(\tilde G_{c,t}\) , obtained again from strictly out-of-sample forecasts in the three five-year time windows discussed above. In Fig. 2b we show how the uncertainty varies with the fitness of the country: in general, uncertainty is lower for developed countries and larger for countries with low fitness. Interestingly, there seems to be an increase in uncertainty for countries close to the transition from the unpredictable to the predictable regime, at log( F ) = −1.5: this is in line with findings outlining bifurcation-like behaviours of the dynamics of countries close to development barriers, which can make the difference between transitioning to a developed economy or remaining stuck in a ‘poverty trap’ 21 . Fig. 2: Expected error properties of the forecasts. a, Empirical distribution of \(\tilde G_{c,t}\) for the SPSb method (yellow bars) compared with the N (0, 1) expected distribution (blue line). b, Expected forecasting error decreases when fitness increases. The forecasting error is calculated as the standard deviation of the average growth realized by the bootstrapped analogues. Dashed lines mark ±2σ confidence intervals. With these new techniques we are able to easily interpret the situation of a large set of countries, providing a sharp, quantitative answer to many heavily debated questions. In Fig. 3 we show the velocity-SPS forecasts 2015–2020 for three emblematic countries: China, Brazil and Tanzania. In 2015 Brazil’s GDP pc was still slightly above China’s. By standard definitions both would be middle-income economies, in danger of becoming trapped in the so-called ‘middle income trap’ 22 . However, even before running quantitative estimates, the situation of the two countries already appears radically different when projected in the GDP pc –fitness plane. While China seems a radical historical outlier with two consecutive decades of steady growth (actually its trajectory resembles very closely that of Japan after the Second World War), Brazil sits in the middle of a much more crowded part of the space. It is in that crowded space, where countries like Russia and South Africa are also currently placed, that the idea of the ‘middle income trap’ was defined. But by adding only one more dimension, the fitness, we are able to separate extremely well the scenarios of China and Brazil. In the past 20 years, and even today, opinions on how long China’s fast growth would last have often been pessimistic and generally unclear. From Fig. 3 the situation is clear: although China has no analogues in the plane, so this method is not optimal for a quantitative forecast there, its GDP is still much lower than that of the other countries with the same fitness. Thus it can look forward to several more years of fast growth, and not only in terms of GDP, but in industrial competitiveness as well. A country like Tanzania is instead in a much more uncertain situation. Its fitness has been fluctuating, and the uncertainty on where it will be in five years is much higher. That uncertainty is also reflected in the GDP growth estimate, which is in general less precise for low-fitness countries. Fig. 3: Dynamics and forecasts in the GDP–fitness plane.
We show three highlighted examples of dynamics during 1995–2015 for China, Brazil and Tanzania (coloured lines) among all countries (grey lines). Coloured regions represent the probability distributions estimated with velocity-SPS for the 2015–2020 evolution (red: high probability, blue: low probability). In Table 2 we show the top 20 average improvements in terms of MAE of the combined model over the IMF forecasts. These are averaged over the three windows for which we have conducted our tests. There is a clear prevalence of developing countries. This is in line with the idea that fitness describes structural industrial properties, which are the main drivers of growth in developing economies. On the other hand, fitness is less effective in capturing financial and monetary effects, which are best described by the IMF models. We believe that this complementarity is the key to understanding the improvement in predictability of the combined SPS + IMF model. Extensive tests of the statistical robustness of these results are presented in Supplementary Figs. 3 and 4. Outlook In recent years we have witnessed a strong debate about the ability of current macroeconomic models to forecast growth and crises, both internal to the field and in the mainstream media. Many have pointed out the need for a fundamental rethinking of economic modelling that goes in the direction of a more scientific and less dogmatic approach. Economic complexity has been one of the attempts in this direction. The fact that we have shown here how effective these low-dimensional representations are in forecasting growth can have a strong impact on the general thinking about the driving forces of economic growth and on our understanding of the main features of economic development. An important side note is that the effectiveness of our methods means that development paths tend to be similar across countries, with no specific temporal or geographical limitations. This kind of ‘universal’ behaviour inspires the exploration of the scale invariance of the growth mechanisms that can be described by EC, especially towards smaller regional scales. Although regional trade data are at the moment largely unavailable, and in general not sufficiently standardized, we believe that encouraging results at the country level such as those presented here can motivate their collection in the future, at least in some areas of the world, such as the European Union. Some preliminary results show nontrivial behaviours at the level of single firms 11 , 23 . All these observations imply that a deeper and more solid connection between EC and the discipline of macroeconomics in general is the clear next step to pursue. As we have shown, even a trivial averaging is already effective in terms of accuracy. Many of the ideas and methods from EC are currently being used in the field by large macroeconomic institutions such as the World Bank 1 . Nevertheless, we believe that there is tremendous margin for improvement in a more formal and refined integration of EC and dynamical systems into standard macroeconomic practice, both in terms of accuracy and, more importantly, in our understanding of growth and development. Implications can stretch a long way, from a simply more effective approach to data gathering (for example, at the regional level) to a whole new way of designing development policies.
Methods The fitness–complexity algorithm The fitness–complexity algorithm (FC) has been introduced and explored in a recent series of papers 13 , 14 , 24 . It allows one to define a measure of countries’ industrial fitness and products’ complexity. The fitness of a country is defined as the weighted sum of the complexity of the products of which that country is a competitive exporter. The complexity of products is defined in a self-consistent way as a nonlinear function of the fitness of the countries that are competitive exporters of that product. The spirit is to bound the complexity of a product with the fitness of the less complex economy that is able to be a competitive exporter of the product. In formulas the FC algorithm is defined iteratively as $$\left\{ \begin{array}{l}\tilde F_c^{(N + 1)} = \mathop {\sum}\limits_p {\kern 1pt} M_{cp}Q_p^{(N)}\cr \tilde Q_p^{(N + 1)} = \frac{1}{{\mathop {\sum}\limits_c \frac{{M_{cp}}}{{F_c^{(N)}}}}}\end{array} \right.$$ (1) with a normalization step after each iteration: $$\left\{ {\begin{array}{*{20}{l}} {F_c^{(N)} = \frac{{\tilde F_c^N}}{{\left\langle {\tilde F} \right\rangle _c}}} \hfill \cr {Q_p^{(N)} = \frac{{\tilde Q_p^N}}{{\left\langle {\tilde Q} \right\rangle _p}}} \hfill \end{array}} \right.$$ (2) where M cp is a binary matrix whose elements are 1 if the country c is a competitive exporter of product p . A precise definition of this matrix is crucial, as it is the only input that we use for our measures of competitiveness and forecasts of growth. The procedure that we use to compute it out of export data is detailed in the Supplementary Fig. 1 , and the impact of such procedures is shown in Supplementary Fig. 2 and Supplementary Table 1 . The FC iterations lead to a unique fixed point that does not depend on the initial conditions \(F_c^0\) and \(Q_p^0\) and whose convergence properties are extensively discussed in ref. 25 . Extensive discussions about the motivation for the introduction of the FC algorithm and the properties of the fitness and complexity measures can be found in ref. 14 . SPSb and velocity-SPS In order to forecast annualized five-year GDP pc growth, we develop a statistical framework to select and weight analogues in the log(fitness)–log(GDP pc ) plane (FG plane). We define \({\mathbf{x}}_{c,t}\) the position of country c in the FG plane at time t and \(\delta {\mathbf{x}}_{c,t}\) the displacement vector of country c from time t to t + Δ t , on the same plane. We always refer to Δ t = 5 years in this paper. To forecast \(\delta {\mathbf{x}}_{\tilde c,t^ \ast }\) —that is, the evolution of country \(\tilde c\) from time t * to t * + Δ t —we consider the analogues to be the set of available data points \(\mathrm{x}_{c,t}\) on the FG plane for which the five-year evolutions would be known at time t . Namely all the \(\delta {\mathbf{x}}_{c,t}\) where t ≤ t * + δt . We build a probability distribution for \(\delta {\mathbf{x}}_{\tilde c,t^ \ast }\) in two steps: 1. We sample the set of analogues with a probability distribution given by a Gaussian kernel centred in \({\mathbf{x}}_{\tilde c,t^ \ast }\) ; that is, an analogue is sampled with probability $$p\left( {{\mathbf{x}}_{c,t}|{\mathbf{x}}_{\tilde c,t^ \ast }} \right) = \frac{1}{{\sigma \sqrt {2\uppi } }}{\mathrm{e}}^{ - \frac{{\left| {{\mathbf{x}}_{c,t} - {\mathbf{x}}_{\tilde c,t^ \ast }} \right|^2}}{{2\sigma ^2}}}$$ (3) where σ = 0.5. So our definition of analogues is just dependent on the proximity in the FG plane regardless of time or other variables. 
We sample with repetition N analogues, where N is the number of available analogues. 2. We sample 1,000 batches with the above procedure (bootstrap). The global distribution of the sampled displacements is our probabilistic forecast of \(\delta {\mathbf{x}}_{\tilde c,t^ \ast }\) . The mode of the distribution is used as our forecast value, and the standard deviation as the uncertainty on the forecast. This method is described in Fig. 1a and we call it the bootstrapped selective predictability scheme (SPSb). In order to take into consideration the fact that GDP growth tends to be strongly self-correlated, we develop a different version of the SPSb approach that combines the bootstrapping of analogues with the recent GDP growth of the country. We name this approach velocity-SPS. To do so, we perform two forecasts: the SPSb forecast as described above, and a naive forecast where we predict the country to grow exactly as much as it grew in the past five years. To combine the past velocity with the SPSb distribution we use the Gaussian approximation, which we have shown holds in Fig. 2 for the SPSb, taking a σ vel for the velocity equal to the standard deviation of all the past one-year velocity of the given country, so quantifying the spread of the velocity distribution in the past. We use one-year instead of the more intuitive five-year velocity to have more data and a better estimation of the standard deviation. The resulting distribution is a binormal distribution with mean value μ and variance σ 2 $$\mu = \frac{{\frac{{\mu _{\mathrm{sps}}}}{{\sigma _{\mathrm{sps}}^2}} + \frac{{\mu _{\mathrm{vel}}}}{{\sigma _{\mathrm{vel}}^2}}}}{{\frac{1}{{\sigma _{\mathrm{sps}}^2}} + \frac{1}{{\sigma _{\mathrm{vel}}^2}}}}$$ (4) $$\sigma ^2 = \left( {\frac{1}{{\sigma _{\mathrm{sps}}^2}} + \frac{1}{{\sigma _{\mathrm{vel}}^2}}} \right)^{ - 1}$$ (5) These methods naturally provide probabilistic forecasts of GDP pc and fitness growth. For all the results presented in this paper, we considered the marginal distributions along the GDP pc axis. In the Supplementary Information we show as an example what would be the accuracy of linear models relating the same variables to GDP growth. Results are shown in Supplementary Table 2 for comparison. Sources and specifications of GDP data Throughout the analysis, as the GDP dimension in the GDP–fitness space, we use the gross domestic product per capita based on purchasing power parity in constant 2011 international dollars, (GDP, PPP (constant 2011 international $)). These data have been acquired from the World Bank website on 17 June 2017. The same data are used as ground truth for all the out-of-sample error estimations. Such choices are motivated by three reasons: We use GDP PPP in constant dollars to remove growth terms due to inflation and other monetary effects. In terms of smoothness and general data quality, the World Bank GDP data is the best publicly available global dataset that we could find. We want to use a third party dataset as ground truth in order to have an unbiased estimate of IMF errors on the growth rates. The IMF provides five-year historical forecasts in the World Economic Outlook (WEO) dataset 3 . Such forecasts are expressed for the GDP per capita PPP in current dollars. We applied the exchange rates provided by the World Bank to convert such forecasts into the same 2011 constant dollars. We downloaded the data from the IMF website in February 2018. 
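Putting the Methods together, the following sketch implements the SPSb sampling of equation (3) (the Gaussian kernel's normalization constant cancels once the weights are normalized) and the inverse-variance combination of equations (4) and (5). For simplicity it summarizes the bootstrap by its mean rather than the mode used in the paper, and the synthetic analogue data are illustrative.

```python
import numpy as np

def spsb(x_star, states, shifts, sigma=0.5, n_boot=1000, rng=None):
    """Bootstrap Selective Predictability Scheme: resample analogues with
    probability given by a Gaussian kernel centred on x_star (eq. (3)),
    then collect the mean shift of each bootstrap batch."""
    rng = rng or np.random.default_rng()
    w = np.exp(-np.sum((states - x_star) ** 2, axis=1) / (2 * sigma ** 2))
    p = w / w.sum()
    n = len(states)
    batch_means = np.array([
        shifts[rng.choice(n, size=n, replace=True, p=p)].mean(axis=0)
        for _ in range(n_boot)
    ])
    return batch_means.mean(axis=0), batch_means.std(axis=0)

def velocity_sps(mu_sps, sig_sps, mu_vel, sig_vel):
    """Combine the SPSb forecast with the past-growth ('velocity') forecast
    by inverse-variance weighting, equations (4) and (5)."""
    var = 1.0 / (1.0 / sig_sps ** 2 + 1.0 / sig_vel ** 2)
    return var * (mu_sps / sig_sps ** 2 + mu_vel / sig_vel ** 2), np.sqrt(var)

# Toy usage with synthetic analogues in the log(fitness)-log(GDPpc) plane
rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(400, 2))
shifts = 0.1 * states + rng.normal(scale=0.02, size=(400, 2))
mu, sig = spsb(np.array([0.2, 0.1]), states, shifts, rng=rng)
print(velocity_sps(mu[1], sig[1], mu_vel=0.03, sig_vel=0.05))
```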
One important limitation of all these datasets, including the COMTRADE dataset, is that a true strict out-of-sample error estimation is not completely possible, due to ex-post revisions that are conducted on the data. Citing comments written in the WEO files: ‘[WEO] Historical data are updated on a continual basis, as more information becomes available, and structural breaks in data are often adjusted to produce smooth series with the use of splicing and other techniques […] When errors are discovered, there is a concerted effort to correct them as appropriate and feasible.’. In our experience we have observed that significant corrections to trade data in COMTRADE are usually limited within a year after their first release. We have no precise notion of how often and with what delays the World Bank corrects past GDP data. Such caveats are of course important, but since these apply to all the data sources, including our IMF benchmarks, we do not expect our conclusions to be biased in any particular direction. Data availability All the GDP data used in this work is publicly and freely available at . The COMTRADE dataset is available at . Bulk downloads may require a paid subscription. The economic fitness dataset is available at . However, that version of the dataset, although similar to the version used in this work, is not exactly the same, includes fewer countries and has been regularized with a simpler data-sanitation procedure. The most updated version of the fitness dataset will be published together with an upcoming methodological paper, explaining in detail all the manipulations needed to compute fitness from the raw COMTRADE dataset. In the meantime, the dataset used in this work will be provided by the authors upon request. The interested reader can contact the corresponding author via the provided e-mail address.
A team of Italian physicists has used economic complexity theory to produce five-year gross domestic product (GDP) estimates for several countries around the world. In their paper published in the journal Nature Physics, Andrea Tacchella, D. Mazzilli and Luciano Pietronero describe how they applied the theory to economic forecasting and how well it has worked thus far. Currently, economists use a variety of models to produce GDP estimates, which are often used by law and policy makers to inform decisions about future events. Such models typically require a host of variable inputs and are quite complex. In sharp contrast, the estimates made by the Italian team used just two variables: current GDP and another they describe as "economic fitness." The researchers calculated a number for a given country's economic fitness using physics principles applied to export products. Such factors as diversification and the complexity of the products were taken into account, offering a means of gauging the relative strength of a given economy. The idea was to rate a country's economic strength—the wider the range of products being exported and the more complex they were, the more likely GDP was to grow—and to use that rating to forecast future prosperity. The team reports that they have been running their models for approximately six years—long enough for them to see how well their estimates matched actual GDP numbers over time. They report that their estimates were on average 25 percent more accurate than those made by the International Monetary Fund. They also report that their models accurately predicted the booming Chinese economy in 2015 when more traditional models suggested the country was headed for a slowdown. The researchers explain that the field of economic complexity involves studying the behavior of economies over time and the factors that cause them to change. Doing so includes using tools such as those that have been developed to measure turbulence in fluids and traffic jams. The philosophy of such research, they explain, revolves around the idea that complex systems with a large number of elements that interact in non-linear ways tend to have emergent properties. Learning to understand such properties, they further note, can offer insights into relationships such as the one between exports and GDP trends.
10.1038/s41567-018-0204-y
Biology
GM—'the most critical technology' for feeding the world, expert says
Nina V Fedoroff, Food in a future of 10 billion , Agriculture & Food Security 2015 , DOI: 10.1186/s40066-015-0031-7
http://dx.doi.org/10.1186/s40066-015-0031-7
https://phys.org/news/2015-08-gmthe-critical-technology-world-expert.html
Abstract Over the past two centuries, the human population has grown sevenfold and the experts anticipate the addition of 2–3 billion more during the twenty-first century. In the present overview, I take a historical glance at how humans supported such extraordinary population growth first through the invention of agriculture and more recently through the rapid deployment of scientific and technological advances in agriculture. I then identify future challenges posed by continued population growth and climate warming on a finite planet. I end by discussing both how we can meet such challenges and what stands in the way. Background Today we have enough food to meet the world’s needs. Indeed, we have an extraordinary global food system that brings food from all over the planet to consumers who can afford to buy it. The food price spike of 2008 and the resurgence of high food prices in recent years have had little impact on the affluent citizens of the developed world who spend a small fraction of their income on food. By contrast, food prices have a profound impact on the world’s poorest people. Many of them spend half or more of their income on food. During the food price crisis of 2008, there were food riots in more than 30 countries. Unrest in the Middle East and North Africa tracks with the price of food, as is dramatically illustrated in Fig. 1 . Spiraling food prices drive the world’s poorest into chronic hunger even in a world of relative plenty. Fig. 1 Food price spikes are correlated with increases in food riots. Red dashed vertical lines correspond to beginning dates of “food riots” and protests associated with the major recent unrest in North Africa and the Middle East. The overall death toll is reported in parentheses . The blue vertical line indicates the date on which the authors of the cited report [ 1 ] submitted a report to the U.S. government warning of the link between food prices, social unrest, and political instability. The inset shows the FAO Food Price Index from 1990 to 2011. (The figure is reproduced with permission from [ 1 ]). Full size image Does this mean we need worry only about poverty, not about the global food supply, as suggested in a recent editorial by the influential New York Times food commentator Mark Bittman [ 2 ]? Analyses of the most recent United Nations projections indicate that the human population will expand from roughly 7.2 billion today to 9.6 billion in 2050 and 10.9 billion by 2100 [ 3 , 4 ]. Current yield growth trends are simply insufficient to keep up with growing demand [ 5 ]. As well, the rapid expansion of agriculture over the past century to feed today’s population has had a devastating impact on biodiversity [ 6 ]. As a result, there is an acute need to intensify agricultural productivity, while at the same time decreasing the deleterious impact of agriculture on biodiversity and the services provided by complex ecosystems [ 7 ]. Historical perspective For most of our evolutionary history, our numbers were small and we were mobile hunter-gatherers. We spent our time finding and capturing enough food to feed ourselves and our closest kin. Then sometime between 10 and 20,000 years ago—maybe even more—that started to change. We began to shape plants and animals to our own advantage and settled down to grow and herd them [ 8 ]. The process by which we have modified plants and animals to suit our needs, traditionally called “domestication,” is a process of genetic modification [ 9 ]. 
Early peoples selected variant organisms—plants, animals, and microbes—with useful traits, such as seeds that adhere to plants until they are harvested and animals tame enough to herd. Domestication is a process of modification that is possible because of the genetic variation constantly arising in all living organisms. While hunter-gatherers were quite sophisticated in their resource management, it was systematic planting and harvesting of crops that marks the origin of what we now call “agriculture” [ 10 ]. Agriculture allowed people to produce more food than they consumed; cities and civilization followed. Thus human civilization emerged because we figured out how to produce surplus food. We could feed artisans and scribes and warriors and kings. For the next 10 millennia, people built cities and civilizations, wore out the land, invaded their neighbors or abandoned the cities and civilizations, eventually rebuilding on fresh land [ 11 ]. It was often the fertility of the land that determined how long a civilization lasted. Plants extract nutrients from the soil and crop yields decline, making it harder and harder to produce enough food as the number of people grows [ 8 ]. Concern about access to sufficient food, today called “food security,” is as old as mankind. Thomas Malthus’ famous Essay on Population, published in 1798, crystallized the problem of balancing food and human population for the modern era [ 12 ]. Malthus believed that humanity was doomed to food insecurity because our numbers increased exponentially, while our ability to produce food could only increase linearly. Curiously, Malthus penned his essay at about the time that science began to play a major role in boosting agricultural productivity. Late eighteenth century milestones were Joseph Priestley’s discovery that plants emit oxygen and Nicholas-Théodore de Saussure’s definition of the chemical composition of plants [ 13 , 14 ]. Malthus could not have envisioned the extraordinary increases in productivity that the integration of science and technology into agricultural practice would stimulate over the ensuing two centuries. Both organic- and mineral fertilization of plants have been practiced since ancient times. Farmers knew that certain chemicals and biological materials, ranging from fish and oyster shells to manure and bones, stimulated plant growth [ 15 , 16 ]. Justus von Liebig made important contributions to the study of plant nutrient requirements, understanding that biological sources of nitrogen could be replaced with purely chemical sources. But supplying nitrogen in the forms that plants use remained a major limitation until the development of the Haber–Bosch process for fixing atmospheric nitrogen early in the twentieth century [ 17 ]. Today, agriculture in the developed world relies primarily on chemical fertilizers. Indeed, the global human population could not have grown from roughly 1 billion at the turn of the nineteenth century to today’s 7.2 billion without synthetic nitrogen fertilizer. Crop domestication Humans practiced genetic modification long before chemistry entered agriculture, transforming inedible wild plants into crop plants, wild animals into domestic animals, and harnessing microbes to produce everything from cheese to wine and beer. 
Oddly, it is only our contemporary methods of bending organisms’ genetic constitution to suit our needs that are today recognized as genetic modification, known in common parlance by the abbreviations “GM” (genetically modified), “GMO” (genetically modified organism) or “GE” (genetically engineered). Yet all of the useful, heritable traits nurtured by people in organisms constitute “domestication” and all are the result of genetic modifications. Each microbe, crop and animal has its own interesting history. To take just one example, a fundamental trait that distinguishes wild from domesticated plants is the retention of mature seeds on the plant. Plants have many mechanisms for dispersing their seeds, but it is much easier for people to harvest seeds that remain attached to the plant at maturity. Hence one of the earliest steps in grain crop domestication was the identification of mutations—genetic changes—that prevent seed dispersal [ 18 ]. Corn, also known as maize, remains one of our most spectacular feats of genetic modification. Its huge ears, packed with starch and oil, provide one of humanity’s most important sources of food and feed. Corn bears little resemblance to its closest wild relative, teosinte. Indeed, when teosinte was first discovered in 1896, it was assigned to a different species [ 19 ]. By the 1920s, it was known that teosinte and corn readily produce fertile hybrids, but controversies about their relationship and about the origin of corn continued throughout most of the twentieth century. The key genetic changes that transformed teosinte into corn appear to have happened in the Balsas River Valley in Mexico some 9000 years ago [ 20 ]. The mutations that converted teosinte, a grass with hard, inedible seeds, into modern corn altered just a handful of genes that control plant architecture and the identity of reproductive organs. Remarkably, once these mutations had been brought together in an early corn plant, they stayed together and spread very rapidly, moving from Mexico into the American southwest by 3000 years ago [ 20 ]. Among the many other traits altered during domestication of plants are the size and shape of leaves, tubers, berries, fruits and grains, as well as their abundance, toxicity, and nutritional value. The changes are often in genes coding for proteins that regulate the expression of many other genes [ 9 ]. Differences in nutrient composition among varieties of the same crop are caused by mutations in genes coding for proteins in a number of different biosynthetic pathways. Thus, for example, sweet corn has mutations that prevent the conversion of sugar to starch in the kernel [ 21 ]. Modern crop improvement The genetic revolutions of the twentieth century boosted crop productivity immeasurably. Austrian monk Gregor Mendel’s pioneering observations on inheritance were published in 1865, but did not get wide attention until a half-century later [ 22 ]. A simple demonstration project to illustrate Mendelian inheritance led to the re-discovery of hybrid vigor, a long-known phenomenon whose incorporation into crop breeding resulted in a dramatic expansion of the corn ear and, thereby, crop yield [ 23 ]. However, when corn hybrids were first introduced in the U.S. during the 1930s, they faced resistance and criticism similar to that leveled at contemporary GM crops. The hybrids were complex to produce and agriculture experiment stations were not interested. Eventually a company was formed to produce hybrid seed. 
But farmers accustomed to planting seed from last year’s crop saw no reason to buy it. It was only when farmers realized the yield benefits and the drought-resistance of hybrid corn during the 1934–1936 dust-bowl years that farmers began to adopt hybrid corn rapidly [ 24 ]. Techniques for accelerating mutation rates with radiation and chemicals and through tissue culture were developed and widely applied in the genetic improvement of crops during the twentieth century [ 25 ]. These methods introduce mutations rather indiscriminately and require the growth of large numbers of seeds, cuttings or regenerants to detect desirable changes. Nonetheless, all of these approaches have proved valuable in crop improvement and by the end of the twentieth century, more than 2300 different crop varieties, ranging from wheat to grapefruit, had been developed using radiation and chemical mutagenesis [ 25 ]. Mechanization of agriculture A major development with impact Malthus could not have envisioned is the mechanization of agriculture. Human and animal labor provided the motive force for agriculture throughout most of its history and continues to do so in many less-developed countries. The invention of the internal combustion engine at the turn of the twentieth century led to the development of small, maneuverable tractors. The mechanization of plowing, seed planting, cultivation, fertilizer and pesticide distribution, and harvesting accelerated in the US, Europe, and Asia following World War II [ 26 ]. Agricultural mechanization drove major demographic changes virtually everywhere. In the U.S., 21 % of the workforce was employed in agriculture in 1900 [ 27 ]. By 1945, the fraction had declined to 16 % and by the end of the century the fraction of the population employed in agriculture had fallen to 1.9 %. At the same time, the average size of farms increased and farms increasingly specialized in fewer crops. This profound demographic shift from agrarian to urban underlies the development of today’s attitudes about food and farming in developed countries. Today the vast majority of the developed world’s population is urban and far removed from primary food production. The Green Revolution Malthus penned his essay when the human population of the world stood at less than a billion. The population tripled over the next century and a half. As the second half of the twentieth century began, there were neo-Malthusian predictions of mass famines in developing countries that had not yet experienced science- and technology-based advances in agriculture. Perhaps the best known of the mid-century catastrophists was Paul Ehrlich, author of The Population Bomb [ 28 ]. Remarkably, the extraordinary work of just a handful of scientists and their teams, principally plant breeders Norman Borlaug and Gurdev Khush, averted the widely predicted Asian famines [ 29 ]. The Green Revolution was based on the development of dwarf rice and wheat varieties that responded to fertilizer application without falling over (lodging). Subsequent breeding for increased yield continued to improve the productivity of these crops by as much as 1 % per year. Perhaps most remarkably, the Green Revolution and other technological advances reduced the fraction of the world’s hungry from half to less than a sixth, even as the population doubled from 3 to 6 billion. These accomplishments earned Borlaug a well-deserved Nobel Prize. Curiously, the Green Revolution is often vilified today. 
Genetic modification of crops The equally revolutionary molecular genetic advances that began in the 1960s led to the development of new methods of crop improvement. The basic methodology lies in the construction of hybrid DNA molecules designated “recombinant DNA (R-DNA)” because they consist of a piece of bacterial or viral DNA combined with a piece of DNA from a different kind of organism, plant or animal [ 30 ]. The ability to multiply such hybrid DNA molecules in bacteria made it possible to develop the DNA sequencing techniques that underlie today’s genomic revolution. As well, techniques were developed to introduce genes into plants using either the soil bacterium Agrobacterium tumefaciens , which naturally transfers a segment of DNA into a plant cell, or mechanical penetration of plant cells using tiny DNA-coated particles [ 31 ]. This combination of methods and knowledge made it possible to transfer a well-understood segment of genetic material from either the same or a related plant or from a completely unrelated organism into virtually any crop plant, creating what is known as a “transgenic” plant. Because genes work the same way in all organisms, this made it possible to introduce a desirable trait, such as disease- or pest-resistance, without the extensive genetic and epigenetic disturbance attending what we now consider to be the “conventional” crop improvement techniques such as hybridization and mutagenesis [ 32 – 34 ]. Indeed, recent comparisons have revealed plant modification by molecular techniques has less impact on gene expression, protein, and metabolite levels than do conventional genetic crosses [ 35 – 37 ]. Several crop modifications achieved using these methods are now in widespread use. Perhaps the best known of these are crop plants containing a gene from the soil bacterium, Bacillus thuringiensis , long used as a biological pesticide. The gene encodes a protein that is toxic to the larvae of certain kinds of insects, but not to animals or humans [ 38 ]. Such a toxin gene is often called the “Bt gene,” but is actually a family of related toxin genes from a group of closely related bacteria and these are increasingly used in combinations to decrease the probability of resistance developing in the target insects, an approach that has been dubbed gene “stacking.” Herbicide tolerance is another widely accepted GM crop modification. Among the most common herbicides in use today are compounds that interfere with the production of certain amino acids that plants synthesize, but animals do not [ 39 ]. Such herbicides, therefore, kill plants, but have low or no toxicity for animals or humans. Herbicide-tolerant crops make it possible to control weeds without damaging the crop and without tilling the soil. Such crops have been derived through natural mutations and induced mutations, as well as by introduction of genes from either bacterial sources or plant sources. Today, herbicide-tolerant varieties of many crops, most importantly soybeans and canola, are widely grown [ 40 ]. Papayas resistant to papaya ringspot virus (PRSV) saved the Hawaiian papaya industry and are the only such GM crop to emerge from public sector GM research. Papaya ringspot virus is a devastating insect-borne viral disease that wiped out the papaya industry on the Hawaiian island of Oahu in the 1950s, forcing its relocation to the Puna district of the big island. PRSV was first detected in the Puna district in 1992; by 1994 it was widespread and threatening the industry. 
A project initiated in 1987 introduced a gene from the PRSV into papayas based on reports that introducing a viral gene could make a plant resistant to the virus from which the gene came [ 41 , 42 ]. Transgenic seeds were released in 1998; by 2000, the papaya industry was returning to pre-1995 levels. This remarkable achievement of disease resistance enhanced a virus protection mechanism already present in the plant, much as vaccination protects people and animals from infection by pathogens [ 43 ]. New methods are rapidly being developed that promise to further increase the specificity and precision of genetic modification. These techniques capitalize on growing knowledge of the dynamic processes underlying genome maintenance, particularly the repair of breaks in the genetic material, DNA. Known under the general rubric of “site-directed nuclease (SDN)” technology, this approach uses proteins (or protein-nucleic acid complexes) that seek out, bind to, and cut specific DNA sequences, introducing breaks in the DNA at one or a small set of sequences targeted for modification [ 44 ]. Repair of such DNA cuts by natural cellular processes results in precisely targeted genetic changes rather than the random ones introduced by older methods of mutagenesis. This method can also be used to introduce a gene at a pre-identified site in the genome or to modify a resident gene precisely, something that could not be done with pinpoint specificity and precision by R-DNA methods. As well, such genetic changes can often be made without creating a transgenic plant. The changes are the same at the molecular level as those that occur in nature or can be induced by older mutagenic techniques. What is new is that the genetic changes introduced by SDN techniques are not random, but confined precisely to the gene or genes selected by the breeder. Adoption of GM crops GM crops have been adopted at unprecedented rates since their commercial introduction in 1996. In 2014, GM crops were grown in 28 countries on 181.5 million hectares [ 45 ]. More importantly, more than 90 % of the 18 million farmers growing biotech crops today are smallholder, resource-poor farmers. The simple reasons that farmers migrate to GM crops are that their yields increase and their costs decrease. A recent meta-analysis of 147 crop studies conducted over a period of 20 years concluded that the use of GM crops had reduced pesticide use by 37 %, increased crop yields by 22 %, and increased farmers’ profits by 68 % [ 46 ]. The vast majority of GM hectarage is devoted to the growing of GM corn, soybeans, cotton, and canola with either Bt toxin-based pest resistance or herbicide tolerance traits. The reasons for the narrow GM crop and trait base to date lie in a combination of the economic, regulatory, and legal issues discussed below. While some resistance to the Bt toxin has developed, it has not been as rapid as initially feared, and second-generation, two-Bt-gene strategies to decrease the probability of resistance are already being implemented [ 47 ]. Predicted deleterious effects on non-target organisms, such as monarch butterflies and soil microorganisms, have either not been detected at all or are insignificant [ 48 ]. The better cropping practices supported by GM crops have decreased the availability of the milkweed on which monarch larvae feed [ 49 ]; hence efforts are being directed to the establishment of milkweed preserves.
The development of herbicide tolerance in previously susceptible weeds, while not unique to GM crops, is becoming an increasing problem because of the widespread use of glyphosate with glyphosate-tolerant GM crops [ 50 ]. Although the pace of herbicide discovery has slowed markedly since the 1980s, new combinations of herbicide-tolerant crops and older herbicides are likely to come on the market in the near future [ 51 ]. The overwhelming evidence is that the GM foods now on the market are as safe, or safer, than non-GM foods [ 37 , 52 ]. Moreover, there is no evidence that the use of GM techniques to modify organisms is associated with unique hazards. The European Union alone has invested more than €300 million in GMO biosafety research. Quoting from its recent report, “The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than, e.g. conventional plant breeding technologies.” ( ). Every credible scientific body that has examined the evidence has come to the same conclusion ( ). Despite occasional one-of-a-kind, often sensationalized reports, the vast majority of feeding studies have identified no meaningful nutritional differences between GM and non-GM food and feed. Indeed, and perhaps unsurprisingly, comparative molecular analyses show that GM techniques have less impact on the genetic and molecular constitution of crop plants than conventional plant breeding techniques [ 37 ]. This is because conventional breeding mixes whole genomes comprising tens of thousands of genes that have previously existed in isolation, while GM methods generally add just a gene or two to an otherwise compatible genome. Thus the probability of introducing unexpected genetic or epigenetic changes is much smaller by GM methods than by conventional breeding methods. Crops modified by GM techniques are also less likely to have unexpected genetic effects than crops modified by the more conventional techniques of chemical and radiation mutagenesis methods simply because of the greater precision and predictability of molecular modification. Taken together with the closer scrutiny paid during product development to the potential for toxicity and allergenicity of novel proteins expressed by GM methods, GM crops are arguably the safest new crops ever introduced into the human and animal food chains. Indeed, to date, the only unexpected effects of GM crops have been beneficial. Many grains and nuts, including corn, are commonly contaminated by mycotoxins, which are toxic and carcinogenic compounds made by fungi that follow boring insects into the plants. Bt corn, however, shows as much as a 90 % reduction in mycotoxin levels because the fungi that follow the boring insects into the plants cannot get into the Bt plants [ 53 ]. There is also evidence that planting Bt crops reduces insect pressure in non-GM crops growing nearby. The widespread adoption of Bt corn in the U.S. Midwest has resulted in an area-wide suppression of the European corn borer [ 54 ]. Future challenges in agriculture Since Malthus’ time, the human population has expanded more than sixfold. Through science and technology, agriculture in developed nations has become far less labor-intensive and has kept pace with population growth worldwide. Today, fewer than 1 in 50 citizens of developed countries grows crops or raises animals for food. 
But after a half-century’s progress in decreasing the fraction of humanity experiencing chronic hunger, the food price and financial crises commencing in 2008 have begun to swell the ranks of the hungry once more [1, 55]. Population experts anticipate the addition of another 2–4 billion people to the planet’s population within the next 3–4 decades [4, 56, 57], but the amount of arable land has not changed appreciably in more than half a century [58]. Moreover, arable land continues to be lost to urbanization, salinization, and desertification. Supplies of fresh water for agriculture are under pressure, as well. Today, about a third of the global population lives in arid and semi-arid areas, which cover roughly 40% of the land area. Climate scientists predict that in coming decades, average temperatures will increase and dryland area will expand. Inhabitants of arid and semi-arid regions of all continents are extracting ground water faster than aquifers can recharge, and often from fossil aquifers that do not recharge [59]. Yet the major crops that now feed the world—corn, wheat, rice, soy—require a substantial amount of water. It takes 500–2,000 L of water to produce a kilogram of wheat, and the amount of water required to produce a kilogram of animal protein is 2–10 times greater [60]. Increasing average temperatures and decreasing fresh water availability present critical challenges to agricultural researchers to increase crop performance under suboptimal conditions. Rapid advances in our knowledge of plant stress responses, together with improved molecular tools for plant breeding, have already resulted in the introduction of new drought-tolerant crop varieties, both GM and non-GM [61]. New varieties of drought-tolerant maize produced using modern breeding approaches that employ molecular markers, but do not generate transgenic plants, have been released in the North American market by Syngenta and DuPont Pioneer, while Monsanto and BASF have jointly developed MON87460 (aka Genuity DroughtGard Hybrids), a drought-tolerant maize variety expressing a cold-shock protein from the bacterium Bacillus subtilis, introducing it in the U.S. in 2013. However, it should be kept in mind that suboptimal “stress” conditions necessarily move plants away from their peak ability to use sunlight to convert carbon dioxide, water, and other simple compounds into the carbohydrates and proteins that feed people and animals. Stress-tolerant varieties generally perform little or no better than less tolerant varieties under optimal conditions; they simply survive better under suboptimal conditions, losing less of their yield potential. More with less Why do we need to do more with less? The FAO has estimated that we will need to increase the amount of food produced by 70% by 2050 [62]. We will need more food, feed, and fiber both because there will be more people and because they will be richer. Among the things that people demand as they become more affluent is more meat in their diet. Producing more meat requires growing more grain. But expanding the land under cultivation to increase the grain supply is not sustainable. All the best land is already under cultivation, and preserving what remains of our planet’s rich biological heritage by leaving more land unplowed is a growing priority. Indeed, modeling exercises reveal that within just a few decades, the planet’s natural resources will be insufficient to support developed-world consumption patterns [63].
As well, the negative impact of climate change on agriculture is becoming increasingly apparent and is predicted to worsen [64, 65]. While more agriculturally suitable land may become available at greater distances from the equator as the climate warms, there is no guarantee that the productivity of these lands will compensate for productivity losses in the more populous equatorial regions. Whether our current highly productive food and feed crops can be modified and adapted to be substantially more productive at the higher temperatures expected, or at more northern latitudes with shorter growing seasons, is not yet known. Substantial research will be required not just on the salt, drought, and temperature tolerance of existing crop plants, but also on the domestication of plants that are not now used in agriculture but are capable of growing at higher temperatures and on saline water. In today’s highly productive developed-world agriculture, fertilizers and other chemicals are applied and used inefficiently, themselves becoming pollutants in our air, land, and water. As well, some of the chemicals used in both conventional and organic agriculture to control pests and diseases are toxic to people and to wildlife. Transitioning to more sustainable agricultural practices while doubling the food and feed supply, even as we must increasingly cope with the negative effects on agricultural productivity of a warming climate, is likely to be the greatest challenge of the twenty-first century [66, 67]. Impediments to sustainable intensification of agriculture To live sustainably within planetary constraints, we must grow more on the same amount of land using less water, energy, and chemicals. The molecular genetic revolution of the late twentieth century that powered the development of precise GM methods is the most critical technology for meeting these challenges. Paradoxically, although the use of GM technology has been accepted in medicine, it has evoked an almost unprecedented level of societal controversy in the realm of food production, resulting in the proliferation of regulatory and legal constraints that threaten to cripple its use in achieving a more sustainable existence for humanity on planet Earth. While productivity gains based on earlier scientific advances can still increase food production in many countries, particularly in Africa, such gains appear to have peaked in most developed countries, where recent increases have been achieved largely through adoption of GM crops [68]. The knowledge and GM technology are available to address these challenges throughout the world, but there are political, cultural, and economic barriers to their widespread use in crop improvement. As noted earlier, there is a global consensus among scientific societies that GM technology is safe. However, the political systems of Japan and most European and African countries remain opposed to growing GM crops. Many countries lack GM regulatory systems or have regulations that prohibit growing and, in some countries, importing GM food and feed. Even in countries such as the U.S. that have a GM regulatory framework [69], the process is complex, slow, and expensive. U.S. developers must often obtain the approval of three different agencies, the Environmental Protection Agency, the U.S. Department of Agriculture (USDA), and the Food and Drug Administration, to introduce a new GM crop into the food supply.
Bringing a GM crop to market, including complying with the regulatory requirements, was estimated to cost $135 million in 2011 [70]. The effort, time, and cost of regulatory approval have dramatically contracted the pipeline of GM innovations that would directly benefit consumers [71]. In Europe, the regulatory framework is practically nonfunctional; only one GM crop is currently being grown, and only two others have gained approval since 1990, when the EU first adopted a regulatory system [72]. The EU recently agreed to allow member countries to decide individually whether to permit cultivation of an EU-approved GM crop. The impact of this decision will not be known for some time, but it is likely to further complicate trade and food aid as crops approved in one country await regulatory approval in others [73]. Moreover, the increasing politicization of risk assessment makes it unlikely that uniform global safety standards for GM crops and animals will emerge in the foreseeable future [74]. European influence has been especially detrimental in Africa, causing African leaders to be excessively precautionary in approving GM crops and even to ban the import of GM grain sent to alleviate famine [75]. However, it is the case of Golden Rice, genetically modified to produce the vitamin A precursor β-carotene, that provides the paradigmatic example of an opportunity foregone to use GM technology to address a major global malnutrition issue [76]. Severe vitamin A deficiency results in blindness, and half of the roughly half-million children who are blinded by it annually die within a year. Vitamin A deficiency also compromises immune system function, exacerbating many kinds of illnesses. It is a disease of poverty and poor diet, responsible for 1.9–2.8 million preventable deaths annually, mostly of children under 5 years of age and women [77, 78]. Two scientists, Ingo Potrykus and Peter Beyer, and their teams developed a rice variety whose grains accumulate β-carotene, which our bodies convert to vitamin A. Collaborating with the International Rice Research Institute over a period of a quarter century, they developed and tested a transgenic rice variety that expresses enough β-carotene that a few ounces of cooked rice can eliminate the morbidity and mortality of vitamin A deficiency [79]. Yet Golden Rice remains mired in controversy and has been tied up in the regulatory process for more than a decade [80]. Millions suffer and die while Golden Rice remains in test plots. The increasing politicization of risk determination raises questions about the underlying motivations [74]. NGOs, most vocally Greenpeace and Friends of the Earth, appear to have conducted vigorous campaigns of misinformation about GMOs, first in Europe and then around the world [81–85]. Greenpeace remains adamantly against even the most benign and beneficial uses of GM technology in agriculture, such as the development and distribution of Golden Rice. Given the weight of scientific evidence to the contrary, it is difficult to avoid the conjecture that its continued opposition to a harmless and beneficial technology has more to do with preserving its funding base than benefitting humanity [84, 85]. Perhaps the most counterproductive development is the increasing vilification of GM foods as a marketing tool by the organic food industry [86].
The organic food industry finds its roots in rural India, where Sir Albert Howard, arguably the father of “organic” agriculture, developed composting methods capable of killing the pathogens that abound in animal manures and human wastes so that these could be used safely as fertilizers in agriculture [30]. Even as synthetic fertilizers were increasingly being used around the world, the organic movement grew in the UK and Europe, eventually finding American champions in Jerome Rodale, founder of the Rodale Press, and pesticide crusader Rachel Carson, author of Silent Spring, the book that has been credited with starting the environmental movement [87]. With the establishment of organic retailers, such as Whole Foods and Wild Oats, the organic food business grew rapidly and certification organizations proliferated. To bring some uniformity to what was being certified as “organic,” Congress established the National Organic Standards Board (NOSB) under the USDA through the Organic Food Production Act and charged it with developing national standards [30]. These were eventually published in 2000 and are generally referred to as the Organic Rule. According to the NOSB, organic agriculture is a production system that makes minimal use of off-farm inputs and seeks to enhance “ecological harmony.” The Organic Rule expressly forbids the use of GM crops, antibiotics, and synthetic nitrogen fertilizers in crop production and animal husbandry, as well as food additives and ionizing radiation in food processing. Organic food is food produced in compliance with the Organic Rule; the USDA’s Organic Seal is a marketing tool that makes no claims about food safety or nutritional quality. But a number of organic food industry marketers have systematically used false and misleading claims about the health benefits and relative safety of organic foods compared with what are now called “conventionally grown” foods [86]. Indeed, such organic marketers represent conventionally grown foods as swimming in pesticide residues, GM foods as dangerous, and the biotechnology companies that produce GM seeds as evil, while portraying organically grown foods as both safer and more healthful. Recent “labeling” campaigns aim to promote the organic food industry by conveying to consumers the message that food containing GM ingredients is dangerous [86]. The future In 1798, Thomas Malthus told us that humanity was doomed to famine and strife because population growth would always outstrip our ability to produce food [12]. The human population of the Earth then numbered about a billion. The ensuing two centuries have seen a more than sevenfold expansion of the human population as a result of rapid scientific and technical developments in agriculture, and a decline in the number of chronically hungry from half of humanity to about a sixth. But as Nobel Laureate Norman Borlaug, Father of the Green Revolution, observed in his Nobel Prize lecture, “We may be at high tide now, but ebb tide could soon set in if we become complacent and relax our efforts.” Said another way, agriculture must ever race to maintain today’s status quo. And yet agriculture is now threatened, in a sense, by its very success. The demographic shift of population from rural to urban areas has been particularly dramatic in the developed world, with less than 2% of the population supplying the food for the rest today.
But the very fact that we are largely urban dwellers, with access to abundant produce through a global food system, blinds us to the basics of agriculture and makes us vulnerable to the increasingly strident opponents of modern agriculture, who use fear to promote their economic interests. Will we have the wisdom to overcome our fear of new technologies and re-invest in the kind of agricultural research and development that can simultaneously increase agricultural productivity and decrease its environmental impact, so that we might preserve what remains of our extraordinary biological heritage? Can we continue to keep food prices down through agricultural innovation based on modern genetic methods and better farm management? Or will poverty-based social instability continue to spread and consume governments as population continues to climb while climate warming squeezes agriculture? The answers to these questions will, for better or worse, shape our future civilizations.
Abbreviations
DNA: deoxyribonucleic acid
EU: European Union
FAO: the U.N. Food and Agriculture Organization
GE: genetically engineered
GM: genetically modified
GMO: genetically modified organism
NGO: non-governmental organization
NOSB: National Organic Standards Board
PRSV: papaya ringspot virus
R-DNA: recombinant DNA
SDN: site-directed nuclease
UK: United Kingdom
USDA: U.S. Department of Agriculture
A former adviser to the US Secretary of State says that genetic modification (GM) is the most critical technology in agriculture for meeting the challenges of feeding a growing global population, writing in the open access journal Agriculture & Food Security. Nina Fedoroff, molecular biologist and former Science and Technology Adviser to Hillary Clinton and Condoleezza Rice, warns of the detrimental influence of politics and misinformation on the safety of GM crops. Instead, Fedoroff says: "GM crops are arguably the safest new crops ever introduced into the human and animal food chains." Addressing safety concerns, Fedoroff highlights recent studies revealing that plant modification by molecular techniques has less impact on gene expression, protein and metabolite levels than conventional genetic crosses. New methods are also rapidly being developed that promise to further increase the specificity and precision of genetic modification. "The overwhelming evidence is that the GM foods now on the market are as safe, or safer, than non-GM foods," argues Fedoroff. She also cites a recent overview by the European Union of more than 130 research projects over 25 years concluding that GM methods are not inherently more risky than conventional plant breeding technologies. Fedoroff adds: "Every credible scientific body that has examined the evidence has come to the same conclusion." In her commentary, Fedoroff explains that the human population has grown sevenfold over the past two centuries, with the addition of a further 2-3 billion anticipated during the 21st century. The UN's Food and Agriculture Organization has estimated that food production will need to increase by 70% by 2050 to meet this demand. She writes: "Current yield growth trends are simply insufficient to keep up with growing demand...To live sustainably within planetary constraints, we must grow more on the same amount of land using less water, energy and chemicals. The molecular genetic revolution of the late 20th century that powered the development of precise GM methods is the most critical technology for meeting these challenges." The negative impact of climate change on agriculture is also predicted to worsen, warns Fedoroff, and arable land continues to be lost to urbanization, salinization, and desertification. "Supplies of fresh water for agriculture are under pressure, as well," writes Fedoroff. "Today, about a third of the global population lives in arid and semi-arid areas, which cover roughly 40% of the land area... Yet the major crops that now feed the world - corn, wheat, rice, soy - require a substantial amount of water." Advances in knowledge of plant stress responses and in tools for plant breeding have already resulted in the introduction of new drought-tolerant crop varieties, both GM and non-GM, says Fedoroff. But opposition to GM crops within the political systems of Japan and most European and African countries impedes progress, suggests Fedoroff: "European influence has been especially detrimental in Africa, causing African leaders to be excessively precautionary in approving GM crops and even to ban the import of GM grain to alleviate famine." Fedoroff discusses missed opportunities in using GM technology for addressing global malnutrition. Severe vitamin A deficiency causes up to 2.8 million preventable deaths annually and blinds half a million children each year.
The GM crop 'Golden Rice' produces enough β-carotene so that a few ounces of cooked rice could eliminate the morbidity and mortality of vitamin A deficiency. "Golden Rice remains mired in controversy and has been tied up in the regulatory process for more than a decade," argues Fedoroff. "Millions suffer and die while Golden Rice remains in test plots." More positive stories on the adoption of GM crops are highlighted by Fedoroff, citing studies showing that more than 90% of farmers growing biotech crops today are smallholder, resource-poor farmers, and others concluding that, over 20 years, GM crops have reduced pesticide use by 37%, increased crop yields by 22% and increased farmers' profits by 68%: "The simple reasons that farmers migrate to GM crops are that their yields increase and their costs decrease." Taking a historical perspective, Fedoroff suggests that much of the opposition to GM crops could lie in our understanding of what constitutes 'genetic modification'. "Humans practiced genetic modification long before chemistry entered agriculture," explains Fedoroff, "transforming inedible wild plants into crop plants, wild animals into domestic animals and harnessing microbes to produce everything from cheese to wine and beer. Oddly, it is only our contemporary methods of bending organisms' genetic constitution to suit our needs that are today recognized as genetic modification." In conclusion, Fedoroff asks: "Will we have the wisdom to overcome our fear of new technologies and re-invest in the kind of agricultural research and development that can simultaneously increase agricultural productivity and decrease its environmental impact, so that we might preserve what remains of our extraordinary biological heritage?...The answers to these questions will, for better or worse, shape our future civilizations."
10.1186/s40066-015-0031-7
Biology
Biologists uncover a way to waterproof plants
Sjon Hartman et al. Ethylene-mediated nitric oxide depletion pre-adapts plants to hypoxia stress, Nature Communications (2019). DOI: 10.1038/s41467-019-12045-4 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-12045-4
https://phys.org/news/2019-09-biologists-uncover-waterproof.html
Abstract Timely perception of adverse environmental changes is critical for survival. Dynamic changes in gases are important cues for plants to sense environmental perturbations, such as submergence. In Arabidopsis thaliana, changes in oxygen and nitric oxide (NO) control the stability of ERFVII transcription factors. ERFVII proteolysis is regulated by the N-degron pathway and mediates adaptation to flooding-induced hypoxia. However, how plants detect and transduce early submergence signals remains elusive. Here we show that plants can rapidly detect submergence through passive ethylene entrapment and use this signal to pre-adapt to impending hypoxia. Ethylene can enhance ERFVII stability prior to hypoxia by increasing the NO-scavenger PHYTOGLOBIN1. This ethylene-mediated NO depletion and consequent ERFVII accumulation pre-adapts plants to survive subsequent hypoxia. Our results reveal the biological link between three gaseous signals for the regulation of flooding survival and identify key regulatory targets for early stress perception that could be pivotal for developing flood-tolerant crops. Introduction The increasing frequency of floods due to climate change 1 has devastating effects on agricultural productivity worldwide 2. Due to restricted gas diffusion underwater, flooded plants experience cellular oxygen (O₂) deprivation (hypoxia), and survival strongly depends on molecular responses that enhance hypoxia tolerance 2,3. In submerged plant tissues the limited gas diffusion causes passive ethylene accumulation. This rapid ethylene build-up can occur prior to the onset of severe hypoxia, making it a timely and reliable signal for submergence 4,5. In several plant species, ethylene regulates adaptive responses to flooding by inducing morphological and anatomical modifications that prevent hypoxia 5. Surprisingly, ethylene has so far not been linked to metabolic responses that reduce hypoxia damage. In addition, how plants perceive early submergence to subsequently increase survival remains elusive. Here we show that plants can quickly sense submergence using passive ethylene accumulation and integrate this signal to acclimate to subsequent hypoxia. This ethylene-mediated hypoxia acclimation is dependent on enhanced group VII Ethylene Response Factor (ERFVII) stability prior to hypoxia. We show that ethylene limits ERFVII proteolysis under normoxic conditions by increasing the NO-scavenger PHYTOGLOBIN1 (PGB1). Our results reveal a molecular mechanism that plants use to integrate early stress signals to pre-adapt to forthcoming severe stress. Results Early ethylene signalling enhances hypoxia acclimation To unravel the spatial and temporal dynamics of ethylene signalling upon plant submergence, we monitored the nuclear accumulation of ETHYLENE INSENSITIVE 3 (EIN3) 6,7,8,9, an essential transcription factor for mediating ethylene responses. We show, through an increase in EIN3-GFP fluorescence signal, that ethylene is rapidly perceived (within 1–2 h) in Arabidopsis thaliana (hereafter Arabidopsis) root tips upon submergence (Supplementary Fig. 1a–c). An ethylene or submergence pre-treatment of only 4 h was sufficient to increase root meristem survival during subsequent hypoxia (<0.01% O₂). These responses were abolished in ethylene signalling mutants or via chemical inhibition of ethylene action (Supplementary Fig. 1d, e).
Ethylene-induced acclimation to hypoxia was observed in both roots and shoots and was accompanied by a reduction in cellular damage in response to hypoxia (Fig. 1, Supplementary Figs. 2 & 3). Furthermore, enhanced hypoxia tolerance after ethylene pre-treatment is conserved within Arabidopsis accessions and taxonomically diverse flowering plant species, although variation in the capacity to benefit from an ethylene pre-treatment exists (Supplementary Fig. 4; ref. 10). These results demonstrate that ethylene enhances tolerance of multiple plant organs and species to hypoxia. Next, we aimed to unravel how early ethylene signalling leads to enhanced hypoxia tolerance in Arabidopsis root tips. Fig. 1 Ethylene pre-treatment enhances hypoxia tolerance. a, b Arabidopsis (Col-0) seedling root tip (a) and adult rosette (b) survival after 4 h of air (white) or ~5 μl l⁻¹ ethylene (blue) followed by hypoxia and recovery (3 days for root tips, 7 days for rosettes). Values are relative to control (normoxia) plants (mean ± sem). Asterisks indicate significant differences between air and ethylene (p < 0.05, generalized linear model, negative binomial error structure, n = 4–8 rows consisting of ~23 seedlings (a), n = 30 plants (b)). c Arabidopsis (Col-0) rosette phenotypes after 4 h of pre-treatment (air or ~5 μl l⁻¹ ethylene) followed by hypoxia and 7 days of recovery. All experiments were replicated at least 3 times. Ethylene stabilizes group VII Ethylene Response Factors Hypoxia acclimation in plants involves the up-regulation of hypoxia adaptive genes that control energy maintenance and oxidative stress homeostasis 11. Interestingly, most of these genes were not induced by ethylene alone, but showed increased transcript abundance upon hypoxia following a pre-treatment with ethylene (Supplementary Fig. 5). Hypoxia adaptive genes are regulated by the ERFVII transcription factors, which are components of a mechanism that senses O₂ and NO via the Cys-branch of the PROTEOLYSIS 6 (PRT6) N-degron pathway 12,13,14. ERFVIIs are degraded following oxidation of the amino-terminal (Nt-)cysteine in the presence of O₂ and NO, catalysed by PLANT CYSTEINE OXIDASEs (PCOs) 15. The N-recognin E3 ligase PRT6 promotes degradation of oxidized ERFVIIs by the 26S proteasome 16,17. A decline in either O₂ or NO stabilizes ERFVIIs, leading to transcriptional up-regulation of hypoxia adaptive genes and other environmental and developmental responses 12,13,14,18. The constitutively synthesized ERFVIIs RELATED TO APETALA2.12 (RAP2.12), RAP2.2 and RAP2.3 redundantly act as the principal activators of many hypoxia adaptive genes 19,20,21. In contrast, HYPOXIA RESPONSIVE ERF1 (HRE1) and HRE2 function downstream of RAP-type ERFVIIs, being transcriptionally induced once hypoxia occurs 22. We investigated whether ethylene-induced hypoxia tolerance depends on the constitutively synthesized RAP-type ERFVIIs. Single loss-of-function mutants of RAP2.12, RAP2.2 and RAP2.3, and the hre1 hre2 double mutant, responded to ethylene pre-treatment similarly to their WT backgrounds (Supplementary Fig. 6a). However, two independent rap2.2 rap2.12 loss-of-function double mutants 20 showed no improved hypoxia tolerance after ethylene pre-treatment (Fig. 2a), while their WT background crosses did (Supplementary Fig. 6b).
In contrast, overexpression of a stable N-terminal variant of RAP2.12 21 and inhibition of the PRT6 N-degron pathway in the prt6-1 mutant 12,23 both enhanced hypoxia tolerance without an ethylene pre-treatment (Fig. 2a). These data indicate that ethylene-induced hypoxia tolerance occurs through the PRT6 N-degron pathway and redundantly involves at least RAP2.2 and RAP2.12 20,21. Fig. 2 Ethylene-induced hypoxia tolerance is regulated by RAP-type ERFVIIs. a Seedling root tip survival of Col-0, Ler-0, rap2.2 rap2.12 (2 independent lines in Col-0 × Ler-0 background), a constitutively expressed stable version of RAP2.12, and the N-degron pathway mutant prt6-1 after 4 h of air or ~5 μl l⁻¹ ethylene followed by 4 h of hypoxia and 3 days of recovery. Values are relative to control (normoxia) plants (mean ± sem). Statistically similar groups are indicated using the same letter (p < 0.05, 2-way ANOVA, Tukey’s HSD, n = 20–28 rows consisting of ~23 seedlings). b, c Representative root tip images showing promRAP2.12:RAP2.12-GUS staining (b) and confocal images of 35S:RAP2.12-GFP intensity (c) in root tips after 4 h of air or ~5 μl l⁻¹ ethylene. Cell walls were visualized using Calcofluor White stain (c). Scale bar in b and c is 50 μm. All experiments were replicated at least 3 times. We next explored how ethylene regulates ERFVII mRNA and protein abundance. Ethylene increased RAP2.2, RAP2.3, HRE1 and HRE2 transcripts in root tips and RAP2.12, RAP2.2 and RAP2.3 mRNAs in shoots (Supplementary Fig. 6c, d). Visualization and quantification of RAP2.12 abundance using transgenic promRAP2.12:RAP2.12-GUS and 35S:RAP2.12-GFP protein-fusion lines revealed that ethylene strongly increased RAP2.12 protein in meristematic zones of main and lateral root tips and shoots under normoxia (Fig. 2b, c, Supplementary Fig. 6e, f). Since 35S:RAP2.12-GFP is uncoupled from ethylene-triggered transcription, this suggests that ethylene limits ERFVII protein turnover. In root tips, this RAP2.12 stabilization appeared within nuclei across most cell types and was also independent of ethylene-enhanced RAP2.12 transcript abundance (Fig. 2b, c, Supplementary Fig. 6c, e, f). These data suggest that ethylene-enhanced ERFVII accumulation is regulated by post-translational processes. Ethylene limits ERFVII proteolysis through NO depletion To investigate enhanced ERFVII stability under ambient O₂, we studied the effect of ethylene on the expression of genes encoding PRT6 N-degron pathway enzymes or other mechanisms reported to influence ERFVII stability. In response to ethylene, none of these genes showed changes in transcript abundance (Supplementary Fig. 7a, b). In addition, as both O₂ and NO promote ERFVII proteolysis 17, and since ethylene was administered at ambient O₂ conditions (21%; normoxia) and did not lead to hypoxia in desiccators (Supplementary Fig. 7c), it is unlikely that hypoxia causes the observed ERFVII stabilization. Furthermore, while recent reports show that plants contain a hypoxic niche in shoot apical meristems and lateral root primordia 24,25, we did not observe enhanced hypoxia target gene expression in root tips exposed to ethylene treatments (Supplementary Fig. 5), ruling out ethylene-enhanced local hypoxia in these tissues. Since NO was previously shown to control proteolysis of ERFVIIs and other Met1-Cys2 N-degron targets 14,18,26, we hypothesized that ethylene may regulate NO levels.
Roots treated with the NO probe 4-amino-5-methylamino-2′,7′-difluorofluorescein diacetate (DAF-FM DA) 27 revealed an ethylene-induced depletion in fluorescence, indicating that ethylene lowers NO levels (Fig. 3a, b). Next, we investigated whether this decline in NO was required for RAP-type ERFVII stabilization. Both ethylene and the NO-scavenging compound 2-(4-carboxyphenyl)-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (cPTIO) led to increased RAP2.12 and RAP2.3 stability under normoxia (Fig. 3c–e). However, the ethylene-mediated increase in RAP2.12 and RAP2.3 stability was abolished when an NO pulse was applied concomitantly, confirming a role for NO depletion in ethylene-triggered ERFVII stabilization of both these RAPs during normoxia. Application of hypoxia after the pre-treatments resulted in stabilization of RAP2.12 and RAP2.3, demonstrating that the plants were viable and that the PRT6 N-degron pathway could still be impaired (Fig. 3c–e, Supplementary Fig. 7d). Together, these data illustrate that both RAP2.12 and RAP2.3 depend on ethylene-mediated NO depletion for their stability. Fig. 3 Ethylene impairs NO levels, leading to ERFVII stability and enhanced hypoxia survival. a, b Representative confocal images visualizing (a) and quantifying (b) NO using the fluorescent probe DAF-FM DA in Col-0 seedling root tips after 4 h of air or ~5 μl l⁻¹ ethylene (scale bar = 50 μm). Letters indicate significant differences (1-way ANOVA, Tukey’s HSD, n = 3–4). c, d Representative confocal images visualizing (c) and quantifying (d) 35S:RAP2.12-GFP intensity in seedling root tips after the indicated pre-treatments and subsequent hypoxia (4 h). Cell walls were visualized using Calcofluor White stain (scale bar = 50 μm). Letters indicate significant differences (p < 0.05, 2-way ANOVA, Tukey’s HSD, n = 5–7). e RAP2.3 protein levels in 35S:MC-RAP2.3-HA seedlings (Col-0 background) after the indicated treatments. f Seedling root tip survival of Col-0, rap2.2 rap2.12 line A mutants and an overexpressed stable version of RAP2.12 after the indicated pre-treatments followed by hypoxia (4 h) and 3 days of recovery. Values are relative to control (normoxia) plants. Letters indicate significant differences (p < 0.05, 2-way ANOVA, Tukey’s HSD, n = 12 rows consisting of ~23 seedlings). All data shown are mean ± sem. All experiments were replicated at least 3 times, except for c, d and f (2 times). The functional consequences of ethylene-induced, NO-dependent RAP2.12 stabilization for hypoxia acclimation were studied in a root meristem survival assay. Ethylene pre-treatment enhanced hypoxia survival, which was largely abolished by an NO pulse (Fig. 3f). Furthermore, pre-treatment with cPTIO to scavenge intracellular NO before hypoxia resulted in increased survival in the absence of ethylene. In genotypes lacking RAP2.12 and RAP2.2 or overexpressing a stable N-terminal variant of RAP2.12, neither ethylene nor NO manipulation had any effect on subsequent hypoxia survival (Fig. 3f). These results demonstrate that local NO removal, via cPTIO or as a result of elevated ethylene, is both essential and sufficient to enhance RAP2.12 and RAP2.3 stability during normoxia, and that the increased hypoxia tolerance conferred by ethylene strongly depends on NO-mediated stabilization of RAP2.12 and RAP2.2 prior to hypoxia. The ethylene-mediated NO decline depends on PHYTOGLOBIN1 The question remained how ethylene regulates NO levels under normoxia.
NO metabolism in Arabidopsis is mainly regulated by NO biosynthesis via NITRATE REDUCTASE (NR)-dependent nitrite reduction and by NO scavenging by three non-symbiotic phytoglobins (PGBs) 28,29,30. Ethylene led to small increases in NR1 and NR2 mRNA levels, but this did not influence total NR activity (Supplementary Fig. 8a, b, e). In contrast, transcript abundance of PGB1, the most potent NO scavenger 30, increased rapidly in root tips and shoots after ethylene treatment (Supplementary Fig. 8a–c). Importantly, PGB1 (a hypoxia-adaptive gene regulated by ERFVIIs) was still up-regulated by ethylene during normoxia in rap2.2 rap2.12 mutant lines (Supplementary Fig. 8d). To study the effect of ethylene-induced PGB1 levels on NO metabolism, ERFVII stabilization, hypoxia-adaptive gene expression and hypoxia tolerance, we identified a T-DNA insertion line (SALK_058388; hereafter pgb1-1). In pgb1-1 the T-DNA is located 300 bp upstream of the PGB1 start codon (Supplementary Fig. 9a, b). In wild-type plants, both ethylene and hypoxia treatment enhanced PGB1 transcript and protein accumulation (Fig. 4a, b). In pgb1-1, PGB1 transcript levels were reduced, and ethylene did not increase PGB1 transcript or protein abundance, whereas hypoxia only slightly affected transcript abundance (Fig. 4a, b). A faint band of lower molecular weight than expected for PGB1 (18 kDa) was observed in some pgb1-1 samples, but did not show any clear treatment effect (Fig. 4b). Together these data illustrate that the T-DNA insertion in the promoter of pgb1-1 uncouples PGB1 expression from ethylene regulation. Conversely, a 35S:PGB1 line had constitutively elevated PGB1 transcript and protein levels (Fig. 4a, b; ref. 30). Importantly, both pgb1-1 and 35S:PGB1 showed mostly similar ethylene responses to wild type in the abundance of perception (ETR2) and biosynthesis (ACO1) transcripts during normoxia (Supplementary Fig. 10), indicating that ethylene biosynthesis and signalling are unlikely to be affected. Fig. 4 Ethylene mediates NO levels, ERFVII stability and hypoxia survival through PHYTOGLOBIN1. a Relative transcript abundance of PGB1 in root tips of Col-0, pgb1-1 and 35S:PGB1 after 4 h of air or ~5 μl l⁻¹ ethylene followed by hypoxia (4 h). Values are relative to air-treated Col-0 samples. Letters indicate significant differences (p < 0.05, 2-way ANOVA, n = 3 replicates of ~200 root tips each). b PGB1 protein levels in Col-0, pgb1-1 and 35S:PGB1 root tips after 4 h of air or ~5 μl l⁻¹ ethylene followed by hypoxia (4 h). c, d Representative confocal images visualizing (c) and quantifying (d) NO using the fluorescent probe DAF-FM DA in Col-0, pgb1-1 and 35S:PGB1 seedling root tips after 4 h of air or ~5 μl l⁻¹ ethylene (scale bar = 50 μm). Letters indicate significant differences (p < 0.05, 2-way ANOVA, Tukey’s HSD, n = 5). e RAP2.3 and PGB1 protein levels in 35S:MC-RAP2.3-HA (in Col-0, pgb1-1 and 35S:PGB1 backgrounds) seedling root tips after the indicated pre-treatments and subsequent hypoxia (4 h). f Seedling root tip survival of Col-0, pgb1-1 and 35S:PGB1 after the indicated pre-treatments followed by 4 h of hypoxia and 3 days of recovery. Values are relative to control (normoxia) plants. Letters indicate significant differences (p < 0.05, 2-way ANOVA, Tukey’s HSD, n = 12 rows of ~23 seedlings). All data shown are mean ± sem.
All experiments were replicated at least 2 times. The ethylene-mediated NO decline observed in wild-type root tips was fully abolished in pgb1-1, demonstrating the requirement of PGB1 induction for local NO removal upon ethylene exposure (Fig. 4c, d). Moreover, the lack of NO removal by ethylene in pgb1-1 resulted in an inability to stabilize RAP2.3 levels and in reduced hypoxia survival (Fig. 4e, f). These effects could be rescued by restoration of NO-scavenging capacity using cPTIO (Fig. 4f). In addition, the reduced ethylene-induced hypoxia tolerance in pgb1-1 was also accompanied by an absence of enhanced hypoxia adaptive gene expression after an ethylene pre-treatment (Supplementary Fig. 10). In contrast, 35S:PGB1 showed constitutively low NO levels in root tips (Fig. 4c, d; ref. 30) and increased RAP2.3 stability under normoxia (Fig. 4e). Moreover, ectopic PGB1 overexpression enhanced hypoxia tolerance without an ethylene pre-treatment, but this effect could be abolished by an NO pulse (Fig. 4f). Elevated mRNA levels for several hypoxia adaptive genes accompanied this constitutive hypoxia tolerance in 35S:PGB1 root tips (Supplementary Fig. 10). These results demonstrate that active reduction of NO levels by ethylene-induced PGB1 prior to hypoxia can precociously enhance ERFVII stability to prepare cells for impending hypoxia. Discussion We show that plants have the remarkable ability to detect submergence quickly by passive ethylene entrapment and use this signal to acclimate to forthcoming hypoxic conditions. The early ethylene signal prevents N-degron-targeted ERFVII proteolysis through increased production of the NO-scavenger PGB1 and in turn primes the plant’s hypoxia response (summarizing model, Fig. 5). Interestingly, while ethylene signalling prior to hypoxia leads to nuclear stabilization of RAP2.12 in root meristems (Figs. 2b, c, 3c), it does not trigger accumulation of most hypoxia adaptive gene transcripts until hypoxia occurs (Supplementary Fig. 5). Apparently, stabilization of ERFVIIs alone is insufficient to trigger full activation of hypoxia-regulated gene transcription, and additional hypoxia-specific signals, such as altered ATP and/or Ca²⁺ levels, are required 31,32,33. Undiscovered plant O₂ sensors, whose possible existence was recently discussed 34, could potentially fulfil this role. Furthermore, the current discovery of ethylene-mediated stability of ERFVIIs paves the way towards unravelling how ethylene could influence the function of the other recently discovered PRT6 N-degron pathway targets, VERNALIZATION2 (VRN2) and LITTLE ZIPPER2 (ZPR2) 24,26. Fig. 5 Proposed mechanism of ethylene-induced hypoxia tolerance upon submergence. I Upon submergence, ethylene (C₂H₄) accumulates within minutes in plant tissues due to restricted gas diffusion. II+III Ethylene perception leads to EIN2- and EIN3/EIL1-dependent signalling and enhanced production of the NO-scavenger PHYTOGLOBIN1 (PGB1) within 1 h of ethylene signalling. IV+V Within 4 h, these enhanced PGB1 levels lead to NO depletion, in turn limiting PRT6 N-degron pathway-targeted proteolysis of RAP-type group VII Ethylene Response Factor transcription factors (ERFVIIs). VI+VII Stabilized ERFVIIs translocate to the nucleus, where they induce enhanced hypoxia gene expression only when O₂ deprivation occurs.
This amplified hypoxia response increases hypoxia tolerance of Arabidopsis root and shoot apical meristems (created with BioRender.com). This study shows that PGB1 is a key intermediate, linking ethylene signalling, via regulated NO removal, to O₂ sensing and hypoxia tolerance. This mechanism also provides a molecular explanation for the protective role of PGB1 during hypoxia and submergence described in prior studies 30,35,36,37. Natural variation for ethylene-induced hypoxia adaptation was also observed in wild species and correlated with PGB1 induction 10. Our discovery provides an explanation for this natural variation and could be instrumental in enhancing conditional flooding tolerance in crops via manipulation of the ethylene responsiveness of PGB1 genes. In these modified plants, rapid passive ethylene entrapment upon flooding would increase PGB1 levels and pre-adapt crops to later-occurring hypoxia stress. Methods Plant material: Arabidopsis thaliana seeds of ecotypes Col-0, Cvi-0, C24 and mutants ein2-5 and ein3eil1-1 38,39 were obtained from the Nottingham Arabidopsis Stock Centre. Seeds of pgb1-1 (SALK_058388) were obtained from the Arabidopsis Biological Resource Center, and the molecular characterization of this line is described in Fig. 4a, b and Supplementary Fig. 9. Other lines used in this study were kindly provided by the following individuals: Ler-0, rap2.2-5 (Ler-0 background, AY201781/GT5336), rap2.12-2 (SAIL_1215_H10), and rap2.2-5 rap2.12-A and -B (mixed Ler-0 and Col-0 background) from Prof. Angelika Mustroph 20, University of Bayreuth, Germany; 35S:δ13-RAP2.12-GFP and 35S:RAP2.12-GFP from Prof. Francesco Licausi, University of Pisa, Italy 13; and 35S:EIN3-GFP (ein3eil1 mutant background) from Prof. Shi Xiao, Sun Yat-sen University, China 7. The 35S:PGB1 and 35S:RAP2.3-HA transgenic lines, as well as the prt6-1 (SAIL_1278_H11), rap2.3-1 (SAIL_1031_D10) and hre1-1 hre2-1 (SALK_039484 + SALK_052858) mutants, were described in the following publications by the authors of this study: 12,14,40. Barley seeds were obtained from the Flakkebjerg Research Center Seed Stock (Aarhus University). Additional mutant combinations used in this study were generated by crossing, and all lines were confirmed by conventional genotyping PCRs and/or antibiotic resistance selection (primers and additional information in Supplementary Table 1). Plant growth conditions Growth conditions, adult rosettes: Arabidopsis seeds were placed on potting soil (Primasta) in medium-sized pots and stratified at 4 °C in the dark for at least 3 days. Pots were then transferred to a growth chamber for germination under short-day conditions (08:00–17:00 h, T = 20 °C, photon flux density = ~150 μmol m⁻² s⁻¹, RH = 70%). After 7 days, seedlings were transplanted individually into single pots (70 ml) filled with the same potting soil (Primasta). Plants continued growing under identical short-day conditions and were watered automatically to field capacity. Per genotype, homogeneous groups of 10-leaf-stage plants were selected and randomized over treatment groups for phenotypic and molecular analysis under the various treatments. Plants used for hypoxia tolerance assays were transferred back to the same conditions after treatments to recover for 7 days. Growth conditions, seedlings: Seeds were vapor-sterilized by incubation with a beaker containing a mixture of 50 ml bleach and 3 ml of fuming HCl in a gas-tight desiccator jar for 3 to 4 h.
Seeds were then individually transplanted in 2 or 3 rows of 23 seeds on sterile square Petri dishes containing 25 ml of autoclaved, solidified ¼ MS with 1% plant agar and no additional sucrose. Petri dishes were sealed with gas-permeable tape (Leukopor, Duchefa) and stratified at 4 °C in the dark for 3 to 4 days. Seedlings were grown vertically on the agar plates under short-day conditions (09:00–17:00 h, T = 20 °C, photon flux density = ~120 μmol m⁻² s⁻¹, RH = 70%) for 5 days for Arabidopsis thaliana, and for 7 days for Solanum lycopersicum (tomato, Moneymaker), Solanum dulcamara and Arabidopsis lyrata, before phenotypic and/or molecular analysis under the various treatments. Hordeum vulgare (barley, both cv. Golden Promise and landrace Heimdal) seedlings were grown on agar in sterile tubs and were 3 days old before phenotypic analysis. Construction of transgenic plants The promRAP2.12:MC-RAP2.12-GUS protein fusion lines were constructed by amplifying the genomic sequence capturing 2 kb of sequence upstream of the translational start site, and removing the stop codon, using the following primers: RAP2.12-fwd GGGGACAAGTTTGTACAAAAAAGCAGGCTATTCAGATTGGATCGTGACATG and RAP2.12-rev GGGGACCACTTTGTACAAGAAAGCTGGGTAGAAGACTCCTCCAATCATGGAAT. The PCR product was Gateway-cloned into pDONR221 through a BP reaction, then transferred to pGWB433, creating an in-frame C-terminal fusion to the GUS reporter protein 41. Experimental setup and (pre-)treatments Ethylene treatments: Lids of the agar plates of the vertically grown seedlings were removed during all (pre-)treatments, and plates were placed vertically into glass desiccators (22.5 L volume). Air (control) and ~5 μl l⁻¹ ethylene (pre-)treatments (by injection with a syringe) were applied at the start of the light period (09:00 h for seedlings, 08:00 h for adult rosettes) and were performed by keeping the seedlings/plants in air-tight closed glass desiccators under low-light conditions (T = 20 °C, light intensity = ~3–5 μmol m⁻² s⁻¹) for 4 h. Ethylene concentrations in all desiccators were verified by gas chromatography (Syntech Spectras GC955) at the beginning and end of the pre-treatment. Hypoxia treatments: After 4 h of any pre-treatment, plants/seedlings were flushed with oxygen-depleted air (humidified 100% N₂ gas) at a rate of 2.00 l min⁻¹ under dark conditions to limit oxygen production by photosynthesis. Oxygen levels generally reached 0.00% oxygen within 40 min of the hypoxia treatment, as measured with a Neofox oxygen probe (Ocean Optics, Florida, USA) (Supplementary Fig. 7c). Control plants and seedlings were flushed with humidified air for the duration of the hypoxia treatment in the dark. Hypoxia treatment durations varied depending on the developmental stage and plant species and are specified in the appropriate figure legends. Nitric oxide treatments: Just before application, pure NO gas was diluted in small glass vials with pure N₂ gas to minimize oxidation of the NO. Diluted NO gas was injected with a syringe into the air- and ethylene-treated desiccators at a final concentration of 10 μl l⁻¹ NO, 1 h prior to the end of the (pre-)treatment. cPTIO applications: Treatments with the NO scavenger 2-(4-carboxyphenyl)-4,5-dihydro-4,4,5,5-tetramethyl-1H-imidazol-1-yloxy-3-oxide potassium salt (cPTIO salt; Sigma-Aldrich, Darmstadt, Germany) were performed 1 h prior to air/ethylene treatments to allow for treatment combinations.
Droplets of 5 μl of cPTIO solution (250 μM in autoclaved liquid ¼ MS) or mock solution (autoclaved liquid ¼ MS) were pipetted onto each individual root tip. 1-MCP treatments: Seedlings were placed in closed glass desiccators (22.5 L volume) and gassed with 5 μl l⁻¹ 1-MCP (Rohm and Haas) for 1 h prior to other (pre-)treatments. Submergence treatments: For submergence (pre-)treatments, the plates of vertically grown seedlings were placed horizontally and carefully filled with autoclaved tap water until the seedlings were fully submerged. Hypoxia tolerance assays Survival of adult rosette plants: 10-leaf-stage plants received ethylene and air pre-treatments followed by several durations of hypoxia and were subsequently placed back under short-day growth chamber conditions to recover. After 7 days of recovery, survival rates and biomass (fresh and dry weight of surviving plants) were determined. Root tip survival of seedlings: 5-day-old seedlings grown vertically on agar plates received pre-treatments (described above) followed by several durations of hypoxia (generally 4 h for mutant analysis). After the hypoxia treatment, agar plates were closed and sealed again with Leukopor tape, and the location of root tips was marked on the back of the agar plate using a marker pen (0.8 mm fine tip). Plates were then placed back vertically under short-day growth conditions for recovery. After 3–4 days of recovery, seedling root tips were scored as either alive or dead based on clear root tip re-growth beyond the line on the back of the agar plate. Primary root tip survival was calculated as the percentage of seedlings that showed root tip re-growth out of a row of (maximally) 23 seedlings. Root tip survival was expressed as survival relative to control plates that received similar pre-treatments but no hypoxia (a worked analysis sketch appears after the Methods). For Solanum lycopersicum (tomato, Moneymaker), Solanum dulcamara and Arabidopsis lyrata, methods were as described above, except that seedlings were 7 days old. For Hordeum vulgare (barley, both cv. Golden Promise and landrace Heimdal), seedlings were only 3 days old and received 20 h of hypoxia before survival of whole seedlings was scored after 3 days of recovery. Evans blue staining for cell viability in root tips Arabidopsis seedlings were taken for root cell integrity analysis by Evans blue staining after air and ethylene pre-treatments, at both hypoxia and post-hypoxia time points. Seedlings were incubated in 0.25% aqueous Evans blue staining solution for 15 min in the dark, subsequently washed three times with Milli-Q water to remove excess dye, and finally imaged using light microscopy (Olympus BX50WI, 10× objective). Evans blue area and pixel intensity of the microscopy images were analyzed using ICY software, by quantifying the mean pixel intensity of the red (ch0) and blue (ch2) channels of the tissues of interest, and expressed as blue/red pixel intensity (a minimal re-implementation of this measurement is sketched after the Methods). RNA and RT-qPCR Adult rosette (2 whole rosettes per sample), whole seedling (~20 whole seedlings) or seedling root tip (~200–500 root tips) samples were harvested by snap-freezing in liquid nitrogen. Total RNA was extracted from frozen, pulverized tissue using the RNeasy Plant Mini Kit protocol (Qiagen, Dusseldorf, Germany) with the on-column DNase treatment kit (Qiagen, Dusseldorf, Germany) and quantified using a NanoDrop ND-1000 UV-Vis spectrophotometer (NanoDrop Technology). Single-stranded cDNA was synthesized from 500 ng RNA using random hexamer primers (Invitrogen, Waltham, USA).
RT-qPCR was performed using the Applied Biosystems ViiA 7 Real-Time PCR System (Thermo Fisher Scientific) with a 5 μl reaction mixture containing 2.5 μl of 2× SYBR Green MasterMix (Bio-Rad, Hercules, USA), 0.25 μl each of 10 μM forward and reverse primers, and 2 μl of cDNA (5 ng/μl). Average sample CT values were derived from 2 technical replicates. Relative transcript abundance was calculated using the comparative CT method 42 (a worked sketch appears after the Methods); fold change was generally expressed relative to air-treated Col-0 samples. ADENINE PHOSPHORIBOSYL TRANSFERASE 1 (APT1), which was stably expressed across all treatments, was amplified and used as a reference gene. Primers used for RT-qPCR are listed in Supplementary Table 2. Histochemical staining for GUS activity Seedlings of promRAP2.12:RAP2.12-GUS (10 days old) were harvested in GUS solution (100 mM NaPO₄ buffer, pH 7.0, 10 mM EDTA, 2 mM X-Gluc, 500 μM K₃Fe(CN)₆ and 500 μM K₄Fe(CN)₆) directly after the treatments indicated in the figure legends, vacuum-infiltrated for 15 min and incubated for 2 days at 37 °C before de-staining with 70% ethanol. Seedlings were kept and mounted in 50% glycerol and analyzed using a Zeiss Axioskop2 DIC (differential interference contrast) microscope (10× DIC objective) or a regular light microscope with a Lumenera Infinity 1 camera. GUS pixel intensity of the microscopy images was analyzed using ICY software, by quantifying the pixel intensity of the red (ch0) and blue (ch2) channels of the tissues of interest relative to the respective channel background values of these images. GUS intensity for all treatments was expressed relative to the air-treated controls. Protein extraction and Western blotting Protein was extracted on ice for 30 min from pulverized snap-frozen samples in modified RIPA lysis buffer containing 50 mM HEPES-KOH (pH 7.8), 100 mM KCl, 5 mM EDTA (pH 8), 5 mM EGTA (pH 8), 50 mM NaF, 10% (v/v) glycerol, 1% (v/v) IGEPAL, 0.5% (w/v) deoxycholate, 0.1% (w/v) SDS, 1 mM Na₃VO₄ (sodium orthovanadate), 1 mM PMSF, 1× proteinase inhibitor cocktail (Roche), 1× PhosSTOP phosphatase inhibitor cocktail (Roche) and 50 µM MG132 43. Protein concentration was quantified using a BCA protein assay kit (Pierce) following the manufacturer’s protocol. Protein concentrations were equalized by dilution with RIPA buffer, and samples were incubated for 10 min at 70 °C with loading buffer (5× sample loading buffer (Bio-Rad) plus β-ME) before loading (30 µg total protein per sample) on pre-cast Mini-PROTEAN Stain-Free TGX gels (Bio-Rad) and running by SDS-PAGE. Gels were imaged before and after transfer to PVDF membranes (Bio-Rad) using the Trans-Blot Turbo transfer system (Bio-Rad) to verify successful and equal protein transfer. Blots were blocked for at least 1 h at RT in blocking solution (5% milk in 1× TBS) before probing overnight at 4 °C with primary antibody in blocking solution (α-HA-HRP, 1:2500 (Roche, Cat. No. 12 013 819 001); α-PGB1, 1:500 (produced for this study by GenScript using the full-length protein as antigen); α-Actin, 1:2500 (Thermo Fisher Scientific, Cat. No. MA1-744)). Blots were rinsed 3 times with 1× TBS-T (0.1% Tween 20) for 10 min under gentle agitation before probing with secondary antibody (α-rabbit IgG-HRP, Cat. No. 7074, for PGB1, 1:3000; α-mouse IgG-HRP, Cat. No. 7076, for Actin, 1:2500), application of SuperSignal West Femto chemiluminescence substrate (Fisher Scientific), and blot imaging using Image Lab software on a chemiluminescence gel documentation system (Bio-Rad) with custom accumulation sensitivity settings for optimal contrast between band detection and background signal.
To visualize RAP2.3 (~45 kDa) and ACTIN (~42 kDa) protein levels on the same blot, membranes were stripped after final blot images were taken, using a mild stripping buffer (pH 2.2; 1.5% (w/v) glycine, 0.1% SDS and 1.0% Tween 20) for 15 min, and rinsed 3× in 1× TBS-T before blocking and probing with the second primary antibody of interest. NO quantification Intracellular NO levels were visualized using DAF-FM diacetate (4-amino-5-methylamino-2′,7′-difluorofluorescein diacetate; Bio-Connect). Seedlings were incubated in the dark for 15 min under gentle agitation in 10 mM Tris-HCl buffer (pH 7.4) containing 50 μM DAF-FM DA and subsequently washed twice for 5 min in 10 mM Tris-HCl buffer (pH 7.4). Several roots from all treatments/genotypes were mounted in 10 mM Tris-HCl buffer (pH 7.4) on the same microscope slide. Fluorescence was visualized using a Zeiss Observer Z1 LSM700 confocal microscope (oil immersion, 40× Plan-Neofluar objective, NA 1.30) with excitation at 488 nm and emission at 490–555 nm. Roots incubated and mounted in 10 mM Tris-HCl buffer (pH 7.4) without DAF-FM DA were used to set background values where no fluorescence was detected. Within experiments, laser power, pinhole, digital gain and detector offset were identical for all samples. Mean DAF-FM DA fluorescence pixel intensity in root tips was determined in similar areas of ~17,000 μm² between epidermis layers using ICY software (a minimal re-implementation of this kind of measurement is sketched after the Methods). Confocal microscopy Transgenic Arabidopsis seedlings of 35S:EIN3-GFP and 35S:RAP2.12-GFP were fixed in 4% PFA (pH 6.9) right after treatments, kept under gentle agitation for 1 h, subsequently washed twice for 1 min in 1× PBS, and stored in ClearSee clearing solution (xylitol 10% (w/v), sodium deoxycholate 15% (w/v) and urea 25% (w/v)) 44. Seedlings were transferred to 0.01% Calcofluor White (in ClearSee solution) 24 h before imaging. Fluorescence was visualized using a Zeiss Observer Z1 LSM700 confocal microscope (oil immersion, 40× Plan-Neofluar objective, NA 1.30) with excitation at 488 nm and emission at 490–555 nm for GFP, and excitation at 405 nm and emission at 400–490 nm for Calcofluor White. Within experiments, laser power, pinhole, digital gain and detector offset were identical for all samples. Mean GFP fluorescence pixel intensity in root tips was determined in similar areas of ~17,000 μm² between epidermis layers using ICY software. Nitrate reductase activity assay NR activity was assessed using a mix of 20 whole 10-day-old seedlings, with 2 replicates per treatment. Snap-frozen samples were ground and homogenized in extraction buffer (100 mM HEPES (pH 7.5), 2 mM EDTA, 2 mM dithiothreitol, 1% PVPP). After centrifugation at 30,000 × g at 4 °C for 20 min, supernatants were collected and added to the reaction buffer (100 mM HEPES (pH 7.5), 100 mM NaNO₃, 10 mM cysteine, 2 mM NADH and 2 mM EDTA). The reaction was stopped by the addition of 500 mM zinc acetate after incubation for 15 min at 25 °C. Total nitrite accumulation was determined, following addition of 1% sulfanilamide in 1.5 M HCl and 0.02% naphthylethylenediamine dihydrochloride (NNEDA) in 0.2 M HCl, by measuring the absorbance of the reaction mixture at 540 nm. Statistical analyses No statistical methods were used to predetermine sample size. Samples were taken from independent biological replicates. In general, the sample size of experiments was maximized and was dependent on technical, space and/or time limitations.
For the root tip survival assay, the maximum number of seedlings used per biological replicate, generally 1 row of seedlings per in vitro agar plate, is given in the appropriate figure legends. Data were plotted using GraphPad Prism software. All statistical tests were two-sided and were performed using either GraphPad Prism or R software with the “lsmeans” and “multcompView” packages. Survival data were analyzed with either a generalized linear modeling (GLM) approach or an ANOVA on the arcsine transformation of the surviving fraction. A negative binomial error structure was used for the GLM. The arcsine transformation ensured homogeneity and normal distribution of the variances, especially for data that did not have treatments with all-living or all-dead responses. The remaining data were analyzed with Student's t-tests or 1-way or 2-way ANOVAs; here, data were log transformed where necessary to adhere to ANOVA prerequisites. Multiple comparisons were corrected for with Tukey's HSD. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability No restrictions are placed on materials and data availability. Biological materials such as mutant/transgenic lines can be requested from the corresponding authors. Details of all data and materials used in the analysis are available in the main text or the supplementary information. Gene accession numbers of all the Arabidopsis genes/mutants used in this study are listed in the Materials & Methods section and Supplementary Tables 1 and 2 . Source Data (including uncropped blots) related to Fig. 1 a, b; Fig. 2 a; Fig. 3 b–f; Fig. 4a–f ; Supplementary Fig. 1b–e ; Supplementary Fig. 2a, b ; Supplementary Fig. 3b, c ; Supplementary Fig. 4a–c ; Supplementary Fig. 5 ; Supplementary Fig. 6a–d, f ; Supplementary Fig. 7a–d ; Supplementary Fig. 8a–e ; and Supplementary Fig. 10 are provided with the paper.
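As a minimal R sketch of the survival-fraction analysis described in the Statistical analyses section above (ANOVA on arcsine-transformed surviving fractions followed by Tukey's HSD): the treatments and fractions below are invented, and the standard arcsine square-root variant of the transformation is assumed.

```r
# Hypothetical surviving fractions per plate for three treatments
surv <- data.frame(
  treatment = factor(rep(c("air", "ethylene", "hypoxia"), each = 4)),
  fraction  = c(0.95, 0.90, 1.00, 0.85,
                0.70, 0.65, 0.75, 0.60,
                0.20, 0.30, 0.25, 0.15)
)

# Arcsine square-root transformation to stabilize the variance of proportions
surv$asin_frac <- asin(sqrt(surv$fraction))

# One-way ANOVA on the transformed fractions
fit <- aov(asin_frac ~ treatment, data = surv)
summary(fit)

# Tukey's HSD for the multiple-comparison correction
TukeyHSD(fit)
```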
Scientists at Utrecht University have discovered how some plants can quickly detect that they are under water when flooded and initiate processes that prevent them from drowning. Floods cause widespread yield losses annually due to the extreme flood sensitivity of most major crops. In a study published in Nature Communications, the researchers demonstrate how plants use the gaseous hormone ethylene as a signal to trigger underwater survival reactions. The identification of this signaling mechanism and the genes involved can potentially pave the way towards stress-resilient, flood-proof crops that can sustain yields even under stressful conditions. Global warming results in the increased incidence of not just drought and heatwaves, but also increased rainfall and higher flood risk. This is a major problem for crops. Just like humans, plants need oxygen to survive, and the lack of it underwater causes them to suffocate. The devastating agricultural impact of floods is therefore just as immense as that of other extreme weather events such as drought. For instance, flooding of a potato field can cause complete yield loss within just 24 hours. It is therefore absolutely essential to understand plant resilience mechanisms in order to progress towards flood-resistant crops. This can make a world of difference with regard to food security and mitigating major economic losses. Survival response "If plants can quickly detect that they are flooded, they can initiate processes that increase their chances of survival. One strategy, for example, is the so-called snorkel response, in which leaves and stems increase upwards growth to emerge from the water and restore contact with air. Another strategy is to suppress growth and metabolism in order to minimize oxygen and energy consumption until the floods recede," says first author Sjon Hartman, plant biologist at the Plant Ecophysiology group at Utrecht University. "We recently demonstrated that the gaseous plant hormone ethylene plays a very important role in this. Ethylene entrapped in the cells of flooded plants serves as a flood signal for the plant." Together with colleagues from Nijmegen, Denmark, the United Kingdom and the U.S., the researchers now report their findings in Nature Communications and describe the series of molecular events that are triggered upon a flood in plants, along with the genes involved. They demonstrate that ethylene accumulation in flooded plants triggers a survival response at an early stage, even before the oxygen levels actually drop. "This is very useful: a plant that goes into survival mode early can last longer under water, and that can make the difference between life and death," says Hartman. Future-proof crops The discovery of this tolerance mechanism holds great potential for the future development of flood-tolerant crops. "Now that we know the genes associated with flooding survival, we can introduce them back into plants that lack them and thus program flooded plants to go into survival mode sooner," say research leaders Dr. Rashmi Sasidharan and Prof. Rens Voesenek of Utrecht University. "This will enable us to make future-proof crops that are better able to withstand flooding."
10.1038/s41467-019-12045-4
Biology
Is the middle Cambrian Brooksella a hexactinellid sponge, trace fossil or pseudofossil?
Morrison R. Nolan et al, Is the middle Cambrian Brooksella a hexactinellid sponge, trace fossil or pseudofossil?, PeerJ (2023). DOI: 10.7717/peerj.14796 Journal information: PeerJ
https://dx.doi.org/10.7717/peerj.14796
https://phys.org/news/2023-02-middle-cambrian-brooksella-hexactinellid-sponge.html
Abstract First described as a medusoid jellyfish, the “star-shaped” Brooksella from the Conasauga shale Lagerstätten, Southeastern USA, was variously reconsidered as algae, feeding traces, gas bubbles, and most recently hexactinellid sponges. In this work, we present new morphological, chemical, and structural data to evaluate its hexactinellid affinities, as well as whether it could be a trace fossil or pseudofossil. External and cross-sectional surfaces, thin sections, X-ray computed tomography (CT) and micro-CT imaging revealed no evidence that Brooksella is a hexactinellid sponge or a trace fossil. Although internally Brooksella contains abundant voids and variously oriented tubes consistent with multiple burrowing or bioeroding organisms, these structures have no relation to Brooksella’s external lobe-like morphology. Furthermore, Brooksella has no pattern of growth comparable to the linear growth of early Paleozoic hexactinellids; rather, its growth is similar to that of syndepositional concretions. Lastly, Brooksella, except for its lobes and occasional central depression, is no different in microstructure from the silica concretions of the Conasauga Formation, strongly indicating it is a morphologically unusual endmember of the silica concretions of the formation. These findings highlight the need for thorough and accurate descriptions in Cambrian paleontology, wherein care must be taken to examine the full range of biotic and abiotic hypotheses for these compelling and unique fossils. Cite this as Nolan MR, Walker SE, Selly T, Schiffbauer J. 2023. Is the middle Cambrian Brooksella a hexactinellid sponge, trace fossil or pseudofossil? PeerJ 11: e14796 Main article text Introduction Sponges that produce siliceous skeletons are the only benthic animals that secrete copious amounts of silica, and they are now recognized as important sinks for biogenic silica and contributors to nutrient cycling in the oceans (Chu et al., 2011; Maldonado et al., 2011; Maldonado et al., 2021). The fossil record and molecular clock history of putative sponges extends back to the late Proterozoic, possibly to 890 million years ago, making them potentially among the first animals on Earth (e.g., keratose demosponges; Turner, 2021), though many fossils of purported Precambrian sponges are subject to significant controversy (Antcliffe, Callow & Brasier, 2014; Botting & Muir, 2018). The earliest fossils of partially biomineralized, siliceous spicules date to the early Cambrian or the latest Ediacaran, indicating either a later evolution of spicule biomineralization or a taphonomic bias against these structural components, which are essential to sponge taxonomy, prior to that time (Sperling et al., 2010; Chang et al., 2019; Tang et al., 2019). Disarticulated and articulated sponge spicules are known from a variety of early-to-middle Cambrian Burgess Shale-type deposits (e.g., the Burgess Shale Lagerstätte of Canada and the earlier Series 2 Sirius Passet Lagerstätte of Greenland; Finks, 2003; Botting & Peel, 2016), though it was not until the middle Cambrian that the taxonomic affinities of these sponges become clearer.
Most of these early and middle Cambrian sponges are preserved as compressions or impressions on shale with abundant spicules (Finks, 2003), except for an enigmatic star-shaped fossil interpreted as a hexactinellid sponge, Brooksella alternata, from the middle Cambrian Conasauga Lagerstätte of the southeastern US (northeastern Alabama, northwestern Georgia; Ciampaglio et al., 2006; Schwimmer & Montante, 2007). Brooksella is considered to have exceptional three-dimensional (3-D) preservation in chert concretions, with radial morphology and numerous lobes (Ciampaglio et al., 2006). However, its identity has generated controversy since its discovery in the late 1800s by Charles Doolittle Walcott (Walcott, 1896; Walcott, 1898). Brooksella was originally described by Walcott in 1896 as a jellyfish with tentacles, an umbrella (bell), and a gastric cavity. However, he also considered whether these medusoid forms were hexactinellid sponges, despite finding no spicules or traces of spicules in his “large number” of thin sections of Brooksella (Walcott, 1898, his p. 21)—although he mentions finding a few hexactinellid-like spicule casts on the outer surface of non-medusoid concretions (Walcott, 1898, his p. 22). Since Walcott’s work, the taxonomic identity of Brooksella has been reevaluated many times (Table S1 and Fig. S1). The most recent reevaluation, by Ciampaglio et al. (2006), suggests that Brooksella is a Protospongia-type reticulosan hexactinellid sponge (though later researchers have suggested that Protospongia specifically, and reticulosans in general, are not hexactinellids; e.g., Botting & Muir, 2018; Page, Butterfield & Harvey, 2009; and J Botting, pers. comm., 2022), an identity Walcott also suggested but later rejected over a century earlier. As a consequence of this new taxonomic assignment, the Conasauga Formation is interpreted to be an exceptional fossil Lagerstätte with fossils preserved by extensive sponge-produced biogenic silica (Schwimmer & Montante, 2007). The question regarding Brooksella’s placement as a sponge, and more specifically a hexactinellid sponge that could have produced enough biogenic silica to preserve an entire middle Cambrian Lagerstätte, might not yet be settled. Ciampaglio et al. (2006) observed that the external surfaces and cross sections of Brooksella had white Protospongia-type spicules: four-rayed spicules of siliceous composition. Even so, others suggest that the presence of such hexactine spicules is not sufficiently diagnostic for hexactinellids (e.g., Botting & Muir, 2018), and Protospongia had calcitic or bimineralic, but not necessarily siliceous, spicules (e.g., Page, Butterfield & Harvey, 2009). This also recalls what Walcott keenly observed: despite the hundreds of specimens he examined, he found no spicules in thin section, and, based on a compression of Brooksella in shale, he favored a jellyfish over a sponge identity (Walcott, 1898, his p. 21–22). Ciampaglio et al. (2006) also noted ostia (incurrent pores), a central canal (spongocoel), radial canals in each of the numerous lobes, and openings at the tips of the lobes into these canals (refer to their Fig. 3). They also inferred that the concave side of Brooksella with the central depression was the top of the specimen, contrary to Walcott’s medusoid interpretation (Ciampaglio et al., 2006, their p. 264). Lastly, Ciampaglio et al.
(2006) synonymized three species of Walcott’s Brooksella and Brooksella-like fossils—Brooksella alternata, Brooksella confusa and Laotira cambria—all of which have variable morphologies, and some of which were associated with annelid traces or trackways (Fig. 1; Walcott, 1898). Figure 1: Brooksella and Brooksella-like fossils synonymized by Ciampaglio et al. (2006) and additional Brooksella-like fossils depicted by Walcott from the Conasauga Formation. (A–D) Brooksella alternata; (E–H) Laotira cambria; (I) annelid trace fossils (Planolites sp.); (J) annelid burrows with Laotira cambria; and (K) Brooksella confusa. Figures from Walcott (1898): (A) plate I, Fig. 1; (B) plate I, Fig. 6; (C) plate II, Fig. 8A; (D) plate IV, Fig. 5; (E) plate V, Fig. 7; (F) plate V, Fig. 6; (G) plate XIII, Fig. 2; (H) plate XIV, Fig. 2; (I) plate XV, Fig. 1; (J) plate XV, Fig. 5; (K) plate III, Fig. 12. DOI: 10.7717/peerj.14796/fig-1 Figure 2: Brooksella and concretion field locality in northeastern Alabama, USA. Green area indicates the Conasauga Formation and is linked to the stratigraphic position of Brooksella (Map data ©2022 Google; biostratigraphic column adapted from Schwimmer & Montante, 2007). Inset shows Weiss Lake, where Brooksella alternata were collected, indicated with a star, ∼34°08′20″N, 85°35′56″W. DOI: 10.7717/peerj.14796/fig-2 Figure 3: Measurements used to examine Brooksella size and morphology. The longest axis of Brooksella (maximum diameter; white line); shortest axis (minimum diameter; blue line); maximum lobe length from base to tip (green line); maximum lobe width (purple line); and central depression diameter (black line). Scale bar = one cm; sample UGA WSL2.AL5 depicted. DOI: 10.7717/peerj.14796/fig-3 To resolve whether Brooksella is a fossil hexactinellid sponge—which would be critical for producing the biogenic silica needed to preserve the Lagerstätte—or a trace or pseudofossil, the following must be addressed: (1) the abundance of Brooksella in the field; (2) its orientation within the sedimentary beds; (3) an evaluation of its putative sponge-like characteristics, such as possessing ostia, a spongocoel, radial canals in the lobes, Protospongia-like spicules on the external surface, spicules on the surface of cross sections, and growth characteristics consistent with known fossil hexactinellids; and (4) whether it has trace fossil characteristics, such as backfilled spreiten and evidence of probing. Herein, we reassess whether Brooksella is a hexactinellid sponge or trace fossil. We also considered whether Brooksella is similar in size and composition to co-occurring concretions, as it may also be a pseudofossil. Taxonomic background Most Brooksella and Brooksella-like fossils were synonymized by Ciampaglio et al. (2006) as one species, Brooksella alternata. Based on superficial appearance, Ciampaglio et al. (2006) synonymized Laotira cambria and Brooksella confusa (Walcott, 1896; Walcott, 1898) with Brooksella alternata, although B. alternata, B. confusa, and L. cambria have different external characteristics (Table S1). Ciampaglio et al. (2006) also assigned ?Brooksella material from the Spence Shale of Utah (Willoughby & Robison, 1979; Robison, 1991) to possibly Brooksella alternata, extending the range of Brooksella into the older Glossopleura Zone in the Wuliuan stage of the lower middle Cambrian.
Additionally, Caster (1942) identified a specimen of Laotira cambria from the Cambrian Furongian Series of Wyoming that was later reassigned to Brooksella cambria (Harrington & Moore, 1956) and is tentatively considered B. alternata by Ciampaglio et al. (2006). Brooksella silurica (von Huene, 1904) includes an Ordovician specimen from Sweden, expanding both Brooksella’s geographical range beyond North America and its temporal range out of the Cambrian Period (Harrington & Moore, 1956). Brooksella canyonensis (Bassler, 1941), found in the Neoproterozoic Grand Canyon Series of Arizona, was reassigned to the trace fossil ?Asterosoma canyonensis (Glaessner, 1969; see also Häntzschel, 1970; Kauffman & Fürsich, 1983), but the assignment as a trace fossil is questioned by Ciampaglio et al. (2006). Ediacaran-aged Brooksella sp. material from the Nasep Member of the Urusis Formation in the Schwarzrand Subgroup of Namibia was interpreted as a probing trace fossil (Crimes & Germs, 1982). Based on these reports, the most common alternative identity for Brooksella is that of a probing, radial trace fossil, like Dactyloidites, but the trace fossil attribution for Brooksella needs reassessment (Muñoz, Mángano & Buatois, 2019). Thus, in addition to reevaluating the hexactinellid interpretation, we also examine Brooksella for trace fossil characteristics, such as backfilled spreiten, central shafts, and sedimentary relationships like probing structures or movement in relation to the sediment (after Muñoz, Mángano & Buatois, 2019). Herein, we refer to Brooksella alternata and its related synonymized species as Brooksella. Geological setting The middle Cambrian Conasauga Formation is a predominantly grey shale unit with limestone interbeds that crops out in several southeastern US states: Alabama, Georgia, Tennessee, and Virginia (Palmer & Holland, 1971; Hasson & Haase, 1988). Formal subdivision of the formation varies by state. In Tennessee, the Conasauga is treated as a group and is divided into six formations, each mainly shale or limestone in composition (Hasson & Haase, 1988). Comparatively, in Georgia and Alabama, division of the Conasauga Formation either follows Tennessee's geologic format (Butts & Gildersleeve, 1948; McLemore & Hurst, 1970), or it is a formation informally divided into lower, middle, and upper portions (Cressler, 1970; Chowns, 1977). The Coosa Valley, northeastern Alabama, is the source of all Brooksella and concretions in our study and was the primary source of Brooksella for Walcott's (1896; 1898) studies. Part of the Appalachian Valley and Ridge Province (Butts, 1926; Cressler, 1970; Thomas, 1985; Osborne, Thomas & Astini, 2000), the Coosa Valley localities are topographically low, with substantial vegetation cover and extensive faulting, and are mostly submerged by the Weiss Lake reservoir, thus limiting fine stratigraphic correlation among localities (see also Ciampaglio et al., 2006). Chert nodules weather out of several shaley stratigraphic units. The chert and Brooksella-bearing layers are found at times associated with lenticular carbonate beefs and polymerid trilobites of the Bolaspidella Zone (Schwimmer, 1989), which provides constraint to the Drumian Stage of the middle Cambrian (504.5 to 500.5 Ma; Cohen et al., 2018). Carbonate nodules also weather out from stratigraphically lower shale units, but not in the units where we collected Brooksella.
The fossils of the Conasauga Formation are comparable in generic richness to those of the Wheeler and Spence Shales of Utah (Schwimmer, 2000), though the degree and quality of preservation are much poorer than in the Wheeler or Spence Shales. Facies interpretations suggest likely deposition in a restricted paleoenvironment (Robison, 1991) that is generally shallower than the Wheeler Shale and other Burgess Shale-type facies (Schwimmer & Montante, 2007). The Conasauga Formation preserves fossils in two forms: flattened organic or ferrous impressions on shales, and 3-D silicified materials on or within chert concretions (Schwimmer & Montante, 2007). The 3-D preservation of some fossils has led to the description of the Coosa Valley localities as Konservat-Lagerstätten (Schwimmer & Montante, 2007). Soft-bodied organisms and structures preserved in the Conasauga Formation include red algae, green algae, priapulids, and nektaspids (Schwimmer & Montante, 2007). Material and Methods Sample collections Brooksella samples (n = 77) come from three sources: existing University of Georgia (UGA) collections from the second author (n = 29), samples donated by Dr. Donald Champagne (n = 27), and additional field collections from the Coosa Valley for this research (n = 21). These samples are currently held at the UGA Department of Geology but will be reposited with the Smithsonian National Museum of Natural History. No permits were required for the described study, which complied with all relevant regulations of the State of Alabama. All samples were collected along the banks of Weiss Lake, Cherokee County, Alabama (Fig. 2). However, collection is limited to the winter months, when the Weiss Lake reservoir water level is lowest and the banks are exposed. In situ Brooksella and concretions were collected, with their locations and positions noted, along six transects arrayed along exposed, in-place (not overturned) shale beds that parallel the lake shore. Additional Brooksella and concretions that were not in situ were collected as float below the transects. For comparison with Brooksella, we additionally examined siliceous concretions from the same localities (n = 98 siliceous concretions from existing UGA collections and from additional field collection) and n = 1 carbonate concretion from another locality. Additionally, images of figured specimens of B. alternata (n = 33), B. confusa (n = 3), and L. cambria (n = 58) from Walcott (1898) were examined to collect size data, orientation of lobes, and number of lobes to compare to our samples; according to Walcott (1898), all images were life size. Brooksella and concretion surficial analysis The surfaces of Brooksella and concretions were observed via optical microscopy before and after cleaning the samples, which had clay, lichen and algae on them. For Brooksella and concretions, we noted the presence or absence of the following surficial features attributed to sponges by Ciampaglio et al. (2006): a central depression (osculum) and small crater-like pores (ostia), as recorded in Table S2. To quantify the size of Brooksella and concretions, digital calipers (accuracy ±0.03 mm) were used to measure the minimum diameter (shortest axis) and maximum diameter (longest axis) (Fig. 3; Tables S2 and S3). As a proxy for general size, we used both the maximum and minimum diameters and the geometric mean of the maximum and minimum diameters (the square root of their product) for statistical applications.
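As a minimal R sketch of the size proxy just described, together with the grand geometric mean, its 95% CI from a one-sample t-test, and the Model II (SMA) regression used in the analyses that follow: all diameter values below are invented for illustration, and the hand-computed SMA slope is assumed to match the estimate that the lmodel2 package used in the paper would report.

```r
# Hypothetical maximum and minimum diameters (mm) for five specimens
max_d <- c(52.1, 44.3, 61.0, 39.8, 47.5)
min_d <- c(38.0, 35.2, 41.7, 30.1, 36.4)

# Size proxy: geometric mean of the two diameters
gm <- sqrt(max_d * min_d)

# Grand geometric mean with its 95% CI from a one-sample t-test
mean(gm)
t.test(gm)$conf.int

# Model II standard major axis (SMA) slope and intercept, computed by hand;
# assumed equivalent to lmodel2::lmodel2(min_d ~ max_d)'s SMA estimate
r         <- cor(max_d, min_d)
slope_sma <- sign(r) * sd(min_d) / sd(max_d)
int_sma   <- mean(min_d) - slope_sma * mean(max_d)
c(slope = slope_sma, intercept = int_sma, r = r)
```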
Because lobes are the main diagnostic character of Brooksella and purportedly house the internal radial canals of the sponge, we first noted, for each specimen, whether the lobes occurred on the top surface, the bottom surface, or both. We also counted the number of lobes per surface and measured the largest lobe's length and width with digital calipers. The lobe length and width measurements were converted to geometric means to compare to the size of Brooksella. Lastly, images of B. alternata, B. confusa and L. cambria from Walcott (1898) were measured with digital calipers for maximum and minimum diameter. For analysis, the data from the three species were pooled as Walcott's Brooksella to compare to our Brooksella and concretions. Further, the number of lobes was counted and, where possible, their occurrence on one or both surfaces was also noted. Central depressions were not always depicted and therefore were not measured; lobe width and length were also not measured from these images, as it was often not possible to determine their dimensions on all specimens. These Brooksella are referred to as “Walcott's Brooksella” to distinguish them from our own collections. To compare the maximum and minimum diameters among our Brooksella, concretions, and Walcott's Brooksella, the measurements were converted to a geometric mean and grand geometric mean and plotted with their 95% confidence intervals (95% CIs were from a one-sample t-test for each type; R Core Team, 2021). The relationship between maximum and minimum diameter (without the geometric mean) among Brooksella, concretions and Walcott's Brooksella was examined using Model II standard major axis regressions (SMA) with 95% CIs for the slope. These were calculated and plotted in R (Legendre, 2018; R Core Team, 2021; package lmodel2). Model II regressions were used because the two variables measured were not controlled by the researcher, unlike in a Model I regression (Legendre, 2018). The null hypothesis for this test was that there was no difference in the relationship between maximum and minimum diameter among the three sample types. Top lobe frequency of occurrence was examined by size class for our Brooksella and Walcott's Brooksella to determine in which size class or classes the lobes most commonly occur. A generalized linear model (GLM) with a quasi-Poisson error structure for overdispersed lobe-count data was used to determine whether the number of lobes increases as the size of Brooksella increases, for both our samples and those of Walcott's Brooksella (R Core Team, 2021). A Model II SMA regression was used to examine the strength of the relationship between the geometric mean size of the largest lobe and the geometric mean size in our Brooksella and Walcott's Brooksella; correlation coefficients were determined using the cor.test function in R (Legendre, 2018; package lmodel2; R Core Team, 2021). Brooksella and concretion internal structure Internal analysis of Brooksella and concretions was conducted using three methods. First, we cross-sectioned eleven Brooksella and two concretions (silica and carbonate) to try to locate the central cavity (spongocoel), radial canals and white spicules that Ciampaglio et al. (2006) reported from the cut surface area. Second, eleven Brooksella, two siliceous concretions and one carbonate concretion were polished and made into petrographic thin sections to examine their composition and to determine whether a spongocoel, radial canals, ostial chambers, and an external thin spicular wall were present.
The thin sections were prepared by Vancouver Petrographics Ltd, British Columbia, Canada. Lastly, to visualize any internal features, including a spongocoel, radial canals, or trace fossil characteristics, Brooksella (n = 21) and concretions (n = 6) were scanned with the UGA College of Veterinary Medicine's computed tomography (CT) scanner (a Siemens Sensation 64-slice unit; scans were collected at 120 kVp, with a tube current of 190 mA, a slice thickness of 0.6 mm, and a B80s (sharp/bone) convolution kernel). Additionally, two Brooksella and two silica concretions from this set were scanned at higher resolution using a Zeiss Xradia 510 Versa μCT microscope at the University of Missouri X-ray Microanalysis Core Facility. Micro-CT scans were collected at 80 kV, 7 W, 2001 projections, 2–7 s of exposure, optical magnification of 0.4×, 360 degrees of rotation, a Zeiss LE6 filter, and a pixel size of 50.3–58.4 µm. The CT and μCT image stacks are available as supplemental data on MorphoSource, as Project ID: 000436718, Brooksella and silica concretions. Brooksella and concretion compositional analysis To determine and compare bulk compositions between Brooksella and concretions, portions of two Brooksella and two siliceous concretions were powdered via ball mill and scanned with a Bruker D8 Advance X-ray powder diffractometer (XRD) at UGA. To examine the elemental composition of specific internal structures, petrographic thin sections from Brooksella and siliceous concretions (n = 2 each) were carbon coated and analyzed using a JEOL 8600 electron microprobe (EPMA) at the UGA Department of Geology. Backscattered electron images and energy dispersive X-ray (EDS) maps were processed with the Bruker Quantax analysis system. Results Field abundance and orientation of Brooksella and concretions in the Conasauga Shale Brooksella were rare in the shale outcrops at Weiss Lake: in situ Brooksella occurred at a frequency of only 0.10 across all transects combined (Table 1). Many more Brooksella and concretions were found as float below the transects, but float Brooksella still occurred at a lower frequency than collected float concretions (Table 1).
Table 1: Field occurrence of in situ and float Brooksella and concretions from six transects totaling 75.2 m in length.
In situ Brooksella: 2 (frequency 0.10; 0.02 per meter)
In situ concretions: 18 (frequency 0.90; 0.24 per meter)
Brooksella float: 13 (frequency 0.25; 0.17 per meter)
Concretion float: 39 (frequency 0.75; 0.52 per meter)
DOI: 10.7717/peerj.14796/table-1
In situ Brooksella were oriented in the shale with the stellate lobes on the concave surface facing downward into the sediment; Brooksella also appeared to deform the shale laminae (Fig. 4A). Concretions also had their more concave side oriented downward into the sediment, and they likewise deformed the shale laminae around them (Fig. 4B). The Brooksella removed from the shale depicted in Fig. 4A appeared twinned (Fig. 4C). Brooksella and concretions co-occurred as siliceous cobbles on the shoreline of Weiss Lake at our locality (Fig. 4D). Figure 4: In situ Brooksella and concretions from the Weiss Lake locality. (A) Sediment layers below the specimen are deformed around Brooksella (left arrow); lobes of Brooksella are oriented downward into the sediment (right arrow). (B) In situ concretion in shale with its most convex side downward (arrow); it also deforms the shale layers around it. (C) Brooksella depicted in A but now oriented upward (arrow).
(D) Float Brooksella and concretions. Centimeter ruler for scale; Brooksella samples: UGA 1, 2, 8, and 5. DOI: 10.7717/peerj.14796/fig-4 Figure 5: Morphological diversity in Brooksella alternata and concretions from the Weiss Lake locality. Brooksella shapes are variable: typical Brooksella have approximately six lobes (A, B); twinned Brooksella can also occur (C); others can have multiple indistinct lobes (D) or lobes that are completely embedded in a concretion (E). Concretions (F–K) also vary in shape, but are mostly round to oblong, and many have fossil fragments or whole trilobites embedded in them. Scale bars = one cm. Brooksella figured: (A) UGA 1; (B) UGA WSL2.AL2; (C) UGA WSL2.AL16; (D) UGA WSL2.AL4; (E) UGA LSV1.AL2; concretions figured: (F) UGA 40; (G) UGA 69; (H) UGA 25; (I) UGA 73; (J) UGA 136; (K) UGA 22. DOI: 10.7717/peerj.14796/fig-5 Figure 6: Frequency of occurrence of lobes on top and bottom surfaces of Brooksella (n = 71). Based on field orientation, the top surface (with top lobes) faces downward into the sediment and the bottom surface (with bottom lobes) faces upward. (A) Presence/absence frequency of lobes on top and bottom surfaces. Histograms of the number of lobes on the top surface (B) and bottom surface (C). DOI: 10.7717/peerj.14796/fig-6 Figure 7: Lichens attach to and bioerode the surface of Brooksella. (A) In situ lichen; (B) close-up of lichens; (C) same image as (B), but with the lichen removed, revealing a bioerosion pit (arrow); (D) surface view of bioerosion pits (arrows) made by lichens on a Brooksella surface after the lichens were removed. Scale bars: (A) = one cm; (B–D) = one mm. DOI: 10.7717/peerj.14796/fig-7 External morphology of Brooksella and concretions The external morphology of Brooksella was variable, in both the number of lobes and whether the central depression was present. A typical Brooksella had well-defined lobes and a central depression (Fig. 5A), which is referred to as the top surface of Brooksella by Ciampaglio et al. (2006) and the bottom surface of a jellyfish by Walcott (1898); we refer to it as the top surface to be consistent with Ciampaglio et al. (2006), although this side faces downward into the sediment. Only 38% of Brooksella had a central depression, while some (n = 5, or 6.5% of all specimens) had a central protuberance (Fig. 5B). The remaining 55.5% had no discernible central depression or protuberance (Fig. 5C). While Brooksella are usually depicted as having lobes extending to the margins of the specimen (Figs. 5A–5B), they do not always have this feature (Figs. 5C–5E). Some specimens (n = 5) display multiple individual sets of lobes, although the second set of lobes is usually indistinct (Fig. 5C). We did not observe spicules on the external surfaces of Brooksella. Concretions from the Conasauga also display variable morphology (Figs. 5F–5K); some have visible trilobites or trilobite fragments on their surfaces (Fig. 5I). Lobes are most common on the top surface of Brooksella, which is oriented downward into the sediment, and least common on the bottom surface, which is oriented upward in the sediment (Fig. 6). Ninety-four percent of Brooksella have top surface lobes while 55% have bottom surface lobes (Fig. 6A), and half of the Brooksella have lobes on both sides (n = 35, 0.49 frequency).
Five and six lobes are the most common counts on top surfaces, ranging from a few specimens with no lobes to one specimen with 15 lobes (Fig. 6B). Having no lobes was most common on the bottom surface, followed by five lobes, with a maximum of 12 lobes (Fig. 6C). Importantly, none of the lobes had openings at their ends that would indicate a radial canal opening. Pits on the surface of Brooksella The surfaces of Brooksella are host to lichen colonies, which can be abundant (Fig. 7A). The lichen can be peeled off the surface, revealing small round indentations approximately 0.05 mm in diameter (Figs. 7B–7D). Concretion surfaces had similar lichen and algal colonies. Size relationships of Brooksella and concretions Overall size Based on the geometric mean, concretions were more variable in size and generally larger than either our Brooksella or Walcott's Brooksella (Fig. 8A). Generally, the size distribution of Brooksella overlaps with the smaller sizes of the concretions (i.e., below the median for concretions). However, our Brooksella are larger than Walcott's Brooksella. Concretions had a slightly larger grand geometric mean size (48.92 mm) than Brooksella (42.22 mm), but both were much larger than the grand geometric mean for Walcott's Brooksella (33.82 mm; Fig. 8B). There was a significant difference among all the specimen types for the grand geometric mean, as none of the 95% CIs overlapped (Fig. 8B). Model II regressions indicate that maximum and minimum diameter had positive relationships among the specimen types, and the correlation tests indicated that they were moderately to well correlated (Figs. 9A–9C). Walcott's figured samples were highly correlated, and the Model II regression slope explained 89% of the variation (Fig. 9B). However, maximum and minimum diameters were only moderately correlated for our Brooksella and concretions; the regression slopes explained only about half of the variation (57% and 52%, respectively; Figs. 9A and 9C). Figure 8: Geometric mean (square root of maximum diameter × minimum diameter) and grand geometric mean size comparison among Brooksella, concretions, and Walcott's Brooksella. (A) Boxplots of geometric mean. (B) Barplot of grand geometric mean with 95% CI error bars. Specimen type key: B = Brooksella, C = concretions, and W = Walcott's figured Brooksella. DOI: 10.7717/peerj.14796/fig-8 Figure 9: Model II standard major axis (SMA) regressions between maximum and minimum diameter for Brooksella, Walcott's Brooksella, and concretions. 95% CIs for the slope are depicted as grey lines around the slope (red line). DOI: 10.7717/peerj.14796/fig-9 Number of top lobes in relation to Brooksella size Top lobe occurrence in relation to size class, based on the geometric mean, differed between our Brooksella and Walcott's (Fig. 10). Top lobes occurred most frequently on our Brooksella that were 40 to 50 mm in size (size class 5; Fig. 10A), while for Walcott's Brooksella they occurred most frequently on specimens that were 20 to 40 mm in size (size classes 3 and 4; Fig. 10B). In general, the number of top lobes barely increased with size for both our Brooksella and Walcott's specimens; the generalized linear model regression slope was essentially flat (Fig. 11).
Moreover, although it appears that as Brooksella gets larger its largest lobe also increases in size, the regression explained only 11% of the variation and the correlation coefficient was low (r = 0.34), indicating that there was no meaningful relationship between largest top lobe size and overall Brooksella size (Fig. 12). Figure 10: Top lobe frequency of occurrence by size class for Brooksella (A) and Walcott's Brooksella (B). Size is based on the geometric mean. DOI: 10.7717/peerj.14796/fig-10 Figure 11: Generalized linear regression between number of top lobes and geometric mean size in Brooksella and Walcott's Brooksella. DOI: 10.7717/peerj.14796/fig-11 Internal structure and composition of cross-sectioned Brooksella and concretions Cross-sectioned Brooksella and concretions have oxidized weathering rinds (∼2 mm thick); they also have similar internal structures, similar textural variability, and occasional root bioerosion (Fig. 13). Internal color is variable, including grey (Fig. 13A), dark grey and black (Fig. 13B), and lighter grey-brown (Figs. 13C–13D). There were no typical internal concentric bands of differing color in either specimen type, and no indication of encapsulating sediment laminations from the surrounding shale. Figure 12: Model II SMA regression for geometric mean size of the largest lobe in relation to geometric mean size in Brooksella. Slope (red line) is depicted with 95% CIs (grey lines). DOI: 10.7717/peerj.14796/fig-12 Figure 13: Cross-sectioned concretions (A–B) and Brooksella (C–D) showing iron-oxide weathering rinds and internal surface structures. (A) Concretion dissected by root bioerosion (upper arrow) and marked by voids (lower arrow) that appear white in photographs; (B) concretion with weathering rind and variable internal coloration that is not concentric in form, with white-appearing voids (arrow); (C) Brooksella that was affected by roots, which formed an oxidized hole in the center (arrow) of the left cross-section; internal composition is variable, with numerous voids and tubes that appear white in photographs but are not spicules (arrow, right cross-section); (D) Brooksella with nearly homogenous internal texture, with voids and tubes (arrow). Scale bar: one cm. Figured specimens: (A) UGA 126; (B) UGA 156; (C) UGA WSL2.AL21; (D) UGA WSL2.AL1. DOI: 10.7717/peerj.14796/fig-13 Sponge-like characters are not evident for either Brooksella or concretions on the cross-sectioned sample surfaces. Rather, both Brooksella and concretions have what appear at first to be white spots on the surfaces of the cross sections, but upon closer inspection under a microscope these are round voids and tube-like structures (Figs. 13A–13D), not white spicules. None of the Brooksella or concretions have a visible hexactinellid sponge-spicule framework near the outer wall, as would be indicative of protospongiids. Importantly, none of the concretions (Figs. 13A–13B) or Brooksella (Figs. 13C–13D) have what could be defined as an internal spongocoel, nor do they have radial or lateral canals. Additionally, there are no radiating spreiten. The small voids and tubes ranged from spherical to irregular in shape and can be unlined, lined, or partly filled with red and yellow iron oxides and clays (Figs. 14A–14C). Framboidal pyrite is present in some voids (Fig. 14D). There are also curved structures (Figs.
14A–14C), which were often trilobite exoskeletal fragments rich in Ca, Al, and P, or were replaced by patchy silica indistinguishable from the surrounding material; other structures were indeterminate but were not spicular skeletal fragments. Figure 14: Internal structures in cross-sectioned Brooksella (A) and a concretion (B), and petrographic thin sections (C–D). (A) Brooksella with weathering rind (white arrow), a large root trace (red arrow) and a curved structure, which is a trilobite fragment (black arrow). (B) Concretion with weathering rind (white arrow), trilobite fragments (black arrow) and a dark grey center portion of variable shape (orange arrows). (C) Trilobite fragment in a Brooksella thin section (blue arrow) and a diagenetic void (green arrow). (D) Thin section of a tube within the weathering rind of Brooksella with a framboidal pyrite lining (yellow arrows). Scale bars: (A–B) one cm; (C) one mm; (D) 0.2 mm. Figured specimens: (A) UGA 2; (B) UGA 27; (C) UGA 54; (D) UGA WSL2.AL1. DOI: 10.7717/peerj.14796/fig-14 CT scans of Brooksella and concretions CT scans revealed that both Brooksella and concretions have, in general, internal hollow tubes with random orientations and randomly distributed dense spheres ∼2 mm in diameter (Figs. 15A–15O). Only two of the 12 CT-scanned Brooksella had what appeared to be a low-density region in a somewhat stellate shape, but these do not match the location of the lobes (Figs. 15A and 15F); the rest had either cross-sections of low-density regions that appear to be voids or cross-sections of tubes (Figs. 15B, 15I and 15K–15L), or irregular low-density regions, reminiscent of burrows, throughout the matrix (Figs. 15C–15E, 15G and 15J). Some of these tubes are likely mineralized, as represented by the high-density regions within the filled tubes, voids or burrow-like structures (Figs. 15B, 15D–15E, 15G and 15K–15L). Concretions (Figs. 15M–15O) had similar features, with low-density burrow-like structures, some of which were filled with high-density minerals (Figs. 15N–15O). Figure 15: CT scans of Brooksella viewed from the top surface (A–L) and concretions (M–O). Green indicates external morphology, blue indicates low-density mineral phases and voids, and yellow indicates higher-density mineral phases. Scale bar = one cm. Figured Brooksella samples: (A) UGA 1; (B) UGA 3; (C) UGA 6; (D) UGA WSL2.AL1; (E) UGA 55; (F) UGA LSV1.AL2; (G) UGA WSL2.AL2; (H) UGA 98; (I) UGA WSL2.AL12; (J) UGA 17; (K) UGA 155; (L) UGA WSL2.AL21; figured concretion samples: (M) UGA 103; (N) UGA 56; (O) UGA 60. DOI: 10.7717/peerj.14796/fig-15 As viewed in high-resolution µCT scans, both Brooksella and concretions (Figs. 16A–16P) had extensive internal features defined by mineral phases denser and less dense than the surrounding silica matrix. These features include isolated void-like structures, isolated tubes or burrow-like structures, and fossil fragments. The µCT transmittance values indicate that these structures are represented by low-density mineral phases rather than void space, as compared to the air surrounding the specimen. Several of these tubes have vertical components. Notably, none of the burrow- or tube-like structures occur in the center of the specimen, as would be consistent with a spongocoel or central shaft, nor are they in alignment with the lobes. Figure 16: MicroCT reconstructions of the internal structures within two Brooksella (A–H) and two concretions (I–P).
External (A) and internal (B–D) morphology of a six-lobed Brooksella; external (E) and internal (F–H) morphology of a 14-lobed Brooksella; and external morphology of two concretions (I, M) and their internal morphology (J–L and N–P, respectively). The first column represents external morphology, either as a photograph (A, E) or a 3-D rendering (I, M); the second column represents a 3-D reconstruction with the matrix faded to highlight the internal structures (blue represents regions of low density; yellow represents regions of higher density); the third column represents 3-D reconstructions of the side (profile) view of the specimens; the fourth column represents a composite of all the internal features from the serial scans through the specimen. Scale bars = one cm. Figured Brooksella: (A–D) UGA 1; (E–H) UGA WSL2.AL11. Figured concretions: (I–L) UGA 93; (M–P) UGA 107. DOI: 10.7717/peerj.14796/fig-16 Mineral composition of the groundmass and internal structures of Brooksella and concretions X-ray diffractograms of Brooksella and siliceous concretions revealed no differences in mineral composition. Both have a composition that is primarily silica with minor calcite, likely occurring as fine cements, interstitial crystals, or biotic hardparts (Fig. S2). Electron microprobe analysis of two Brooksella specimens corroborated the XRD results, with aluminous silica as the dominant mineralogy, but also revealed additional structures and mineral compositions not observed in XRD (Fig. 17). These internal structures include: large voids that are partly filled with iron oxides and aluminosilicates (Figs. 17A–17B); small tubes in the weathering rind lined with framboidal pyrite (Figs. 17C–17D); barite crystals surrounded by microscopic voids (Figs. 17E–17F); round voids lined with barite crystals (Figs. 17G–17H); and structures ranging from cross-shaped voids, perhaps irregular ghosts of stauracts composed primarily of void space (Figs. 17I–17J), to linear structures made mostly of iron-rich mineral phases with no diagnostic original silica (Figs. 17K–17L). The cross-shaped structures are very rare in petrographic thin section (approximately one per thin section). Trilobite fragments are more common (up to eight per thin section, though counts vary); brachiopod fragments were also rare. Elongate tubes and round voids were very common, with nearly 90 per thin section in both Brooksella and concretions. Figure 17: Electron microprobe images of internal structures of two Brooksella. Partial void with aluminum and iron oxides (A–B); tubular void that outlets to an external surface, lined with framboidal pyrite (C–D); partial void with barite infilling (E–F); void with crystalline barite rim (G–H); cross-shaped void space lacking skeletal hard parts (I–J); linear structures resembling (I), but with partial iron sulfide composition (K–L). In (A–C, E) and (G–K), void space is black; in (D), void space is dark green. Figured Brooksella: (A–D) and (I–J), UGA 119; (E–H) and (K–L), UGA WSL2.AL21. DOI: 10.7717/peerj.14796/fig-17 Siliceous concretions had an aluminous silica groundmass composition like that of Brooksella (Figs. 18A, 18D–18E and 18I; Fig. S2 and Fig. S3). Trilobite fragments and linear void structures present in Brooksella were also found in the concretions (Figs. 18B–18E and 18H). These include Al-, Ca-, and P-rich skeletal fragments (Figs. 18B–18D), pyrite and Ba-rich inclusions (Fig.
18E), and voids defined by a lack of silica (Fig. 18H). The weathering rinds of the concretions are richer in aluminum than the interiors of the specimens (Fig. 18I). Partially lined voids are also present in the siliceous concretions, with iron oxide (Figs. 18F–18G), calcite (Fig. 18J), pyrite (Fig. 18K), and argillite (Figs. 18I and 18K) linings. Pyrite and titanium oxide-based inclusions are also found in the carbonate concretion (Fig. 18L). Figure 18: Element and backscatter electron maps of internal features of two siliceous concretions and one carbonate concretion. (A) Partial void and aluminous silica composition of the groundmass. (B–D) Solid curved feature rich in phosphorous and calcium but depleted in silica. (E) Inclusion containing pyrite. (F–G) Surface-outletting tube partly lined with iron oxides. (H) Linear feature defined by void space and silica. (I) Voids along the surface weathering rind. (J) Partial void filled with carbonate. (K) Tube that outlets to the surface, partly lined with pyrite and clays. (L) Pyrite inclusions in a carbonate concretion, UGA 157. (A–G) are from sample 27 and (I–K) are from sample 126. Energy dispersive X-ray spectra of selected features in this figure are presented in Fig. S3. DOI: 10.7717/peerj.14796/fig-18 Discussion Orientation and occurrence of Brooksella in Conasauga shale beds If Brooksella is a hexactinellid sponge, it is very rare in the shale beds compared to concretions, and its orientation indicates that the central depression (previously interpreted as the osculum) and lobes mostly face downward into the sediment, either as a once-living sponge or oriented in that position after death. Further, in situ concretions adjacent to Brooksella in the same bed are generally oriented with their more convex portion upward, similar to Brooksella. Both appear to deform the laminae around them. The shale beds were not overturned in this region, so these orientations represent how the specimens were preserved or formed. Ciampaglio et al. (2006) suggested that the convex side (the bottom of the cup-shaped Brooksella) is oriented downward in the sediment and that the concave side with the central depression points upward, suggestive of a feeding mode for the sponge. They stated that their orientation was the opposite of Walcott's (1898), who had his medusoid Brooksella oriented with its lobes downward in the sediment and the smooth top part of the bell oriented upward. While we do not agree that Brooksella is a medusoid, we do agree with Walcott's interpretation of Brooksella's orientation, with the lobes pointing downward in the sediment, as corroborated by their field orientation in the shales. While the entire body of a sponge can act as a filter (Kowalke, 2000), having Brooksella's lobes and central depression (the putative osculum) facing downward into the sediment permits neither feeding nor efficient water flow through the putative oscula and radial chambers, especially in clay-dominated environments. Increased clay particles decrease filtration efficiency for hexactinellids that live oriented above the sediment-water interface (Kowalke, 2000), and all known living hexactinellid sponges usually live rooted in the sediment, with their filtering structures above the sediment-water interface (Hooper & Van Soest, 2002). Therefore, the orientation of Brooksella, seemingly upside down in the sediments, calls into question whether it is a sponge. Is Brooksella a hexactinellid sponge?
External and internal sponge characteristics reexamined Ciampaglio et al. (2006) cite that Brooksella is exceptionally preserved in 3-D as a cup-shaped fossil in profile. They also cite the presence of cross-shaped siliceous spicules on the outer surface, which are characteristic of the hexactinellid family Protospongiidae, to which they assigned Brooksella. They also observed the following as evidence for a sponge affinity for Brooksella: white spicules on the cross-sectioned polished surface; crater-like ostia on the outer surface; chamber openings at the lobe tips; internal radial canals in each lobe; and a spongocoel. However, we found no stauractin siliceous spicules on the outer surface of Brooksella and no white spicules on the cross-sectional surface. Rather, the white-appearing structures are actually round voids and tubes, not sponge spicules. We did find at least one cross-shaped tube-like structure in some of our petrographic thin sections, but these cannot reliably be assigned to stauractines as they are poorly preserved (Figs. 17I–17J). Walcott examined many thin sections of Brooksella, failed to find any evidence of spicules, and suggested that, if spicules had been there, they were destroyed during fossilization (Walcott, 1898, p. 21). However, he did mention that casts of spicules occur on a few nodules, but he does not explicitly state what shape the casts are or whether they were found on Brooksella. Importantly, both our CT and μCT data indicate that Brooksella have a dense outer region, corresponding to an iron-oxide aluminous weathering rind. These scans do not show arranged spicules in this outer surface, as would be present in protospongiids. Such a loose framework could be obscured by diagenetic processes, but there were also no spicules deeper within the specimens, where they would likely be better preserved. We also could not find any crater-like ostia on the outer surface of Brooksella. Instead, we found lichen growing on the surfaces, and when the lichen were removed, they left small, round, nearly microscopic bioeroded pits, which could possibly be mistaken for ostia. These surface lichen pits were not connected to any internal chambers based on our thin-section, CT, and μCT analyses. The lobes of our Brooksella did not have terminal openings. There were also no radial canals attached to such openings that connected to a central depression, and no internal lumen consistent with a spongocoel. Walcott's images rarely depict a Brooksella with putative radial canals (refer to Walcott's Brooksella images reprinted in Fig. 1D), and those that he thought had them at the tips of the lobes could represent taphonomic effects (Figs. 1B–1C). He noted that “not one in a hundred of the fossil specimens” had any structure within the bodies, except for some samples from one site, which he does not describe. However, darkened regions within Brooksella and concretions can occur, though not always, and these regions vary in size and shape depending on which serial cross-section is examined. None of these inner darker regions penetrated into the lobes or appeared to form a spongocoel that connected to the lobes or central depression (Figs. 14A–14B). Further, no distinct radial lobes were seen in composite 3-D reconstructions of Brooksella or concretions from CT and μCT scans (refer to Figs. 15 and 16). That is, no internal structures appear to represent a central cavity like a spongocoel with radial canals emanating from a central region.
Rather, both Brooksella and concretions appear to have randomly oriented internal burrow- and tube-like structures and mineralized fossil fragments. Additionally, had radial canals corresponding to the lobes, as described by Ciampaglio et al. (2006), been present, this would have been inconsistent with the proposed protospongiid identity, as protospongiids have thin walls and lack internal structures like radial canals or chambers (Botting & Muir, 2018). Our Brooksella and silica concretions were found to commonly contain round voids and what we refer to as tubes, as we do not know for certain how these structures formed (Figs. 13, 14, 15 and 16). Some larger round voids and tubes are most likely bioerosion from tree roots, and these often have an iron-oxide rind and infill (Figs. 13A and 13C), but others were much smaller (Figs. 13B and 13D). These smaller tubes can have vertical and horizontal orientations within Brooksella and concretions and can vary in width and shape (Figs. 16D and 16H). Voids can be parts of tubes cut in half during thin-section and μCT analyses. We speculate that these smaller structures were likely formed by bioerosion (straight-edged tube walls) or burrowing (diffuse tube walls; Figs. 16D, 16H, 16L and 16P). In Walcott (1898, p. 12), Professor Iddings, who examined thin sections of Brooksella, also noted “numerous gas pores” as part of the siliceous nodule composition, but neither Walcott nor Iddings considered those structures further. No fossil sponges, whether hexactinellid or not, are reported to have these tubes and voids. The voids and tubes can be lined with framboidal pyrite, barite, calcium carbonate, or clay (Figs. 17C–17G; Figs. 18J–18K). Framboidal pyrite is reported from algal borings in Ordovician brachiopods (Kobluk & Risk, 1977), which suggests early diagenesis just below the sediment-water interface in the bacterial sulfate reduction zone. Similarly, barite can be an early diagenetic mineral, which can form in the early stages of concretionary growth (Bojanowski et al., 2019). Early diagenesis is suggested because barite dissolves if sulfate is reduced during deep burial and if it is not protected within a microcrystalline concretion (Bojanowski et al., 2019). Calcium carbonate infilling of tubes may originate from partial dissolution of trilobite and other carbonate fossil fragments within Brooksella and concretions, or from later diagenetic fluids. Size relationship between Brooksella and concretions There was a significant difference in grand geometric mean size among our Brooksella, our concretions, and Walcott's Brooksella. The mean size of our concretions was slightly larger than that of our Brooksella, but both were much larger than Walcott's Brooksella, suggesting that his samples were likely picked from a particular size range so they could be shown at natural size for comparison in his 1898 monograph. Overall, the maximum size constraints on the growth of Brooksella and of concretions are different. Nevertheless, Model II regressions indicate that the size relationships of our Brooksella and the concretions did not differ, and that maximum and minimum diameters among Brooksella, concretions and Walcott's Brooksella were moderately to well correlated.
While Walcott's Brooksella were highly correlated (r = 0.94), with 89% of the variation explained by the slope, our Brooksella and concretions were only moderately correlated (r = 0.57 and r = 0.52, respectively), with only about half of the variation explained by the Model II regression slope. This finding indicates not only that our Brooksella were much more variable in diameter than those depicted in Walcott's 1898 monograph, but also that our Brooksella and concretions were both variable in shape and grew similarly, although concretions can grow to a larger size. Hexactinellid sponges exhibit age-related patterns of growth, displaying either linear growth throughout life or linear growth until a plateau is reached (Leys & Lauzon, 1998; Botting, 2003). While growth in Brooksella appears somewhat linear, it was no different from concretionary growth, and half the variation was left unexplained by the Model II regression slopes for both specimen types. Additionally, there was no trend or correlation between maximum lobe size and overall body size in Brooksella; thus lobes do not grow larger as body size increases. Further, the number of lobes did not demonstrably increase with size, given the range in the number of lobes that Brooksella can have. Therefore, these results are not consistent with the general pattern of hexactinellid growth. Given the observed differences from expected sponge characteristics, and the composition and microstructure that Brooksella shares with concretions, we do not accept the hexactinellid sponge identity.

Non-sponge interpretations of Brooksella

Trace fossil affinities

Brooksella has been attributed to several different trace fossils, usually as a probing-style feeding burrow. Fürsich & Kennedy (1975) postulated that Brooksella represented the trace fossil Dactyloidites, a view echoed by Rindsberg (2000). This identity is consistent with the general shape and orientation of many Brooksella samples, but Brooksella lacks the central tube and spreiten of Dactyloidites. Furthermore, radial probing actions fail to explain the tubular features observed within Brooksella. Similarly, Brooksella has been referred to Asterosoma, an ichnogenus of probing burrows (Seilacher, 2007). Certain types of Asterosoma display radial lobes, although these lobes are clearly distinct from those of Brooksella in their fusiform shape, often branching arrangement, and surficial cracking. The earliest Asterosoma are known from the Devonian, in sandstone. They have backfilled lobes, are oriented stratigraphically with the convex sides of the lobes upwards, and have central connecting tubes, all in contrast to the shale-hosted, non-backfilled, stratigraphically downward-oriented lobes with no central connecting tubes in Brooksella. Gyrophyllites, another radially lobed feeding burrow characterized by backfill and a central tube, is a further possible identity for Brooksella, suggested by Seilacher (2007). Gyrophyllites includes both upward and downward probing, so the concave face can be oriented in either direction. These ichnofossils, however, typically occur as impressions rather than in positive relief like Brooksella, which lacks discernible backfilling inside the lobes. Schwimmer, Frazier & Montante (2012) suggested that Brooksella was a coprolite. However, the middle Cambrian age of Brooksella rules out the existence of animals large enough to have produced such large feces.
Further, Brooksella specimens lack fecal pellets, and the interiors of Brooksella lack the directional orientation of similar materials in coprolites. Other than superficial resemblance, Brooksella's internal and external morphology does not match any previously described trace fossil.

Pseudofossil affinities

Proposed identities for Brooksella have not been limited to those of biological origin. Through dewatering or other pressure-imbalance processes, sand or other sediments can rise to the sediment surface, producing a "sand volcano", which can be preserved as the pseudofossil Astropolithon (Seilacher, 2007). These features can take on lobate forms similar to Brooksella because remnant surficial biofilms could hold the erupted sands together long enough for lithification to occur. Brooksella canyonensis was first described as a cnidarian before being reevaluated as a pseudofossil produced in this manner, but this mechanism of fluid escape is unlikely to have produced Brooksella alternata. Fluid escape structures produce lobes that are oriented with their convex sides stratigraphically upwards, whereas the lobes of Brooksella are mostly oriented stratigraphically downwards, and lobes can occur on both sides in nearly half the specimens. Additionally, Astropolithon-type structures typically form via repeated eruption from the same radial cracks, producing an upward-growing series of sediment layers. Brooksella lacks the horizontal layers that such a mechanism would produce. Brooksella also lacks a central vertical tubular feature, and it is compositionally different from the surrounding sediments. Similarly, gas rising from dewatering sediments has been cited as a possible mechanism for Brooksella formation. While this origin could account for the differing lithology, as silica could precipitate where the gas bubbles resided, and could possibly explain the tubular features and voids, it does not account for the complex, lobate form of Brooksella.

Concretion affinities

Both Brooksella and co-occurring siliceous concretions have similar shapes, remnant skeletal fossil components, weathering rinds, and internal composition; the only feature that concretions lack is lobes. In fact, Brooksella is recognized by the presence of at least two lobes, given Walcott's descriptions and our specimens (refer to Fig. 10). Concretions can overlap the size range of Brooksella, but their grand geometric mean size is significantly larger than that of Brooksella, suggesting a limit to Brooksella's size. As Walcott (1898) observed, we also found that the composition of Brooksella is primarily silica with minor amounts of calcium carbonate, identical to the concretions. The composition of the tube- and void-infilling barite and framboidal pyrite indicates that the silica-rich Brooksella and concretions likely formed during early diagenesis. In both Brooksella and the silica concretions there was a lack of concentric zoning, which Professor Hayes also recognized for Walcott's samples (Walcott, 1898, p. 12). Professor Hayes also noticed that some Brooksella had "parallel mica scales", which he surmised were part of the shale laminations (Walcott, 1898, p. 13), suggestive of replacive growth as in carbonate concretions (after Gaines & Vorhies, 2016). However, we found no interior sedimentary layers or mica inside Brooksella or concretions, but shale laminations were deformed around both.
Thus, we would argue that the concretions and Brooksella likely represent a type of displacive growth seen in carbonate nodules (Gaines & Vorhies, 2016) and represent one mode of growth (Bojanowski et al., 2019). Though some concretions and Brooksella had a darker interior region that varied in shape (refer to Figs. 13 and 14), there were no definitive concentric growth regions suggestive of concentrically grown concretions (Raiswell et al., 1988; Gaines & Vorhies, 2016). Some internal tubular structures do occur within the central portion of Brooksella (e.g., Fig. 15H), but they do not correspond to lobes and are not arranged radially. These internal tubular structures may represent burrow traces exploring the unlithified sediment around the lithified concretionary nucleus as the concretion grew over a short timescale (after Kastigar, 2016). In sum, there is no difference between Brooksella and concretions except for the presence of lobes. We propose that Brooksella be considered an early diagenetic displacive silica concretion until more evidence can be produced that it was a biogenic structure.

Silica sources for Brooksella and concretions

Cambrian seas were rich in silica and were the source of primary silica, while post-Cambrian silica cycles are dominated by biological activity (Gao et al., 2020). It is postulated that during the Ediacaran and Cambrian, silica came from a variety of sources: silica-rich hydrothermal fluids; inorganic precipitation from seawater; authigenic clay mineral formation; cyanobacteria facilitating silica precipitation; silica adsorption onto organic matter; or silica-secreting organisms (Gao et al., 2020 and references therein; Hesse, 1989; Schieber, Krinsley & Riciputi, 2000; Vorhies & Gaines, 2009; Gaines et al., 2012). The early Paleozoic oceans were supersaturated with respect to silica, in contrast to the undersaturated modern oceans, where the silica cycle is controlled primarily by diatoms and radiolarians (Gao et al., 2020). Therefore, it has been suggested that silica-secreting sponges and radiolarians were not a major component of the silica cycle in Ediacaran and Cambrian seas (Gao et al., 2020), though other researchers attribute a decline in oceanic dissolved Si during the Ediacaran-Cambrian transition to the onset of significant sponge biosilicification (see Chang et al., 2019). Sperling et al. (2010) suggest that an abundance of Al3+-rich clay minerals in the Cambrian was conducive to the preservation of siliceous spicules relative to the Ediacaran. Concretions and Brooksella from the Conasauga Formation have similar compositions of silica with minor amounts of clay, indicative of a clastic source for the silica, although adsorption onto organic matter cannot be ruled out. It has been postulated that the silica-rich Brooksella derived from remobilized biogenic silica, from a presumably sponge-rich time, around decaying organic matter associated with microbial and/or fungal biofilms (Ciampaglio et al., 2006; Schwimmer & Montante, 2007; Kastigar, 2016). Siliceous-spicule-secreting hexactinellid sponges were becoming more common in the middle Cambrian (Finks, 2003; Reid, 2009), and a combination of inorganically precipitated and biogenic silica cannot be ruled out as the source of silica for the concretions, including Brooksella.
Conclusions

In the more than a century since its original description by Walcott (1896) and Walcott (1898), the star-shaped siliceous nodules known as Brooksella alternata from the middle Cambrian Conasauga Formation, southeastern USA, have raised numerous questions for researchers of the Cambrian. Brooksella's long history of description and reevaluation, from a jellyfish to a sponge, from gas bubbles to trace fossils, mirrors the evolving understanding of the life and environments that shaped the Cambrian seas and highlights one of the most persistent challenges in the study of early complex life: the difficulty of distinguishing life from non-life. Although Brooksella and all its Brooksella-like forms were synonymized as Brooksella alternata, a hexactinellid sponge of the family Protospongiidae (Ciampaglio et al., 2006), we found no sponge-like diagnostic characteristics on either the external surface or in the internal regions of Brooksella. "Ostia" were likely lichen-etched pits on the surface of Brooksella, as modern lichen was common on Brooksella and concretions. Spicules were not present on Brooksella surfaces or in their interiors; very rare, roughly cross-shaped ghosts in both concretions and Brooksella may have represented stauractines at one time, but no definitive elemental analysis supports these ghosts as siliceous spicules. The "white spicules" observed by Ciampaglio et al. (2006) on polished cross-sections were abundant round voids and tubes that appeared light colored but were not siliceous spicules. Walcott (1898) likewise did not find spicules after examining hundreds of Brooksella, though he observed some on the external surfaces of some concretions. A central depression ("osculum") was not common on Brooksella, and an internal spongocoel did not occur. Some concretions, and rarely Brooksella, had a somewhat central diagenetic region that could be mistaken for a spongocoel, but this structure varied in shape depending on how it was cut and was not connected to any radial canals or chambers. Brooksella's external lobes had no radial canals in the interior, nor were radial canals visible in CT scans or thin sections. Importantly, thin sections, CT, and µCT scans of Brooksella and concretions reveal tubes and voids of variable size, shape, and orientation that can pass through the entire Brooksella or concretion and also occur in the weathered outer rind. These tubes are not consistent with the radial canals proposed for the hexactinellid affinity of Brooksella, or with other biological affinities. Elemental analysis indicates that these tubes and voids can be lined or filled with barite, iron oxides, framboidal pyrite, and occasional clays or carbonates. The framboidal pyrite and barite suggest formation under early diagenetic marine conditions during burial in the sulfate-reducing zone, although some tubes with iron-oxide-clay infilling represent post-depositional roots or rootlets that penetrated the Brooksella and concretions. Other tubes/voids could be burrows of middle Cambrian organisms, like those Walcott (1898) observed on Laotira specimens (refer to Figs. 1I–1J). These structures indicate that the organic accumulations that gave rise to Brooksella and the associated concretions were likely mined for organic matter before or during the formation of these nodules, or that the growth of these nodules preserved burrows within them; in either case, the burrows did not contribute to forming the lobes of Brooksella.
These burrows were rapidly mineralized during early diagenesis and bear no relation to any trace fossil affinities previously assigned to Brooksella, such as Dactyloidites. In summary, Brooksella and concretions share external weathering rinds, mineralogical composition, and internal structures; only Brooksella possesses external lobes and, sometimes, a central depression (or protuberance). Brooksella lacks the defining characteristics of hexactinellid sponges and shares more similarities with concretions from the Conasauga Formation. Although Brooksella has numerous proposed identities (Fig. S1), the bulk of its characteristics are consistent with concretions. Therefore, from the sum of its parts, we suggest that Brooksella be considered a pseudofossil until proven otherwise, and that the hypothesis that these sponges contributed biogenic silica to the exceptional preservation of the middle Cambrian Conasauga Lagerstätte be reevaluated in light of the supersaturated, silica-rich seas of this time period, whose silica could have had abiogenic or microbial sources. Future work on sponge biomarkers and silica stable isotopes (δ30Si) on well-preserved specimens will hopefully settle the origin of this silica and the biogenicity of Brooksella.

Supplemental Information

Major characteristics of Brooksella and Brooksella-like fossils as described by Walcott (1896; 1898) and Ciampaglio et al. (2006), for the Conasauga Formation. "–" indicates information not available. DOI: 10.7717/peerj.14796/supp-1

Brooksella measurements. If cells are empty, no measurements were taken or could be taken. No. = abbreviation for number (as in counts). DOI: 10.7717/peerj.14796/supp-2

Concretion measurements. DOI: 10.7717/peerj.14796/supp-3

Comparison of purported identities for Brooksella alternata. Resser CE. 1938. Cambrian system (restricted) of the southern Appalachians. New York: The Geological Society of America. DOI: 10.7717/peerj.14796/supp-4

X-ray diffractograms of two powdered Brooksella and two silica concretions. These specimens have nearly identical mineral composition, which is predominantly silica with some calcite. DOI: 10.7717/peerj.14796/supp-5

Energy dispersive X-ray spectra of features in Figure 18. (A) Electron beam scatter (EBS) spectra of the dense feature shown in Fig. 18E, siliceous concretion sample 27. Note the presence of Fe, Ba, and S. (B) EBS spectra of a dense inclusion from siliceous concretion sample 126. Note the Ti peak. (C) EBS spectra of pyrite inclusions in Conasauga carbonate concretion sample 157, shown in Fig. 8L. Note the Fe and S peaks. (D) EBS spectra of a dense inclusion from the same carbonate concretion. Note the Ti peaks. DOI: 10.7717/peerj.14796/supp-6

Additional Information and Declarations

Competing Interests
The authors declare there are no competing interests.

Author Contributions
Morrison R. Nolan conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Sally E. Walker conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Tara Selly performed the experiments, analyzed the data, authored or reviewed drafts of the article, and approved the final draft.
James Schiffbauer performed the experiments, analyzed the data, authored or reviewed drafts of the article, and approved the final draft.

Field Study Permissions
The following information was supplied relating to field study approvals (i.e., approving body and any reference numbers): Our field collection complied with Alabama's laws regarding fossil collection, which do not require permitting for this type of collection. Alabama's laws restricting fossil collection apply to certain kinds of fossils (Basilosaurus), to fossils collected from specific state-protected sites (e.g., the Stephen C. Minkin Paleozoic Footprint Site), and to public sites where the state limits access to the lands (e.g., Cedar Lake Reservoir). The collection of Brooksella material from the publicly accessed and unimproved shores of low-level Weiss Lake does not meet any of these restriction criteria.

Data Availability
The following information was supplied regarding data availability: The raw data used in the study are available in Table 1 (describing the field occurrence of samples) and Tables S2 and S3 (which contain the morphological measurements of the Brooksella and silica concretions, respectively). The CT and µCT image stacks used in the 3D reconstructions in this article are available at MorphoSource (Project ID: 000436718, Brooksella and silica concretions).

Funding
Morrison R. Nolan was supported by the Geological Society of America, the UGA Department of Geology, the Georgia Museum of Natural History Laerm Award, and the University of Georgia Foundation; Sally E. Walker received funding from NSF Polar Programs ANT 1745057 and the Shellebarger Endowment; James Schiffbauer and Tara Selly received funding from NSF IF 1636643. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgements
The researchers would like to especially thank Dr. Don Champagne, who donated Brooksella specimens and facilitated field work; William Bullard, Doug Johns, and Zachary Hinz, who assisted in field and lab work; and Dr. Ajay Sharma and Dana Ambrose, who facilitated CT scans. Dr. Paul Schroeder and Monika Milkovska guided the XRD analysis; Dr. Bruce Railsback, Dr. Paul Schroeder, and Dr. Doug Crowe helped verify data on weathering, diagenesis, and thin-section analyses; and Chris Fleischer provided sample preparation and microprobe analysis. We thank the Georgia Mineral Society for sharing field trip discussions, and we acknowledge the Tsalaguwetiyi Eastern Cherokee and S'atsoyaha Yuchi peoples, the traditional stewards of the lands around Weiss Lake and the Coosa River valley. We also appreciate the insightful and constructive comments from our three reviewers, Dr. Joachim Reitner, Dr. Konstantin Tabachnick, and Dr. Joseph Botting, and PeerJ Editor Dr. Alexander Ereskovsky.
More than 100 years ago, Charles Doolittle Walcott of the Smithsonian Institution was asked to examine strange star-shaped fossils with lobes hailing from the ~514-million-year-old Conasauga Formation in Alabama. Walcott described these odd fossils as jellyfish that likely floated in the middle Cambrian seas of what is now the southeastern United States. Little did he know that the Cambrian fossil he named would cause more than 100 years of controversy. The controversy hinged on the interpretation of what Brooksella really was: Was it truly a jellyfish, which would be important for middle Cambrian marine ecosystems at a time when animals were originating and diversifying for the first time on Earth? Or was Brooksella just preserved gas bubbles? Or maybe a type of bulbous algae? Or a glass sponge made of opaline silica? Or, as still others hypothesized, perhaps Brooksella was not a fossil at all. Using shape and chemical analyses combined with high-resolution 3D imaging, we evaluated whether Brooksella was a body fossil such as a sponge, a trace fossil representing the burrows of worm-like animals, or not a fossil at all. We found that Brooksella lacked the characteristics of glass sponges, specifically the fused opaline spicules that compose the body. Nor did it grow as a sponge would be expected to over its lifetime. Importantly, in the field, its purported excurrent canal (osculum) was always oriented down in the sediment, which would make it very hard, if not impossible, to filter water for food. We also did not find any indication that worms made the iconic star-shaped lobes. We then compared the composition and internal structure of Brooksella to silica concretions from the same middle Cambrian rock beds. We did not find any difference between Brooksella and the concretions, other than that Brooksella had lobes and the concretions did not. We thus concluded that Brooksella was not part of early sponge diversification in middle Cambrian seas but, rather, an unusual type of silica concretion. Concretions can take all kinds of shapes, to the point that some look as if they were organically formed. The significance of our finding is two-fold: First, there are numerous enigmatic Cambrian fossils that need to be scrutinized to determine whether they are really fossils, to help paleontologists refine biodiversity estimates for the Cambrian, when most of Earth's major animal groups originated. Second, this is not the first time that unusual fossils and rocks from the Cambrian have puzzled scientists, and our findings highlight the necessity of close scrutiny of early fossil materials, especially using newer, powerful analytical techniques like micro-CT in combination with classic lab and field approaches.

Sunset at Weiss Lake where Brooksella alternata were collected. Credit: Morrison R. Nolan

Morrison Nolan, Department of Geosciences, Virginia Tech, says, "Brooksella alternata interests me because so many scientists have worked on identifying it and have come to very different conclusions. It really illustrates how difficult it can be to distinguish one type of life from another, and even life from non-life, which is especially challenging for early materials in the geologic record. "Amateur paleontological/geological groups like the Georgia Mineral Society helped me learn about Brooksella and other interesting geological features around me. Such groups do a great job of teaching the public about the geologic past and bringing people to the field to see and learn about these features."
Sally Walker, Professor of Paleontology at the University of Georgia in Athens, Georgia, U.S., says, "Brooksella intrigued me because, unlike most fossils, it had a 3D shape like a star-shaped puffed pastry that is unusual for soft-squishy animals like a sponge. A sponge usually gets flattened like roadkill during the fossilization process—especially a fossil more than 500 million years old. "Also puzzling was the fact that no one inspected Brooksella where it lived and its orientation; if they did, they would find that most lobes were oriented downward, which does not make sense for a sponge to be eating mud. Lastly, the puzzle of Brooksella continues: What are the physical, chemical and perhaps biological processes that actually formed these strange Brooksella concretions? That is for a future paleontologist to solve."

James Schiffbauer, Associate Professor of Geological Sciences, University of Missouri, says, "While the applications for microCT have been nearly endless in the materials sciences and engineering fields, its capacities for elucidating the fossil record are really just beginning to be explored. This project is an excellent example of the types of fossil mysteries we can solve with applications of microCT. When we can scrutinize the internal construction of Brooksella with reference to its many past interpretations, it becomes increasingly apparent that none of them really match."
10.7717/peerj.14796
Other
New research shows link between ethnicity and bias
Ksenia Gnevsheva. The expectation mismatch effect in accentedness perception of Asian and Caucasian non-native speakers of English, Linguistics (2018). DOI: 10.1515/ling-2018-0006
http://dx.doi.org/10.1515/ling-2018-0006
https://phys.org/news/2018-07-link-ethnicity-bias.html
Abstract

Previous research on speech perception has found an effect of ethnicity, such that the same audio clip may be rated more accented when presented with an Asian face (Rubin, Donald L. 1992. Nonlanguage factors affecting undergraduates' judgments of nonnative English-speaking teaching assistants. Research in Higher Education 33(4). 511–531. doi: 10.1007/bf00973770). However, most previous work has concentrated on Asian non-native English speakers, and Caucasian speakers remain under-explored. In this study, listeners carried out an accentedness rating task using stimuli from first language Korean, German, and English speakers in 3 conditions: audio only, video only, and audiovisual. Korean speakers received similar accentedness ratings regardless of condition, but German speakers were rated significantly less accented in the video condition and more accented in the audiovisual condition than in the audio one. This result is explained as an expectation mismatch effect: when listeners saw a Caucasian speaker, they did not expect to hear a foreign accent, and when they did hear one, it was made more salient by their expectation to the contrary.

Keywords: speech perception; sociophonetics; foreign accentedness; ethnicity; variation

1 Introduction

There is a growing body of research on sociophonetic variation in speech perception (see Campbell-Kibler 2010; Drager 2010 for reviews). Previous studies have shown a strong link between social and linguistic information such that, on the one hand, perceived linguistic information influences listeners' assumptions about the speaker's social information (e.g., Campbell-Kibler 2007) and, on the other hand, assumed social information may affect how linguistic information is perceived (e.g., Hay et al. 2006a; Hay et al. 2006b; Niedzielski 1999; see discussion below). The relationship between social and linguistic information may be explained by usage-based models of speech perception. Exemplar theory (e.g., Johnson 1997; Johnson 2006; Pierrehumbert 2003) suggests that our brain stores clouds of representations, exemplars, for a given category, and that these clouds are updated constantly through the perception-production loop. Sociolinguistic information, such as speaker characteristics (sex, age, origin, etc.) and contextual information (speech styles, etc.), is associated and stored together with these exemplars. The sociolinguistic information is activated when associated exemplars are activated, and it can activate associated exemplars when it is accessed (Hay et al. 2006a: 370). For non-native speakers, previous research has also shown that assumed social information may affect perceived linguistic information, such as degree of accent (e.g., see the discussion of Rubin 1992 below). An accent is a "cumulative auditory effect of those features of pronunciation which identify where a person is from, regionally or socially" (Vishnevskaya 2008: 235). When listeners hear substantial differences in a speaker's production, they can identify the person as coming from a different background. A specific example is the accent of a non-native speaker (NNS), whose first language is different from that of the listeners, which I will call here a foreign accent. In perception studies, accentedness is conceptualized as a subjective measure of how strong listeners judge a speaker's accent to be.
A usage-based account is potentially insightful when considering not just the first language (L1) perception studies discussed above, but also studies of second language (L2) variation, including foreign-accentedness rating tasks. Usage-based models would predict that, in a foreign accentedness rating task, items that activated the representations most similar to those associated with the listener or other native speakers would be judged as less foreign-accented, whereas items different from them, or similar to representations previously identified as foreign-accented, would be judged as more foreign-accented. Accentedness perception has been shown to be influenced by a number of stimulus-independent factors (Kraut and Wulff 2013; Levi et al. 2007). A speaker's perceived ethnicity is one such factor. Anecdotes of native English speakers of a non-white background being perceived to have an accent are abundant: in Lippi-Green (1997) a monolingual English-speaking woman of Asian Indian descent is asked by a shopkeeper to speak more slowly because of her "accent". Several studies have explored the effect of ethnicity on reverse linguistic stereotyping, the idea that perceived social characteristics may influence perceived linguistic characteristics (Kang and Rubin 2009). Rubin (1992) and McGowan (2015) showed that the assumed ethnicity of the speaker influenced perceived accentedness and intelligibility ratings, respectively. Rubin (1992) showed that the same native speaker (NS) of American English was rated as more accented when the listeners were presented with a picture of an Asian woman than when they were presented with a picture of a Caucasian woman, and this was attributed to listeners' negative bias. That is, listeners' negative bias towards Asian faces was said to influence their accentedness ratings even when they were presented with audio stimuli from a native speaker with a Standard American English accent. McGowan (2015) finds a different effect, such that when presented with Asian-accented speech stimuli, listeners' intelligibility ratings are higher when they are shown a picture of an Asian face rather than a Caucasian face. This is not seen as an effect of negative bias, then, but instead as an effect of matching the types of stimuli: an audio clip from a native speaker of Standard American English mismatched with a picture of an Asian face increases accentedness ratings, but an audio clip from an Asian-accented speaker matched with a picture of an Asian face increases intelligibility scores. These studies provide us with important information about how a speaker's assumed ethnicity can affect listeners' reactions to speech stimuli, but they both focus on a minority ethnicity (i.e., Asian speakers in the USA). Much less is known about the perceptual effects of presenting listeners with foreign-accented speech along with pictures of Caucasian faces. Specifically, little is known about the perceptual effects of reverse linguistic stereotyping when listeners are presented with visual stimuli of a Caucasian speaker but audio stimuli of a non-standard variety, such as foreign-accented speech.
The effects discussed by Rubin (1992) and McGowan (2015) offer contradictory predictions for the accentedness rating of foreign-accented speech presented with a Caucasian speaker: If reverse linguistic stereotyping works in the same way as reported in Rubin (1992) , we might expect a picture of a Caucasian face to reduce the accentedness rating of a foreign-accented audio clip. If the mismatch effect found by McGowan (2015) for intelligibility also underlies accentedness ratings, we would expect a picture of a Caucasian face to increase the ratings of a foreign-accented clip. This study examines this issue by exploring the effect of visual input and speaker ethnicity on accentedness rating in foreign-accented speech produced by Asian and Caucasian second language speakers of English. I begin in the following section by discussing sociophonetic work which has considered how linguistic and social information is thought to be related, and the effect that one type of information can have on perceptions of the other. Then I specifically address work in accentedness perception, elaborating on the studies introduced above. After presenting the method used in this study, I discuss the results in relation to reverse linguistic stereotyping and listener expectation. 2 Background 2.1 The inter-relationship between linguistic and social information Many studies have shown that the way a person speaks affects listeners’ perception of the speaker in terms of a range of social categories, in a form of linguistic stereotyping. For example, Campbell-Kibler (2007) found that two speech samples that differed only in the speaker’s production of the (ING) variable were associated with different social categories: –in was associated more with lack of education, masculinity, and the country, while –ing was perceived to be more educated, gay, and urban. The reverse has also been attested: perceived phonetic information has been found to be influenced by (assumed) social information, such as geographical region ( Hay et al. 2006a ; Niedzielski 1999 ), and the socio-economic status and age of the speaker ( Hay et al. 2006b ). Niedzielski (1999) found that the information the listeners were given about the origin of the speaker influenced their responses in a perception task. If listeners were told that the speaker was from Michigan, they did not choose the tokens that actually matched the speaker’s vowel production but those that matched most closely with the listeners’ expectations of Michigan speech. Hay et al. (2006a) found a similar effect of mentioning a geographical region with a population of listeners from New Zealand. Two groups of listeners were asked to choose a synthesized vowel which was most similar to that of the speaker’s actual production, and mark it on an answer-sheet which had either “Australian” or “New Zealander” written at the top. All listeners heard the same speaker of New Zealand English (NZE), but chose synthesized vowels which were more similar to Australian English if their answer sheet had “Australian” at the top. Hay and Drager (2010) found the same effect when no region was explicitly mentioned but the listeners were shown stuffed toy kangaroos or koalas, associated with Australia, or stuffed toy kiwis, associated with New Zealand. They argued that once a region is primed, it can have a perceptual effect in the listening task. Similar effects have been found with other social factors, such as socioeconomic class and age. Hay et al. 
(2006b) manipulated the perceived social class and age of the speakers in a vowel identification task and presented listeners with audio input containing /iə/ and /eə/, which are merged for some speakers of NZE. They found a connection between the assumed social characteristics of the speaker and listener accuracy at identifying the produced vowel. Interestingly, Strand (2000) found a similar effect for sex, where listeners recognized words more slowly when the pitch of the recording was atypical of the speaker's sex. Hay and colleagues explain their findings through usage-based models of speech perception. Hay et al. (2006b: 479) suggest a relationship between identification accuracy and the difference between expected and actual production. When both the linguistic and sociolinguistic information is available and congruent, this may facilitate access through more focused activation of representations, resulting in the fewest identification errors. An expectation mismatch, or incongruence between the actual production and the production that comes to be expected because of what the listeners are told about the speakers, may lead to higher error rates (and/or slower reactions), as the mismatch between the perceived phonetic and social information would result in a more spread-out activation of representations and may inhibit access. One can hypothesize that simultaneous activation of experience-based representations with conflicting phonetic or social information may influence a listener's ratings. In the next section, I review existing work on foreign accentedness perception and its relationship to social information, namely ethnicity, in order to consider this hypothesis in more detail.

2.2 Foreign accentedness perception and ethnicity

A number of studies have explored the way the assumed ethnicity of a speaker influences his/her perceived accentedness and intelligibility. As introduced above, reverse linguistic stereotyping has been explored in studies using visual stimuli, such as pictures of people of different ethnicities, to represent the speaker in accentedness rating tasks. In Rubin (1992) the same audio-recording of a native speaker of Standard American English was presented to students in a class with two different pictures supposedly representing the speaker: a Caucasian and an Asian woman. The students who were presented with the picture of an Asian woman rated the recording as more accented because they expected it to be accented, Rubin (1992) argues. Moreover, the comprehension scores of listeners presented with the Asian picture were lower than those of listeners presented with the Caucasian picture. This is a persuasive example of the effect visual stimuli might have on the perception of native-like linguistic input. However, it remains unclear what the accentedness ratings would have been if the listeners had been presented with Asian and Caucasian faces matched with accented speech rather than Standard American English. In an experiment involving foreign- and standard-accented speech, Yi et al. (2013) collected native English speaker (NES) listeners' intelligibility and perceived accentedness ratings of native speakers of American English and non-native English speakers (NNESs) whose L1 was Korean, in audio only and audiovisual conditions. In the intelligibility experiment, word recognition in noise was better for NSs than for NNESs and better in the audiovisual than in the audio condition; there was also a significant interaction such that the audiovisual benefit was larger for NSs than for NNESs.
In the accentedness rating experiment, six NS listeners were presented, in random order, with 40 target sentences, each spoken by the 4 speakers (2 NS + 2 NNS) in the 2 conditions (audio and audiovisual), for a total of 320 presentations, and rated them on a 9-point Likert scale. In line with the predictions of the negative bias hypothesis, the authors argue, the Korean speakers were rated significantly more accented in the audiovisual condition than in the audio only condition, exhibiting an effect of ethnicity. However, the experimental design may have had an impact on the obtained results because, besides the small number of listener and speaker participants, the use of audio/audiovisual condition as a within-listener factor may have prompted the participants to notice the importance of the visual cue. This interpretation is supported by Yi et al.'s (2014) finding of a null condition effect in a clarity rating task with the same stimuli as the Yi et al. (2013) study. Following a method similar to Yi et al. (2013), Han-Gyol et al. (2014) find no significant effect of condition and no interaction between condition and speaker group, suggesting that listeners found Korean speakers equally comprehensible in the audio only and audiovisual conditions. Findings consistent with the reverse linguistic stereotyping account are explained differently by Babel and Russell (2015). In line with previous work, they demonstrate that an Asian native speaker of English is perceived to be more accented and less intelligible, but they argue that this is a reflection of listeners' prior experiences, which result in an expectation to hear foreign-accented speech from Asian speakers. They find no relationship between the participants' bias scores and perception ratings, making stereotyping an unlikely explanation for such behavior. McGowan (2015) explores intelligibility and perceived accentedness in Standard American English and foreign-accented speech. In the intelligibility experiment, listeners were presented with foreign-accented speech together with an Asian or a Caucasian photograph or a silhouette. The listeners, whose task was to transcribe Chinese-accented speech presented in multi-talker babble noise, were found to be significantly more accurate when presented with an Asian photograph than with a Caucasian face, possibly due to a "mismatch-induced inhibition" in the latter. According to usage-based models of speech perception, Chinese-accented speech and an Asian picture together would activate a more focused set of experience-based representations, enhancing intelligibility, while a misalignment between the audio and visual input would result in a mismatch of expectations and spread activation more thinly, inhibiting intelligibility. Although this was not an accentedness rating experiment, McGowan (2015) argues against the negative bias hypothesis, as he found that socioindexical cues enhanced perception. Reverse linguistic stereotyping and expectation mismatch predict conflicting outcomes for foreign-accented Asian and Caucasian speakers in an accentedness rating experiment, as explained in detail in the following section.
2.3 The focus of the present study

It should be clear from the discussion above that most studies of the effect of ethnicity on foreign accentedness perception have looked at Asian speakers, leaving Caucasian NNESs (who are not visually distinguishable from the Caucasian NESs constituting the majority in the societies where such studies have been conducted) an under-studied group. However, in the absence of a negative bias, the use of Caucasian speaker participants allows for the testing of other effects of ethnicity, such as an expectation mismatch effect. In an accentedness rating task in which Caucasian listeners are presented with foreign-accented speech either by itself or together with an Asian or a Caucasian face, reverse linguistic stereotyping would predict a lower foreign accentedness rating for Caucasian NNESs in the audiovisual condition than in the audio only one, in the absence of a negative bias, and a higher foreign accentedness rating for Asian NNESs, in the presence of a negative bias (see Table 1). On the other hand, an expectation mismatch effect would predict a similar accentedness score for foreign-accented speech presented by itself or with an Asian face, and a higher accentedness score when a Caucasian face is shown.

Table 1: Two accounts' predictions for Asian and Caucasian non-native English speakers' (NNESs) accentedness ratings in two conditions.

                                    Asian NNESs           Caucasian NNESs
Reverse linguistic stereotyping     Audiovisual > Audio   Audiovisual < Audio
Expectation mismatch effect         Audiovisual = Audio   Audiovisual > Audio

This has not yet been fully explored, however, and is the focus of the rest of this paper. In order to distinguish between these two conflicting predictions, I report on an accentedness rating experiment in which 2 groups of non-native speakers, of Asian and Caucasian ethnicities, were presented to listeners in 3 conditions: the audio track of the recording only, the video track only, and audiovisual (the audio and video tracks of the recording together). Although this study builds on work which combines the same audio clips with different visual stimuli, no such crossing is used in this paper. Instead, the audiovisual condition contains the aligned audio and video tracks for each individual speaker recording, which is a more naturalistic and ecologically valid design.

3 Method

3.1 Speakers

The speakers in this study were 18 highly proficient but non-native speakers of English (9 L1 Korean and 9 L1 German) and 6 L1 speakers of English (2 New Zealand, 2 Standard American, and 2 Southern British English). All had an age of acquisition of 10 or higher (see Gnevsheva 2015 for more detail). In each language group, half of the participants were male and half female. The age, education, and socio-economic class of the participants were comparable to those of the interviewer and listeners: age range = 21–34; average age = 24.875; all were affiliated with the same university in New Zealand at the time of the study (highest academic degree achieved or in progress: 8 Bachelor's, 4 Master's, and 12 PhD).

3.2 Stimuli

The 24 speakers were interviewed by the investigator about their university studies in a quiet room at the university. To elicit spontaneous speech, the speakers were asked to tell the interviewer about the applications of their research or study field. They were recorded with a lapel Opus 55.18 MKII beyerdynamic microphone, an H4n Zoom audio-recorder, and a Sony AC-L200C video-recorder recording at 25 frames per second.
The speakers were video-recorded against a plain background with the recorder positioned at their eye level and the frame including their upper body (see Figure 1). The intensity was scaled to remove variation in the volume of the audio-recordings. Short clips of a minimum of 30 words were extracted from the recordings as stimuli. Because stopping the clips mid-phrase could have an effect on listeners' perception, complete phrases were used and the exact number of words per clip was allowed to vary (mean length in seconds = 15; range = 8–22). The clips did not contain proper nouns. The audio tracks recorded by the audio-recorder were synchronized with the respective video tracks, so that listeners heard the same track in both the audio only and the audiovisual conditions (see below). Figure 1: A snapshot from the video track of two non-native English speaking participants.

3.3 Listeners

The listeners were 45 Caucasian native speakers of New Zealand English who were recruited through announcements posted around the university campus and via the friend-of-a-friend method. They were paid a small honorarium for completing the task. 48 people originally participated in the experiment, but 3 participants were excluded from the analysis as they indicated that they had met one or more of the speakers in the experiment. Of the remaining 45, 27 were female and 18 male, with a mean age of 25.47. The listeners were assigned to one of the 3 conditions before meeting with the experimenter: audio only, audiovisual, and video only, with 15 participants in each.

3.4 Procedure

The listeners were seated individually in a quiet lab in front of a computer. Stimuli were presented electronically using E-Prime 2.0 (Psychology Software Tools, 2012). The audio stimuli were presented through headphones; the video stimuli were presented on the computer screen. Before starting the actual task, the listeners read the instructions on the screen, completed a practice trial with a non-linguistic clip which allowed them to adjust the volume (in the audio only and audiovisual conditions), and, if needed, clarified the procedure with the research assistant. After that, the listeners were presented with 24 clips (1 from each of the 24 speakers) in random order. In the audio only condition, they were presented with the audio clips with a black screen and a fixation point; in the video only condition, they saw the video recordings but did not hear anything; and in the audiovisual condition, they were presented with both the video and the audio signal. In the task, the listeners were instructed to rate the presented clips on a scale which read "No foreign accent at all" and "Very strong foreign accent" at the two extremes, using number keys 1 through 7. The listeners could not replay the clips. At the end, the listeners completed a short biographical questionnaire. The task was self-paced and took up to 30 minutes to complete. The research was reviewed and approved by the University of Canterbury Human Ethics Committee.

4 Results

The accentedness ratings of the NNSs were analyzed in R (R Core Team 2014). A linear mixed-effects model was fit to the NNS data with the perceived accentedness rating as the dependent variable. The maximal model (Barr et al. 2013) included an interaction of condition and L1 as fixed effects; speaker, nested within L1 group, and listener as random intercepts; and L1 as a random slope for listener (Table 2).
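In lme4-style syntax, this specification might look as follows. The paper reports only the model structure, not its code, so the package choice, data frame, and column names below are assumptions, not the author's script:

## Minimal sketch of the mixed-effects specification described above, assuming
## the lme4/lmerTest packages and a hypothetical long-format data frame d with
## one row per rating and columns rating, condition, L1, speaker, and listener.
library(lmerTest)  # lmer() with Satterthwaite df and p-values, as in Tables 2-3

m <- lmer(rating ~ condition * L1 +   # fixed effects: condition, L1, and their interaction
            (1 | L1:speaker) +        # random intercepts for speaker nested within L1
            (1 + L1 | listener),      # random intercept and L1 slope for each listener
          data = d)
summary(m)

## The reference level is set by the factor coding, e.g.
## d$L1 <- relevel(d$L1, ref = "Korean") before fitting corresponds to Table 2,
## and releveling to "German" and refitting corresponds to Table 3.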
The Korean L1 speakers in the audio condition were chosen as the reference level (Intercept). The estimate and standard error columns in the table give the predicted accentedness rating and its standard error for a given level, respectively. So for the base level (the Korean L1 speakers in the audio condition), the predicted accentedness rating is 4.415. To calculate the predicted accentedness rating for a different level, the respective value in the estimate column is added or subtracted. For example, the Korean L1 speakers received a rating 0.052 higher in the audiovisual condition and 0.156 lower in the video condition than in the audio condition. These differences were not significant, as indicated in the significance column. This means that the ratings of Korean L1 speakers did not differ significantly between the conditions. In the audio condition, German and Korean L1 speakers were not rated significantly differently from each other.

Table 2: Summary for model of accentedness rating.

                             Estimate   Std. error   df     t value   Pr(>|t|)   Sig.
(Intercept)                  4.415      0.388        30.7   11.371    0.000      –
condition_audiovisual        0.052      0.295        44.3   0.176     0.861
condition_video              −0.156     0.295        44.3   −0.527    0.601
L1 German                    −0.556     0.504        23.0   −1.101    0.282
condition_audiovisual:L1G    0.511      0.283        44.1   1.807     0.078
condition_video:L1G          −0.785     0.283        44.1   −2.775    0.008      **

Note: * p<0.05; ** p<0.01; *** p<0.001.

When the model was re-run with the levels of L1 re-leveled and German as the Intercept, the ratings of German L1 speakers were significantly different between conditions (Table 3). They were rated significantly more accented in the audiovisual condition but less accented in the video condition compared to audio only, which differs from the Korean L1 speakers. This interaction of condition and L1 is plotted in Figure 2. Figure 2: Model prediction for accentedness ratings of Korean and German speakers in the 3 conditions (from model in Table 2).

Table 3: Summary for model of accentedness rating (re-leveled).

                             Estimate   Std. error   df      t value   Pr(>|t|)   Sig.
(Intercept)                  3.859      0.377        27.80   10.251    0.000      ***
condition_audiovisual        0.563      0.263        44.21   2.143     0.038      *
condition_video              −0.941     0.263        44.21   −3.581    0.001      ***
L1K                          0.556      0.505        22.99   1.101     0.282
condition_audiovisual:L1K    −0.511     0.283        44.11   −1.807    0.078
condition_video:L1K          0.785      0.283        44.11   2.775     0.008      **

Note: * p<0.05; ** p<0.01; *** p<0.001.

To test whether the same NNESs who got a lower score in the video condition also received a higher score in the audiovisual condition compared to audio only, I calculated the means for each speaker in each condition; then, for each speaker, I subtracted the audio mean from the audiovisual mean, obtaining the individual audiovisual enhancement score, and the video mean from the audio mean, obtaining the individual visual accentedness predictability score. The smaller the audiovisual enhancement score, the more of a visual benefit is found, and the less accented the speaker is rated when the visual input is available compared to when it is not. The larger the visual accentedness predictability score, the more "accentless" the speaker looks compared to how he or she sounds. For example, German L1 speaker Lea's mean score across all the listeners in the audiovisual condition was 5.80, in the audio condition 5.00, and in the video condition 3.27. Lea's audiovisual enhancement score is 5.80 − 5.00 = 0.80, and her visual accentedness predictability score is 5.00 − 3.27 = 1.73.
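These per-speaker difference scores, and the regression reported below in Table 4, are straightforward to derive from the trial-level ratings. The sketch below uses the same hypothetical data frame and column names as the model sketch above; it mirrors the described computation rather than reproducing the author's code:

## Minimal sketch of the derived per-speaker scores, assuming the hypothetical
## data frame d (columns speaker, condition, rating) introduced above.
means <- aggregate(rating ~ speaker + condition, data = d, FUN = mean)
wide  <- reshape(means, idvar = "speaker", timevar = "condition",
                 direction = "wide")  # columns rating.audio, rating.audiovisual, rating.video

## Audiovisual enhancement: audiovisual mean minus audio mean (e.g., 5.80 - 5.00 = 0.80)
wide$av_enhancement <- wide$rating.audiovisual - wide$rating.audio
## Visual accentedness predictability: audio mean minus video mean (e.g., 5.00 - 3.27 = 1.73)
wide$vis_predictability <- wide$rating.audio - wide$rating.video

## Linear regression of one difference score on the other (the final model of Table 4)
summary(lm(av_enhancement ~ vis_predictability, data = wide))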
The positive audiovisual enhancement score means that Lea is perceived to be more accented in the audiovisual condition than in the audio only condition. The positive visual accentedness predictability score means that Lea is perceived to be more accented in the audio condition than in the video only one. Calculated in the same fashion, another German L1 speaker, Linda, has an audiovisual enhancement score of 0.93 and a visual accentedness predictability score of 1.87. Both of these scores are higher for Linda than for Lea, suggesting that there may be a correlation between them. To see whether the difference between the audio and the video conditions is predictive of the difference between the audiovisual and audio conditions, I fit a linear regression model with the audiovisual enhancement score as the dependent variable and an interaction between first language and the visual accentedness predictability score as predictors. However, the interaction was not found to be significant and L1 did not improve model fit, so the final model includes only visual accentedness predictability as an independent variable. In Table 4 we can see that the visual accentedness predictability score was a significant predictor of the audiovisual enhancement score, such that the less accented a speaker was rated in the video condition compared to the audio condition, the more accented that speaker was rated in the audiovisual condition compared to the audio condition. That is, the less accented a speaker looks, the more accented he/she is perceived to be when the video input is available compared to when it is not. This relationship is represented in Figure 3. Figure 3: Model prediction for the relationship between the audiovisual enhancement score and the visual accentedness predictability score.

Table 4: Summary for model of the audiovisual enhancement score (audiovisual − audio) in accentedness ratings.

                                                           Estimate   Std. error   t value   Pr(>|t|)   Sig.
(Intercept)                                                0.204      0.103        1.983     0.065      –
visual accentedness predictability score (audio − video)   0.189      0.072        2.637     0.018      *

Note: * p<0.05; ** p<0.01; *** p<0.001.

I ran a second linear mixed-effects model to explore the difference in accentedness ratings between German and New Zealand English L1 speakers in the video condition, to check whether the Caucasian non-native speakers were rated more accented than the NSs based on visual cues only. The model was fit to the German and New Zealand English L1 speaker video condition data with the perceived accentedness rating as the dependent variable. The maximal model included L1 as a fixed effect; listener and speaker, nested within L1 group, as random intercepts; and L1 as a random slope for listener. The model shows that in the video condition there was no significant difference between the two language groups (Table 5). This suggests that the listeners were not able to infer the foreign accent based on the video input only.

Table 5: Model summary for accentedness ratings of German and New Zealand English first language (L1) speakers.

              Estimate   Std. error   df       t value   Pr(>|t|)   Sig.
(Intercept)   3.233      0.484        11.417   6.686     0.000      –
L1_German     −0.315     0.518        11.039   −0.607    0.556

5 Discussion

By way of reminder, the reverse linguistic stereotyping and expectation mismatch accounts had different predictions for Asian and Caucasian speakers in the audio and audiovisual conditions.
Reverse linguistic stereotyping predicted a higher accentedness score for Asian speakers and a lower accentedness score for Caucasian speakers in the audiovisual condition. The expectation mismatch effect predicted similar scores across conditions for Asian speakers and a higher accentedness score in the audiovisual condition for Caucasian speakers. The accentedness ratings of Korean L1 speakers in the audio condition were not significantly different from those in the other two conditions, which differs from the findings of Yi et al. (2013). The absence of a difference between the audio and the video conditions suggests that the degree of accentedness that the listeners heard in the audio condition was similar to the degree of accentedness they expected to hear from the Asian speakers in the video only condition. When the video and audio inputs were congruent in the audiovisual condition, as per listeners' expectations, there was no additional effect of ethnicity, and the rating in the audiovisual condition was not significantly different from audio only. The negative bias hypothesis, as interpreted by Yi et al. (2013), was not supported, as Korean L1 speakers were not rated significantly more accented in the audiovisual condition than in the audio one. The effect found by Yi et al. (2013) may be due to their experimental design, in which listeners were presented with the same sentences and the same speakers multiple times. Based on the results from previous studies, I predicted that the ratings of German L1 speakers in the audiovisual condition would be different from those in the audio condition. The results show that the audiovisual ratings were higher in accentedness than the audio only ones. This suggests that reverse linguistic stereotyping did not play the leading role here, as it would predict a lower accentedness score. However, as it is likely that the listeners did not expect to hear a foreign accent coupled with a Caucasian face, this could constitute an expectation mismatch effect. That is, listeners were not expecting to hear a foreign accent when they saw a Caucasian speaker, but when they did, the accent was made more salient, resulting in a higher accentedness score. This interpretation is supported by the significant positive correlation between the difference in ratings between the audiovisual and audio conditions and the difference between the audio and video conditions. This means that the more "accentless" the speaker looked and was rated in the video condition compared to the audio condition, the more of a mismatch effect there was, and the more the accent "stood out" to the listeners in the audiovisual condition compared to the audio only one. In accordance with other accounts of reverse linguistic stereotyping (Rubin 1992; Yi et al. 2013), German L1 speakers were rated significantly less accented in the video only condition, when the listeners could not hear them, than in the audio condition, when the accent was actually heard. Moreover, no significant difference was found in the ratings of German and NZE L1 speakers in the video condition, which means that the listeners could not tell the difference between Caucasian L1 and L2 speakers of NZE based on the video input only. To sum up, Asian NNESs received similar foreign accentedness ratings in the audio and audiovisual conditions, while Caucasian NNESs received higher ratings in the audiovisual condition, in line with the predictions of the expectation mismatch effect and contradicting the negative bias hypothesis.
These findings support the role of socioindexical expectation described in McGowan (2015) and suggest that reverse linguistic stereotyping may not be the only explanation for an ethnicity effect; rather, a perceived alignment between the audio and the video inputs may have a facilitatory effect, while a perceived mismatch or misalignment may result in inhibition, as the visual and the audio input may be activating conflicting experience-based representations. This is compatible with the sociophonetic work, described above, which posits experience-based language representations but which has traditionally focused on variation in L1 (e.g., Hay et al. 2006a, who showed that listeners were more likely to make errors in vowel identification when there was a mismatch between actual production and their expected production). When listeners see an Asian speaker, “accented” representations are more likely to be activated, and hearing accented speech reinforces their activation, facilitating easier access and retrieval. However, when listeners see a Caucasian speaker, “accentless” representations are more likely to be activated, but hearing accented speech activates other representations, spreading overall activation more thinly and inhibiting access and retrieval.

6 Conclusion

This study explored the effect of socioindexical expectation on accentedness ratings of two groups of NNESs. No differences were found in the accentedness ratings of Korean L1 speakers across the audio, video, and audiovisual conditions, but German L1 speakers were rated significantly less accented in the video condition and more accented in the audiovisual condition compared to the audio-only condition. I have argued that this result is due to an expectation mismatch effect for the German L1 speakers. Whereas listeners were expecting to hear an accent when they saw an Asian face paired with accented audio input in the audiovisual condition, they were “surprised” to hear an accent when they saw a Caucasian face, resulting in a higher accentedness rating. The literature on L1 linguistic behavior has often used expectations formed by previous experience to explain variation in multiple domains (e.g., Hay et al. 2006b; Niedzielski 1999; discussed above). The current study extends the effect of expectation to NNS perception. The discussion was framed within usage-based models of speech perception. As listeners’ expectations are representative of their past experiences and of societal stereotypes, the current findings may only be applicable to societies with a similar demographic distribution; it would therefore be interesting to replicate this study in a different setting. In the same setting, quantifying listeners’ experience with Asian NESs and Caucasian NNESs would make it possible to explore the effect of such experience on accentedness ratings in more detail. Future research using Asian NES and NNES listeners may help to clarify whether there is an effect of listener ethnicity on the perception of foreign-accented speech produced by Asian and Caucasian speakers. Negative bias may be expected to affect perceived accentedness in an expectation mismatch when there is an expectation of accentedness based on social characteristics, while an expectation mismatch may result in higher accentedness ratings when there is no expectation of accentedness. These findings suggest that an ethnicity effect may be found for Caucasian speakers as well as for the Asian speakers highlighted in previous research.
Whereas we may be better aware of stereotyping of minority ethnicities and its effects on people’s judgments, we might not be aware of an adverse effect of a majority ethnicity on perceived accentedness to the same extent.
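For concreteness, the two models summarized in Tables 4 and 5 can be sketched in code. The study's analysis was presumably run in R; the following Python/statsmodels analogue is an illustration of the modelling logic only, with hypothetical column names, a hypothetical file accent_ratings.csv, and a simplified random-effects structure (the by-speaker random intercept of the maximal model is omitted for brevity).

    # Illustrative analogue of the models in Tables 4 and 5; all names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per listener x speaker x condition.
    ratings = pd.read_csv("accent_ratings.csv")

    # Per-speaker mean ratings in each condition, then the two difference scores.
    means = ratings.pivot_table(index=["speaker", "l1"], columns="condition",
                                values="rating", aggfunc="mean").reset_index()
    means["av_enhancement"] = means["audiovisual"] - means["audio"]   # audiovisual - audio
    means["vis_predictability"] = means["audio"] - means["video"]     # audio - video

    # Table 4: regress one difference score on the other.
    ols_fit = smf.ols("av_enhancement ~ vis_predictability", data=means).fit()
    print(ols_fit.summary())

    # Table 5: mixed-effects model on video-condition ratings, German vs NZE speakers,
    # with by-listener random intercepts and L1 slopes.
    video = ratings[(ratings.condition == "video") & (ratings.l1.isin(["German", "NZE"]))]
    mixed_fit = smf.mixedlm("rating ~ l1", data=video,
                            groups="listener", re_formula="~l1").fit()
    print(mixed_fit.summary())

The key design point carried over from the paper is that the Table 4 model operates on speaker-level difference scores, so each speaker contributes one observation, whereas the Table 5 model operates on trial-level ratings and therefore needs random effects for listeners.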
New research from The Australian National University (ANU) has shown people demonstrate unconscious negative biases when they encounter a person of ethnic appearance or hear a foreign accent. Dr. Ksenia Gnevsheva of the ANU School of Literature, Languages and Linguistics asked research participants to watch videos of ethnically diverse people speaking and rate the level of accent they detected. She found participants rated a person of Asian appearance as accented, even though a video of them speaking was played to them mute. They also rated a Caucasian-looking person as not at all accented without hearing their audio, when they were actually a native German speaker speaking English with an accent. "I wanted to see how accents interacted with ethnicity, and I found that when people responded to visual cues only (what a person looks like), they made assumptions about a person's accent. They rated an Asian person's accent stronger than that of a Caucasian-looking person, without even hearing the audio," said Dr. Gnevsheva. While both Korean and German speakers were rated as having a similar degree of accent in English when the participants were played the audio recordings (without vision), these ratings changed when the video track was added. When participants both saw and heard the non-native speakers of English, they rated the Asian person with the same level of accent as when there was no video track, but rated the German English speaker as more accented than in the audio-only recording. "This was a new and unexpected finding," said Dr. Gnevsheva, "and it tells me that people don't expect to hear an accent from a Caucasian-looking person, so they get a surprise when they hear one and rate it as more highly accented. "I call this an expectation mismatch, and it goes some way to showing that our unconscious biases can work against a large cross-section of society, not just people of ethnic appearance," she said. Dr. Gnevsheva hopes the research will raise people's awareness of the inherent biases they have in relation to ethnic appearance and linguistic ability. "You could say this is linguistic discrimination. People often base hiring and promotion decisions on communicative ability, and we have shown that ethnicity affects our perception of communicative ability." "Racial discrimination laws don't actually cover linguistic discrimination. People can substitute other types of discrimination for linguistic discrimination," she said. Dr. Gnevsheva says linguists are divided over whether the results are the product of inherent negative biases or due simply to limited experience of and exposure to a diversity of languages. "I think as a society and as individuals we are cautious of 'the other': things we haven't experienced or don't understand, but as they become more common and we begin to see them more frequently, we grow to know, understand and accept diversity and multiculturalism." Dr. Gnevsheva's research is published in Linguistics.
10.1515/ling-2018-0006
Medicine
Breast cancer tumor-initiating cells use mTOR signaling to recruit suppressor cells to promote tumor
Thomas Welte et al, Oncogenic mTOR signalling recruits myeloid-derived suppressor cells to promote tumour initiation, Nature Cell Biology (2016). DOI: 10.1038/ncb3355 Journal information: Nature Cell Biology
http://dx.doi.org/10.1038/ncb3355
https://medicalxpress.com/news/2016-05-breast-cancer-tumor-initiating-cells-mtor.html
Abstract Myeloid-derived suppressor cells (MDSCs) play critical roles in primary and metastatic cancer progression. MDSC regulation is widely variable even among patients harbouring the same type of malignancy, and the mechanisms governing such heterogeneity are largely unknown. Here, integrating human tumour genomics and syngeneic mammary tumour models, we demonstrate that mTOR signalling in cancer cells dictates a mammary tumour’s ability to stimulate MDSC accumulation through regulating G-CSF. Inhibiting this pathway or its activators (for example, FGFR) impairs tumour progression, which is partially rescued by restoring MDSCs or G-CSF. Tumour-initiating cells (TICs) exhibit elevated G-CSF. MDSCs reciprocally increase TIC frequency through activating Notch in tumour cells, forming a feedforward loop. Analyses of primary breast cancers and patient-derived xenografts corroborate these mechanisms in patients. These findings establish a non-canonical oncogenic role of mTOR signalling in recruiting pro-tumorigenic MDSCs and show how defined cancer subsets may evolve to promote and depend on a distinct immune microenvironment. Main Myeloid-derived suppressor cells (MDSCs) are a heterogeneous population defined as CD11b + Gr1 + cells. They can be roughly divided into granulocytic and monocytic subsets using Ly6G and Ly6C as markers, respectively. Both CD11b + Ly6G + and CD11b + Ly6C + cells have immunosuppressive activities, although different mechanisms may be used. The granulocytic subset is more often found expanded in tumour models and is involved in promoting tumour progression 1 , 2 , 3 , 4 , although anti-tumour effects have also been observed 5 . In the clinic, MDSCs were first identified in the peripheral blood of cancer patients as non-lymphoid haematopoietic suppressor cells 6 that have been subsequently shown to increase during progression in many cancer types 7 , 8 . As in the mouse, most human MDSCs carry markers of immature myeloid lineage cells and qualify as either granulocytic (CD11b + CD33 + CD15 + HLA-DR low ) or monocytic (CD11b + CD14 + HLA-DR low ) subsets 3 , 4 , 9 . Considerable information has been obtained about the biogenesis and functions of MDSCs. The cytokines responsible for MDSC accumulation include G-CSF 10 , 11 , 12 , 13 , 14 , GM-CSF 15 , IL-1β 16 , 17 , IL-6 18 , PGE2 19 , IFN-γ 20 , IL-4 21 and VEGF 22 . The immunosuppressive mechanisms used by MDSCs involve secretion of TGFβ, generation of nitric oxide and reactive oxygen species, and metabolic depletion of L -arginine by arginase 1 (refs 1 , 2 ). These activities can blunt cytotoxicity, block proliferation or induce apoptosis of cytotoxic T lymphocytes and natural killer cells. Other MDSC functions include formation of a pre-metastatic niche 23 , enhancement of tumour invasion 24 , 25 and stimulation of angiogenesis 25 . Despite this knowledge, we have a limited understanding of why and how individual tumours vary widely in their propensity to induce MDSCs. Here, we demonstrate that this propensity is determined by an oncogenic signalling pathway and linked to the subpopulation of tumour-initiating cells (TICs). RESULTS Inter-tumoral heterogeneity of MDSC infiltration We examined myeloid cells in a variety of syngeneic mammary tumour models of diverse genetic backgrounds and tumorigenic drivers. MMTV–WNT1, MMTV–WNT1–iFGFR and P53–PTEN double knockout (DKO) are genetically engineered mouse models in the FVB background. MMTV–WNT1 is a widely used model of basal-like tumours.
WNT1–iFGFR is a bigenic model based on MMTV–WNT1, in which FGFR signalling can be inducibly activated 26 , 27 . The P53–PTEN DKO was generated by conditional deletion of these tumour suppressors using a MMTV-driven Cre. P53N tumour lines initially arose from transplanted P53-null mammary gland tissues in BALB/c mice, and are maintained through mouse-to-mouse orthotopic transplantation. Despite the common loss of P53, P53N lines exhibited remarkable inter-tumoral heterogeneity in genomic copy number, gene expression profiles and TIC frequencies 28 . The 67NR–4TO7–4T1 series are established cell lines derived from a spontaneous tumour in BALB/c mice. Taken together, these reagents provide an unbiased representation of available syngeneic models. Mammary tumours were generated either by spontaneous tumorigenesis (MMTV–WNT1, WNT1–iFGFR and P53–PTEN DKO), or orthotopic transplantation of primary tumour tissues (P53N series) or cell suspensions (67NR, 4TO7, and 4T1). When tumours reached 1 cm 3 , we examined myeloid cell infiltration by immunofluorescence staining of S100A8, which was predominantly expressed by CD11b + Gr1 + cells rather than tumour cells ( Supplementary Fig. 1b ). Significant inter-tumoral heterogeneity was discovered ( Fig. 1a and Supplementary Fig. 1a ). Quantification of CD11b + Gr1 + cells in dissociated tumours was largely consistent with S100A8 staining ( Fig. 1b ). The accumulation of CD11b + Gr1 + cells is systemic, as they were also found in peripheral blood and the frequencies closely correlated with the tumour-infiltrating myeloid cells ( Fig. 1c, d ). Overall, the inter-model variations of CD11b + Gr1 + cells were as high as 50- to 100-fold. In contrast, tumours of the same model exhibited only a 2- to 5-fold variation ( Fig. 1b ). Thus, the local and systemic accumulation of CD11b + Gr1 + cells seems to be a stable trait of each tumour line. Figure 1: Inter-tumoral heterogeneity of MDSC infiltration. ( a ) Representative immunofluorescence staining of S100A8 in the indicated tumours. Green, S100A8; blue, DAPI (nucleus). Scale bar, 100 μm. ( b ) Flow cytometry quantification of tumour-infiltrating CD11b + Gr1 + cells. Animal numbers ( n ) in each model are indicated. Five independent experiments were performed with consistent results and one representative experiment is shown. ( c , d ) Flow cytometry quantification of CD11b + Gr1 + cells in peripheral blood. Animal numbers: ( c ) tumour-free: n = 4, MMTV–Wnt1: n = 5, MMTV–Wnt1–iFGFR: n = 7; ( d ) tumour-free: n = 4, P53N-A: n = 4, P53N-B: n = 2, P53N-C: n = 5. Three independent experiments were performed with consistent results and a representative experiment is shown. NS, not significant. ( e ) FACS analyses show that CD11b + cells in blood of P53N-C tumour-bearing mice express Ly6C at a lower and more variable level compared with normal neutrophils. Upper: Representative data of Ly6G high CD11b + leukocytes in tumour-free and P53N-C-tumour-bearing mice. Lower: Ly6C histogram of CD11b + Ly6G + cells in P53N-C-tumour-bearing (pink) or tumour-free (green) mice. Each panel is representative of six mice per group. ( f ) Top and middle: Haematoxylin and eosin (H&E) staining of blood cells in P53N-C tumour-bearing mice compared with tumour-free control. Scale bars, 50 μm. Bottom: Representative abnormal granulocytic cells. Scale bar, 5 μm. ( g ) In vitro T-cell proliferation, measured by CFSE assay, is inhibited by MDSCs. 
Histograms of CFSE levels in CD3- and IL-2-stimulated T cells with (bottom) or without (top) co-culture of MDSCs at 1:3 ratio. See Supplementary Fig. 1 for quantification. ( h ) IFN-γ concentration in conditioned medium of T cells cultured with and without MDSCs or normal neutrophils. In g , h , triplicate wells of each group were analysed. Two independent experiments were conducted with representative results shown. ( i ) iNOS expression in MDSCs and normal neutrophils by real-time qPCR. Data derived from three and two animals, respectively. ( j ) Box–whisker plot of mammary tumour size in animals treated with IgG control and Ly6G depletion antibody, respectively. ( k ) Pulmonary metastasis. From left: representative IVIS images, visible lesions, and H&E staining of lung metastases in control and anti-Ly6G groups and bioluminescence quantification. n = 4 animals per group, selected from larger group to have similar orthotopic tumour sizes. Scale bars, 10 mm, 5 mm and 50 μm from left to right. In j , k , the experiment was performed twice with similar consistent results. One representative experiment is shown. Error bars indicate s.e.m. P values are determined by two-tailed Student’s t tests. Statistics source data for c , d , k , are provided in Supplementary Table 4 . Full size image Most CD11b + Gr1 + cells expressed a high level of Ly6G and a variable level of Ly6C ( Fig. 1e ), and exhibited a granulocytic morphology ( Fig. 1f ). Neutrophils isolated from tumour-free mice also express CD11b and Ly6G. However, P53N-C tumour-induced CD11b + Ly6G + cells differ from normal neutrophils by their heterogeneous expression of Ly6C ( Fig. 1e ), which typically indicates an immature status 29 . Co-culture of CD11b + Gr1 + cells with T cells inhibited CD3- and IL-2-induced T-cell proliferation and IFN- γ production ( Fig. 1g, h and Supplementary Fig. 1c ), whereas normal neutrophils did not exhibit these effects ( Fig. 1h and Supplementary Fig. 1c ). Moreover, iNOS expression is over 500-fold higher in CD11b + Ly6G + cells than in normal neutrophils ( Fig. 1i ). Thus, these CD11b + Gr1 + cells are immunosuppressive and, therefore, MDSCs by definition. MDSCs can be depleted by administration of a monoclonal antibody against Gr1 (ref. 12 ). The depletion inhibited orthotopic growth of P53N-C tumours ( Fig. 1j ). Moreover, treatment following orthotopic tumour resection significantly reduced distant metastasis to lungs ( Fig. 1k ) demonstrating the tumour-promoting role of MDSCs. The oncogenic mTOR pathway dictates MDSC accumulation To identify the determinant of inter-tumoral heterogeneity of MDSCs, we performed reverse-phase protein array (RPPA) analysis of 200 (phos-)proteins covering major signalling pathways ( Supplementary Table 1 ). MDSC accumulation correlated with the AKT–mTOR and the MAPK pathways across the WNT1, WNT1–iFGFR and P53–PTEN DKO models, and only the AKT–mTOR pathway in the P53N series ( Fig. 2a ), suggesting that the mTOR pathway is a likely driver of MDSC accumulation. This was largely confirmed by western blot analysis in P53N tumours ( Fig. 2b ) as well as in the 67NR–4TO7–4T1 series ( Fig. 2c ). Figure 2: The mTOR pathway drives tumour-induced MDSC accumulation. ( a ) RPPA profiling in the indicated mammary tumour models. Signalling molecules involved in the mTOR and MAPK pathways have been selected to make heat maps. Each column represents a biological replicate. Each biological replicate is the mean of a technical triplicate. See Supplementary Table 1 for raw data. 
( b , c ) Western blotting with indicated antibodies to validate RPPA results in indicated tumour models. Each lane shows one independent tumour. Unprocessed original scans of blots are shown in Supplementary Fig. 8 . ( d ) MDSC quantification in blood of P53N-C tumour-bearing mice at the indicated time points and rapamycin treatment. Three independent experiments were performed and yielded similar results. One representative experiment with n = 5 animals per group is shown. ( e ) MDSC quantification in blood of animals carrying control 4T1 tumours or 4T1 expressing shRNA against the mTOR complex 1 protein Raptor. n = 8 and 24 animals for control and Raptor shRNA groups, respectively. ( f ) T-cell quantification in blood of the animals in e . n = 8 and 24 animals for control and Raptor shRNA groups, respectively. ( g ) P53N-C tumour growth curves under different treatment conditions. On day 32, the rapamycin-treated group was randomized into two sub-groups as indicated by the dashed line, one of which received transplantation of 4 × 10 6 MDSCs at two time points (indicated by vertical arrows). Tumour size was normalized for each sample relative to tumour size at the onset of differential treatments. n = 6 animals per group. P value calculation: general linear model accounting for repeated measurements at different time points. ( h ) Tumour growth curves of 4T1 cells expressing Raptor shRNA ( n = 24 animals) or scrambled shRNA control ( n = 8 animals). On day 15 after tumour transplantation, the Raptor shRNA group was randomized into three groups with or without adoptive transfer of MDSCs or normal neutrophils (NNs; arrows indicate time points of cell transfers). n = 8 per group after randomization. Experiments in e – h were performed twice with similar consistent results. Error bars indicate s.e.m., and P values are calculated by two-tailed Student’s t tests unless otherwise noted. Full size image Treatment with rapamycin, an mTOR inhibitor, reduced MDSCs in P53N-C-tumour-bearing mice ( Fig. 2d ). Rapamycin may affect non-cancer cells including MDSCs themselves. To verify that the mTOR activity responsible for MDSC accumulation is cancer cell-intrinsic, we employed a short hairpin RNA (shRNA) to deplete Raptor, an essential protein of the mTOR complex 1, in 4T1 cancer cells ( Supplementary Fig. 2 ). This depletion resulted in a significant reduction of MDSCs in vivo ( Fig. 2e ). We also observed an increase of overall T cells and a decrease of PD1 + T cells ( Fig. 2f ). These data strongly support that the oncogenic mTOR signalling may mediate this systemic immunosuppression. To further distinguish cancer cell-autonomous functions of mTOR from its systemic impact through MDSCs, we performed a ‘rescue’ experiment by adoptively transferring MDSCs into rapamycin-treated animals bearing P53N-C tumours ( Fig. 2g ), or into animals transplanted with 4T1 cells expressing the shRNA against Raptor ( Fig. 2h ). In both cases, tumour progression was partially restored ( Fig. 2g, h ). In contrast, normal neutrophils were unable to rescue tumour progression ( Fig. 2h ). Thus, at least part of the tumour-promoting effects of the mTOR pathway is through MDSC induction. The mTOR pathway drives expression of G-CSF We surveyed over 20 cytokines using multiplexed luminex assays in some P53N models. The serum G-CSF concentration was higher in the P53N-C model than in tumour-free or P53N-A. Other cytokines did not exhibit this trend ( Fig. 3a ). 
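Computationally, a multiplex screen like the Bioplex survey just described reduces to ranking analytes by their elevation in tumour-bearing versus tumour-free serum. The short Python sketch below illustrates that triage step only; the concentrations are hypothetical placeholders, not measurements from this study.

    # Rank cytokines by fold change in tumour-bearing vs tumour-free serum.
    # All numbers below are hypothetical illustrations.
    import numpy as np

    serum = {  # analyte: ([tumour-free replicates], [tumour-bearing replicates]), pg/ml
        "G-CSF":  ([120, 150], [5200, 4800, 6100]),
        "IL-6":   ([40, 55],   [60, 75, 50]),
        "GM-CSF": ([15, 20],   [18, 22, 25]),
    }

    for cytokine, (ctrl, tumour) in sorted(
            serum.items(),
            key=lambda kv: -np.mean(kv[1][1]) / np.mean(kv[1][0])):
        fold = np.mean(tumour) / np.mean(ctrl)
        print(f"{cytokine}: {fold:.1f}-fold over tumour-free")

In this toy ranking only the G-CSF-like analyte stands out, mirroring the pattern the authors report for their panel.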
Expression of a constitutively active mTOR mutant 30 in P53N-A tumours led to increased expression of G-CSF ( Fig. 3b ). Conversely, rapamycin treatment on P53N-C cells/tumours resulted in the reduction of G-CSF in conditioned medium or in serum ( Fig. 3c, d ). For 4T1 cells, either rapamycin treatment or genetic depletion of Raptor also decreased G-CSF expression ( Fig. 3e, f ). Taken together, these data support that G-CSF is a downstream target of the mTOR pathway in tumour cells. Figure 3: The oncogenic mTOR pathway drives MDSC accumulation through G-CSF. ( a ) Bioplex assays of the indicated cytokines in sera. Data are from two tumour-free, two P53N-A-tumour-bearing and three P53N-C-tumour-bearing mice. ( b ) Quantification of G-CSF expression by real-time qPCR in P53N-A tumour cells expressing a constitutively active mTOR mutant (S2215Y) or wild-type control. ( c ) Cell viability (by WST-1 assay) and G-CSF quantity (by ELISA) in P53N-C cells as a function of rapamycin concentration. ( d ) Quantification of serum G-CSF by ELISA in P53N-C-tumour-bearing mice with rapamycin or vehicle treatment. Two experiments were performed with consistent results. One representative experiment with n = 5 animals per group is shown. ( e ) Rapamycin’s effect on 4T1 cell viability and G-CSF expression. ( f ) ELISA quantification of G-CSF in conditioned medium of 4T1 cells expressing Raptor shRNA, the scrambled shRNA control or empty vector. In b , c , e , f , two independent experiments were conducted each with technical triplicates. One representative experiment is shown. ( g ) Growth curves of 4T1 cells expressing Raptor shRNA ( n = 12 animals) or scrambled shRNA ( n = 6 animals). On day 17 after tumour transplantation, the Raptor shRNA group was randomized into two groups ( n = 6 animals each), one of which received G-CSF. The P values were determined using general linear models accounting for repeated measurements at different time points. ( h ) Representative H&E staining and pulmonary metastasis quantification from the experiments in g , day 28 post tumour transplantation. n = 5 animals for the first two groups, and n = 4 for the third group. Ten fields were randomly chosen from each animal and averaged. P values are calculated by one-tailed Student’s t tests. Scale bars, 200 μm and 50 μm in the upper and lower panels, respectively. ( i , j ) Tumour volume ( i ) and lung metastasis ( j ) of 4T1 cells expressing G-CSF shRNA or vector control ( n = 8 animals). On day 17, the G-CSF shRNA group was randomized into three groups: untreated (G-CSF shRNA, n = 7 animals), treated with MDSCs (G-CSF shRNA + MDSCs, n = 7 animals) or treated with Gr1 + splenocytes of tumour-free mice (G-CSF shRNA + Gr1 + , n = 4 animals). Tumour volumes on day 24 and lung metastases on day 35 are shown. The lung metastasis burden was graded as described in Methods . P values are calculated by two-tailed Student’s t tests. Scale bar, 5 mm. Error bars indicate s.e.m. Statistics source data for c , i , j , provided in Supplementary Table 4 . Full size image We asked whether systemic restoration of G-CSF could rescue delayed tumour progression caused by mTOR inhibition. We administered recombinant G-CSF protein to mice bearing 4T1 tumours expressing the shRNA against Raptor. This treatment restored orthotopic tumour growth and pulmonary metastasis ( Fig. 3g, h ). Thus, G-CSF functionally mediates at least part of the tumour-promoting functions of mTOR. 
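The rapamycin titration against viability and G-CSF output (Fig. 3c,e) is a standard dose-response fitting problem. As a minimal sketch, not the authors' actual analysis, one can fit a three-parameter inhibitory Hill model with scipy; the doses and readings below are hypothetical.

    # Fit an inhibitory dose-response (three-parameter Hill) curve to ELISA-style
    # readings. Concentrations and readings are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, top, ic50, slope):
        """Signal decaying from `top` toward zero as dose increases."""
        return top / (1.0 + (dose / ic50) ** slope)

    dose = np.array([0.1, 1, 10, 100, 1000])   # nM rapamycin (hypothetical)
    gcsf = np.array([98, 90, 55, 22, 12])      # % of untreated control (hypothetical)

    params, _ = curve_fit(hill, dose, gcsf, p0=[100, 10, 1])
    top, ic50, slope = params
    print(f"fitted IC50 ~ {ic50:.1f} nM (top = {top:.0f}%, Hill slope = {slope:.2f})")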
The role of G-CSF in MDSC formation and tumour progression has been investigated in previous studies 10 , 11 , 12 , 13 , 14 , 31 . To confirm these effects in our models, we depleted G-CSF either by a neutralizing antibody in the P53N-C model or by shRNA-mediated knockdown in the 4T1 model. These approaches lowered serum G-CSF by 50% or 70%, respectively ( Supplementary Fig. 3a, b ), resulted in a decrease of MDSCs ( Supplementary Fig. 3c, d ), and retarded tumour progression and metastasis ( Fig. 3i, j ). Importantly, this retardation was reversed by adoptive transfer of MDSCs but not splenic Gr1 + cells from tumour-free mice ( Fig. 3i, j ). The depletion of G-CSF did not generate discernible effects on cancer cells in vitro ( Supplementary Fig. 3e ), arguing against a simple autocrine effect of G-CSF. Taken together, these data further demonstrate that G-CSF-promoted tumour progression is mediated by MDSCs. Pharmacological inhibition of mTOR may have confounding effects due to its roles in non-cancer cells. Diverse upstream signals may stimulate mTOR activity, some of which are more cancer-specific, such as activation of FGFR and loss of PTEN ( Fig. 2a ). Indeed, the induction of FGFR signalling in WNT1–iFGFR bigenic tumours led to a sevenfold increase of serum G-CSF ( Fig. 4a ). 4T1 cells are known to possess autocrine FGFR signalling 32 . Reanalysis of a published data set revealed that treatment of 4T1 cells by TKI258, a potent FGFR inhibitor, reduces Csf3 expression, the gene encoding G-CSF. Other MDSC-inducing cytokines did not exhibit the same trend ( Fig. 4b ). Furthermore, inhibition of the upstream FGFR signalling by BGJ398 also reduced G-CSF both in vitro ( Fig. 4c ) and in vivo ( Fig. 4d ). Accordingly, this inhibition also led to a significant reduction of MDSCs ( Fig. 4e ). Simultaneous inhibition of both EGFR and FGFR significantly reduced G-CSF expression in the P53N-C model ( Fig. 4f ), although targeting FGFR alone was ineffective, suggesting a redundant role of these receptor tyrosine kinases. Thus, G-CSF is a mediator of mTOR-driven MDSC accumulation. Targeting upstream activators of the mTOR pathway such as FGFR, therefore, may represent viable therapeutic strategies to mitigate MDSC-mediated tumour progression, in addition to their direct effects on cancer cells. Figure 4: FGFR, an upstream activator of the mTOR pathway, can be targeted to reduce G-CSF production. ( a ) Quantification of serum G-CSF by ELISA in mice bearing the indicated tumours. Number of animals per group: no tumour: n = 4; MMTV–Wnt1: n = 5; MMTV–Wnt1–iFGFR: n = 7. Two independent experiments were performed with similar results. One representative experiment is shown. ( b ) Gene expression alteration of several cytokines related to MDSC expansion in 4T1 cells on treatment with TKI258 ( n = 3 biological replicates, raw data available from GSE19222 ). ( c ) 4T1 cell viability (measured by WST-1 assay) and G-CSF quantity (by ELISA) are plotted as a function of concentration of BGJ398. All results are presented as percentage of control (untreated). For each data point, cells were cultured in three wells. The experiment was performed twice with consistent results. One representative experiment is shown. ( d ) Quantification of serum G-CSF by ELISA in 4T1-tumour-bearing mice and tumour-free mice at the indicated time points and treatment. ( e ) Quantification of MDSCs in the blood of 4T1-tumour-bearing mice at the indicated time points with or without BGJ398 treatment. 
For d , e , three independent experiments were performed with similar results. A representative experiment with n = 4 animals per group is shown. ( f ) Relative G-CSF concentration as measured by ELISA in the supernatant of P53N-C cells treated with the indicated drugs for 6 h. For each data point, cells were cultured in three wells. The experiments were repeated twice with consistent results. Error bars indicate s.e.m., and P values are calculated by two-tailed Student’s t -tests. Statistics source data for a , d , e , are provided in Supplementary Table 4 . Full size image Genomic analyses support the role of G-CSF and MDSCs in mTOR-driven immunosuppression and tumour progression We examined the Cancer Genome Atlas (TCGA) 33 data sets comprised of genomic, transcriptomic and proteomic data of human breast cancers. To assess the G-CSF/MDSC status, we obtained a gene expression signature derived from the haematopoietic cells of human patients subjected to G-CSF treatment 34 . This signature (designated as G-CSF-sig) encompasses genes overexpressed by G-CSF-induced human haematopoietic cells. In TCGA data sets, the G-CSF-sig positively correlates with mTOR pathway activity as indicated by the RPPA data, and inversely correlates with T-cell antigen receptor (TCR) pathway genes ( Fig. 5a ). As one of the upstream activation signals, the genomic amplification and expression of FGFR positively correlated with the G-CSF-sig ( Fig. 5a ). The correlation among FGFR, the G-CSF-sig, and the TCR genes was also confirmed in other clinical data sets ( Supplementary Fig. 4a–d ) 35 , but not in a panel of breast cancer cell lines in vitro ( Supplementary Fig. 4e ), suggesting that this association is not due to intrinsic overlap between the gene sets, rather it seems to be dictated by non-tumour cells. Figure 5: A G-CSF-responsive gene signature (G-CSF-sig) links the mTOR pathway to MDSC infiltration and immunosuppression in human breast cancer. ( a ) Heat maps showing the expression of the G-CSF signature as a single score (top), RPPA data for phospho-proteins reflecting the mTOR activity (middle), and TCR pathway components (bottom). The red and blue sticks above the heat maps indicate tumour samples whose FGFR-1, -2 or-3 expression is higher than the population means by >2 standard deviations at the levels of transcription (red) or genomic copy number (blue). The P values are based on Student’s t -test on the Pearson correlation coefficients among different variables. ( b ) Gene Set Enrichment Analysis (GSEA) shows the correlation between G-CSF-sig and gene sets up- or downregulated by rapamycin and pp242. NES, normalized enrichment score. q , false discovery rate. P values were computed on the basis of comparison against random simulation as implemented in GSEA. ( c ) Box–whisker plots of G-CSF-sig scores in different subtypes of breast cancer. ( d ) Kaplan–Meier curves show distant metastasis-free survival in the EMC-MSK data set divided into tertiles based on G-CSF-sig score. P values were calculated using the log-rank test. ( e ) Scatter plots show the correlation between G-CSF expression and the mTOR activity as indicated by the direct substrate pS6K(T389) determined by western blotting. Each point represents an independent tumour. Seven different PDXs with 1–3 independent tumours were examined ( n = 13 tumours). The P value was computed using Student’s t -test on the Pearson correlation coefficient. 
( f ) Flow cytometry quantification of MDSCs in peripheral blood in animals carrying the indicated PDX lines. n = 4, 4, 4, 7 and 3 animals for the five PDX lines (from left to right), respectively. Error bars indicate s.e.m. P = 9.8 × 10 −10 based on one-way ANOVA across different PDX lines, and 0.00045 by two-sided Student’s t -test between low-pS6K(T389) PDXs (BCM2147, HCI11 and BCM4195, see e ) and high-pS6K(T389) PDXs (HCI3 and MC1, see e ). ( g ) Two PDX lines, MC1 and HCI11, were treated with Torin 1 or rapamycin in 3D suspension cultures. The expression of G-CSF was determined by qPCR. The experiment was repeated twice with similar results. One representative experiment is shown. Upper panel: n = 4 biological replicates for each group. Lower panel: n = 6, 6 and 5 biological replicates for the three groups, respectively. Error bars indicate s.e.m. P values are calculated by two-tailed Student’s t -tests. Statistics source data for g , f are provided in Supplementary Table 4 . Full size image To further evaluate the mTOR pathway, we used two gene expression signatures denoting the responses of cancer cells to mTOR inhibitors, rapamycin 36 and pp242 (ref. 37 ). The G-CSF-sig inversely correlates with these signatures in gene set enrichment analysis ( Fig. 5b ), suggesting a positive association with mTOR activity. The G-CSF-sig is expressed at a higher level in both the Basal and Luminal B (LumB) breast cancer subtypes ( Fig. 5c ), and correlates with the risk of distant metastasis ( Fig. 5d ). The G-CSF-sig is associated with poor prognosis in multivariate prognosis analysis in two different data sets with long-term follow-up 35 , 38 ( Supplementary Fig. 4f ). Tumour-induced MDSC accumulation seems independent of the adaptive immune system as it was also observed in immune-deficient mice ( Supplementary Fig. 4g ). Therefore, it was possible to test human xenografts for their ability to induce MDSCs. A strong connection between mTOR activity and G-CSF expression was observed in a panel of patient-derived xenografts (PDXs) 39 , 40 ( Fig. 5e ). Specifically, we assayed pS6K(T389) as an indicator of mTOR activity by western blotting, and G-CSF expression by quantitative real-time PCR ( Supplementary Fig. 4h ). PDXs expressing a higher level of pS6K(T389) and G-CSF also induced more MDSCs in vivo ( Fig. 5f ). Moreover, treatment with mTOR inhibitors, rapamycin or Torin 1, reduced G-CSF in two of the PDX lines cultured in three-dimensional (3D) suspension medium ( Fig. 5g ). Collectively, these data suggest that human tumours with an activated mTOR pathway are more likely to have an increased infiltration of MDSCs, a decreased infiltration of T cells and a worse prognosis. TICs exhibit enhanced G-CSF production In addition to inter-tumoral heterogeneity, we also observed an intra-tumoral heterogeneity of G-CSF production. Intracellular staining of G-CSF, pS6K(T389), pS6(S235–S236) and pSTAT3(S727) followed by fluorescence-activated cell sorting (FACS; Supplementary Fig. 5a, b ) revealed a correlation between G-CSF and mTOR activity at the single-cell level ( Fig. 6a ). Among the top 5% of G-CSF high cells (>2 × mean) in 4T1 and P53N-C models, approximately 60–80% are CD24 high CD29 high , a widely used definition of TICs, as opposed to 30% in the entire population ( Fig. 6b and Supplementary Fig. 5c, d ). We then examined a different set of TIC markers, EpCAM and CD49f. 
Among P53N-C cells, the top 5% G-CSF high cells fell into two distinguishable populations, both expressing EpCAM and CD49f to different degrees. Together they account for over 80% of G-CSF high cells ( Fig. 6c ). Sca-1 and c-Kit have also been used as TIC markers in the 4T1 model 41 , and Sca-1 high c-Kit high cells were also enriched in the G-CSF high subpopulation of both 4T1 and P53N-C tumours ( Fig. 6d ). Taken together, these data support the conclusion that G-CSF high cells enrich for TICs. Figure 6: G-CSF high cells enrich for TICs. ( a ) G-CSF and mTOR substrates were analysed by intracellular FACS staining. Left: Isotype-control antibody and representative staining of G-CSF and pS6K(T389) in P53N-C. Right: Median fluorescence intensity (relative to total population) of indicated phos-proteins in G-CSF high (H) versus G-CSF low (L) subpopulations. The average values of three technical replicates are shown. Two independent experiments were performed and one representative experiment is shown. ( b ) 4T1 (left) or P53N-C (right) cells with G-CSF expression (top 5.3%, designated as G-CSF high ) were compared with total population in CD24 and CD29. ( c ) G-CSF high P53N-C were analysed for CD49f and EpCAM expression relative to total population. Two populations with slightly different EpCAM/CD49f expression were identified and assigned as EpCAM high CD49f med and EpCAM med CD49f high . ( d ) 4T1 and P53N-C cells were analysed for Sca-1 and c-Kit. Graph shows percentages of Sca-1 + c-Kit + subgroup. For b – d , n = 3 independent experiments each with technical replicates. Mean values of all three experiments are used to generate the plot and calculate P values. ( e ) ELISA validation: CD24 high CD29 high and Sca-1 + c-Kit + cells were isolated by FACS and analysed for G-CSF (ELISA) and compared with mock-sorted total population. n = 6 (CD24 and CD29) and 3 (Sca-1 and c-Kit) independent FACS experiments. ( f , g ) SAGE libraries of CD44 + and CD24 + cell populations were analysed for CSF3 ( f ) and FGFR1 ( g ) expression. n = 5 independent tumours/specimens. For each, a P value was computed on the basis of the Poisson distribution to test the hypothesis that CD44 + and CD24 + populations have the same frequencies of the transcripts. Bonferroni correction was used to adjust for multiple comparisons. Fisher’s method was then used to combine the P values. Raw data are in Supplementary Table 6 of ref. 43 . ( h ) Overlap between genes preferentially expressed in the human CD44 + cell fraction and genes downregulated by the mTOR inhibitor rapamycin or the FGFR inhibitor TKI258 is shown in Venn diagrams. Numbers of overlapping genes are indicated. P value was computed using Fisher’s exact tests. Error bars indicate s.e.m., and P values are calculated by two-tailed Student’s t -tests unless otherwise noted. Statistics source data for b – e are provided in Supplementary Table 4 . Full size image The converse hypothesis that TICs overexpress G-CSF is supported by the elevated G-CSF level in CD24 high CD29 high cells and Sca-1 + c-Kit + subpopulations as indicated by FACS ( Supplementary Fig. 5e, f ), or by enzyme-linked immunosorbent assay (ELISA; Fig. 6e ). A similar conclusion was also reached using EpCAM and CD49f as markers ( Supplementary Fig. 5g, h ). To investigate whether TICs have enhanced ability to induce MDSC differentiation as compared with non-TICs, we treated primary bone marrow cells of naive mice with tumour-cell-conditioned medium.
On a per-cell basis, tumour cells grown under a TIC-enriching condition (mammospheres in suspension medium 42 ) secreted more G-CSF than under the condition favouring differentiation (2D cultures). Accordingly, tumour-conditioned medium obtained under the TIC-enriching condition induced increased MDSCs (CD11b + Ly6G + Ly6C variable ). This increase was reversed by G-CSF-neutralizing antibodies ( Supplementary Fig. 5i ). To test the above conclusions in human cancer, we analysed a SAGE (serial analysis of gene expression) data set of purified CD44 + and CD24 + cancer cells 43 . CSF3 transcripts were exclusively found in CD44 + cells but not CD24 + cells, suggesting that cancer cells with TIC-like phenotype overexpress G-CSF ( Fig. 6f ). CD44 + cells also express a significantly higher level of FGFR1 transcripts ( Fig. 6g ). To examine whether CD44 + cells may possess increased activity of the FGFR and mTOR pathway, we intersected the CD44 + -associated genes 43 with gene expression signatures that reflect mTOR inhibition 37 by rapamycin or FGFR inhibition by TKI258 (ref. 32 ), and discovered statistically significant overlaps ( Fig. 6h ). Finally, we treated human primary bone marrow cells with conditioned medium from PDX cells. The TIC-enriched conditioned medium induced more differentiation of CD11b + CD33 + CD15 + CD16 − cells ( Supplementary Fig. 5j ), which are known to be immature granulocytic cells 9 , 44 . Taken together, these results provide evidence for the clinical relevance of the connection between TICs, G-CSF and MDSCs. MDSCs enhance TIC features through Notch signalling We asked how MDSCs contribute to tumour initiation. MDSCs have been shown to promote angiogenesis 24 , 45 . The perturbation of the mTOR–G-CSF axis did not seem to affect tumour angiogenesis, as suggested by immunofluorescence staining of CD31 ( Supplementary Fig. 6 ). Monocytes and macrophages have been shown to enhance TIC features 46 . We asked whether granulocytic MDSCs have similar functions. The presence of MDSCs increased the number of mammospheres formed by 4T1 cells ( Supplementary Fig. 7a ). Moreover, MDSCs also disproportionally increased the number of cancer cells expressing murine mammary TIC markers including CD24, CD29, CD49f, Sca-1 and c-Kit ( Supplementary Fig. 7b ). Co-culturing with MDSCs stimulated several stemness-related genes in cancer cells, including Nanog , LGR5 and MSI-1 (ref. 47 ) ( Fig. 7a ). Cancer cells extracted from 4T1 tumours with G-CSF knockdown exhibited reduced frequencies of CD24 high CD29 high cells and EpCAM high CD49f + cells ( Fig. 7b ), and a decreased ability to form mammospheres ( Supplementary Fig. 7c ). These alterations could be rescued by adoptive MDSC transfer ( Fig. 7b ), suggesting that MDSCs contribute to the expansion of TICs in vivo . Taken together, these data indicate that MDSCs enhance the tumour initiation. Figure 7: MDSCs promote breast cancer TIC features. ( a ) Gene expression of Nanog, Lgr5 and Msi-1 in 4T1 mammosphere cultures with or without MDSCs. 4T1 and MDSCs were co-cultured in direct contact, or separated (0.4 μm pore size, Boyden chamber (B.C.)). Cells were cultured in different wells to form 3–6 technical replicates. Three experiments yielding consistent results were performed. ( b ) Percentage of CD24 high CD29 high cells (left) or EpCAM high CD49f + cells (right) in mammary tumours formed by 4T1 cells expressing shRNA against G-CSF or vector control. 
Some of the mice bearing G-CSF shRNA tumours were subjected to MDSC transplantation (G-CSF shRNA + MDSCs). n = 3, 3 and 2 tumours for the three groups, respectively. ( c ) Gene expression of three canonical Notch target genes in the co-culture experiments described in a . Three to six technical replicates were used. Two experiments yielding consistent results were performed. ( d ) Relative bioluminescence intensity of cancer cells expressing a luciferase Notch reporter, either cultured alone or together with MDSCs (+MDSCs). Dominant-negative RBPJ (DN-RBPJ) or MAML (DN-MAML), or control vector was co-expressed with the Notch reporter in cancer cells. n = 13 wells of cells that were independently set up, transfected with plasmids, and co-cultured with MDSCs/NNs. NS, not significant. ( e ) Quantification of EpCAM high CD49f + cell frequencies in cancer cells cultured in 3D suspension medium either alone, with MDSCs (+MDSCs) or normal neutrophils (+NNs). DN-RBPJ or DN-MAML, or control vectors were expressed in cancer cells. The P value was determined by two-way ANOVA across various treatment groups with different genetic perturbations as one factor (three levels: control, DN-RBPJ and DN-MAML) and co-culturing conditions as the second factor (three levels: control, +MDSCs and +NNs). Data were derived from two independent experiments. ( f ) MC1 and HCI11 cells were maintained in 3D suspension cultures with or without MDSCs. CD24 and CD44 expression was assessed by FACS. The P values were determined using two-way ANOVA with different co-culturing conditions as one factor (two levels: −MDSCs and +MDSCs), and different PDX lines (two levels: HCI11 and MC1, left) or semagacestat treatment (two levels: ‘−’ and ‘+’, right) as the second factor. Data were obtained from two (left) or three (right) independent experiments. ( g ) Heat maps show expression of G-CSF-sig and the indicated Notch target genes. Pearson correlation coefficient between G-CSF-sig and the sum of the four genes, and the corresponding P value (by two-sided Student’s t -test on correlation coefficients) is shown. Error bars indicate s.e.m., and P values are calculated by two-tailed Student’s t -tests unless otherwise noted. Statistics source data for b , d – f are provided in Supplementary Table 4 . Full size image Direct cell–cell interaction seemed to be critical for this function, as MDSCs separated in the Boyden chambers were unable to enhance the stemness-related genes ( Fig. 7a ). This points to pathways that require cell juxtaposition and dictate stem or progenitor cell properties. The Notch pathway meets these criteria 48 . Indeed, MDSC co-culturing significantly increased expression of multiple canonical Notch target genes in a contact-dependent fashion ( Fig. 7c ). To determine whether this increase is restricted to cancer cells, we transduced a Notch reporter 49 into cancer cells, and observed that the reporter activity was enhanced by MDSCs. This enhancement was abolished by co-expression of a dominant-negative (DN) mutant of Rbpj 50 or MAML 51 in cancer cells ( Fig. 7d ). Consistently, the MDSC-mediated increase of CD49f + EpCAM high cells was reduced by DN-MAML or DN-Rbpj ( Fig. 7e ). Normal neutrophils were unable to induce the same effects ( Fig. 7e ). In human models, co-culturing of MDSCs and MC1 or HCI11 cells (isolated from the respective PDX models) significantly increased the proportion of CD44 + CD24 − cells ( Fig. 7f ). This increase was almost completely abolished by a Notch inhibitor, semagacestat ( Fig. 7f ).
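Several P values in this paper are described as computed "by Student's t-test on the Pearson correlation coefficient" (for example, the Fig. 5e and Fig. 7g legends). The conversion is mechanical: t = r·sqrt(n−2)/sqrt(1−r²) on n−2 degrees of freedom. A short Python sketch with hypothetical paired scores, included only to make that step explicit:

    # P value for a Pearson correlation via Student's t-test on r.
    # The paired measurements below are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    gcsf_sig  = np.array([0.2, 0.8, 1.1, 1.9, 2.4, 3.0, 3.1])  # signature scores
    notch_sum = np.array([0.5, 0.9, 1.4, 1.6, 2.8, 2.7, 3.5])  # summed Notch targets

    n = len(gcsf_sig)
    r = np.corrcoef(gcsf_sig, notch_sum)[0, 1]
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    p = 2 * stats.t.sf(abs(t), df=n - 2)          # two-sided
    print(f"r = {r:.3f}, t = {t:.2f}, p = {p:.4f}")
    print(stats.pearsonr(gcsf_sig, notch_sum))     # scipy's built-in test agrees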
Finally, the G-CSF-sig correlates with Notch target genes in the TCGA data set ( Fig. 7g ). Taken together, these results strongly support that MDSCs enrich TICs through Notch signalling in cancer cells. To test the impact of mTOR–MDSC cascade on tumour initiation in vivo ( Fig. 8a ), we transiently treated P53N-C-tumour-bearing mice with rapamycin. A subset of the rapamycin-treated mice also received transplantation of exogenous MDSCs to selectively rescue the MDSC-mediated effects downstream of the mTOR pathway. The tumours were then subjected to tumour initiation assays without further rapamycin or MDSC treatment. As expected, compared with control tumours, rapamycin-treated tumours exhibited a significant decrease of CD29 + EpCAM high CD49f + cells ( Fig. 8b ). Provision of exogenous MDSCs reversed this decrease ( Fig. 8b ). When transplanted into untreated mice at high cell numbers (5,000 and 1,000 cells per mouse), the rapamycin-treated group exhibited slower tumour progression ( Fig. 8c and Supplementary Fig. 7d ). At low cell numbers (200 cells and 40 cells per mouse) tumour initiation is significantly delayed in this group ( Fig. 8d and Supplementary Fig. 7e ). There was also a statistically significant decrease of TIC frequency compared with the other groups ( P = 0.016 by extreme limiting dilution analysis 52 ). The effects of rapamycin on tumour size, latency and TIC frequency were all partially reversed by concomitant transfer of MDSCs ( Fig. 8c, d and Supplementary Fig. 7d, e ). These data show that the transient inhibition of mTOR signalling reduced TIC frequency, and, more importantly, this reduction is partially mediated by MDSCs. Figure 8: The mTOR–MDSC cascade increases TIC frequency in vivo . ( a ) Schematics of experiments to test the MDSC-mediated effects of the mTOR pathway on tumorigenesis capacity of P53N-C cells. P53N-C tumours were allowed to grow to about 0.5 cm 3 before being randomized into three groups: untreated, Rapa (rapamycin 5 mg kg −1 , three times per week for two weeks), and Rapa + MDSCs (4 × 10 6 cells transplanted twice a week for two weeks). After the treatment, tumours were extracted and subjected to the assays shown in b – d . ( b ) Quantification of frequencies of CD29 + EpCAM high CD49f + cells of P53N-C tumours subjected to the treatment shown in a . n = 7, 4 and 4 for the three groups, respectively. Error bars, s.e.m. ( c ) Results of tumour initiation assays. Tumour formation after injection of equal numbers of tumour cells was compared. Tumour size on day 17 post orthotopic tumour cell injection (5,000 cells per mouse). n = 5 mice per group. Error bars, s.e.m. ( d ) Kaplan–Meier curve of tumour-free survival. Low tumour cell number injection (40 cells per mouse) reveals increased latency of tumour cells derived from rapamycin-treated mice. Number of mice with tumours at the end of observation time and total numbers of mice are also indicated for each condition. P values were computed by log-rank test. Statistics source data for b are provided in Supplementary Table 4 . Full size image DISCUSSION Our results indicate that mammary tumours relying on different oncogenic pathways may also differ in their ability to alter the immune system. Recent genomic studies have suggested enormous heterogeneity of tumour-infiltrating immune cells 53 , 54 , 55 . However, why and how different tumours evolve to enrich distinct immune cells remains poorly understood. 
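As background to the TIC-frequency statistics above: extreme limiting dilution analysis (ref. 52) rests on a single-hit Poisson model in which a graft of d cells contains no TIC with probability exp(−f·d), where f is the TIC frequency. The Python sketch below is a bare maximum-likelihood estimate under that model, with hypothetical take rates; the published ELDA method additionally fits a complementary log-log GLM with confidence intervals and group comparisons.

    # Single-hit Poisson estimate of TIC frequency from a limiting dilution assay.
    # Doses and outcomes are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import minimize_scalar

    doses    = np.array([5000, 1000, 200, 40])  # cells injected per mouse
    injected = np.array([5, 5, 5, 10])          # mice per dose
    tumours  = np.array([5, 5, 3, 2])           # mice that formed tumours

    def neg_log_lik(log_f):
        f = np.exp(log_f)
        p_take = 1 - np.exp(-f * doses)         # P(graft contains >= 1 TIC)
        p_take = np.clip(p_take, 1e-12, 1 - 1e-12)
        return -np.sum(tumours * np.log(p_take)
                       + (injected - tumours) * np.log(1 - p_take))

    fit = minimize_scalar(neg_log_lik, bounds=(np.log(1e-6), np.log(1.0)),
                          method="bounded")
    f_hat = np.exp(fit.x)
    print(f"estimated TIC frequency ~ 1 in {1 / f_hat:.0f} cells")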
Here, our data demonstrate how a tumour cell-intrinsic oncogenic pathway, the mTOR pathway, determines a tumour’s capacity to accumulate granulocytic MDSCs, providing one partial answer to these questions. Our data demonstrate that G-CSF is a downstream target gene of the mTOR pathway, and mediates MDSC accumulation. The roles of MDSCs and G-CSF in tumour progression and therapeutic responses have been elucidated 12 , 13 , 16 , 18 , 31 . However, the role of mTOR–G-CSF cascade has not been reported, and these results extend the oncogenic functions of the mTOR pathway from a cancer cell-intrinsic to a systemic level. Our data suggest a therapeutic strategy to combine current immunotherapies with mTOR inhibitors, which has already been applied to endocrine-therapy-resistant ER + tumours in the clinic 56 . A recent study used a PI(3)K inhibitor and demonstrated synergistic effects with immune checkpoint blockade 57 , supporting that inhibition of the PI(3)K–mTOR pathway may help overcome immunosuppression and enhance immunotherapies. A potential caveat of this strategy is that the PI(3)K–mTOR pathway also plays an important role in T cells 58 . As a result, PI(3)K and mTOR inhibitors themselves are expected to be immunosuppressive. Here, our data provided mechanistic insights into potential upstream targets including FGFR and G-CSF. Drugs blocking FGFR or G-CSF may inhibit MDSC accumulation without potentially detrimental effects on the adaptive immune system. MDSCs enhance TIC features. This is consistent with the previously reported role of MDSCs in ovarian cancer 59 . The mechanism here, however, seems to be distinct—through the Notch pathway in tumour cells. In addition, our data also indicate that TICs enrich the ability to produce G-CSF, and hence contribute more significantly to MDSC accumulation. Taken together, we identified a feedforward loop between TICs and MDSCs, suggesting that the tumorigenic potential may evolve hand in hand with tumour-induced immunosuppression. Among the variety of tumour models we examined, only a subset resulted in granulocytic MDSC accumulation. Thus, these cells are not universally required, and there may be different immune cells playing distinct roles in other tumour contexts. For instance, another major type of MDSC, monocytic MDSCs, also possess potent immunosuppressive activities 37 . Further studies will be necessary to determine whether monocytic MDSCs are involved in different subsets of breast cancers, and are regulated by different oncogenic pathways. □ Methods Animal studies and tumour models. All animal experiments were carried out in accordance with a protocol approved by the Baylor College of Medicine Institutional Animal Care and Use Committee. Female animals of 5–6 weeks are used as the recipients of tumour tissue or cell line transplantation. BALB/C and SCID/Beige were used as recipients of syngeneic tumours (P53-null series and 4T1–4TO7–67NR cell line series) or PDXs, respectively. MMTV–WNT1, MMTV–WNT1–iFGFR and P53–PTEN DKO mice are in the FVB background. The mouse breast cancer models were as follows: P53N-A (formerly assigned T11), P53N-B (formerly assigned 2225L) and P53N-C (formerly assigned 2208L) from our P53-null mouse mammary tumour bank described in ref. 28 are on the BALB/C background and were used in syngeneic BALB/C mice unless stated otherwise. 
Models on the FVB background and maintained in FVB strain of mice: MMTV–Wnt1 (termed Wnt1 for brevity) and MMTV–Wnt1 tumours transduced with inducible FGFR (termed Wnt1–iFGFR) have been described elsewhere 27 . The P53–PTEN DKO was generated by conditional deletion of these tumour suppressors using a MMTV-driven Cre through nipple-injection of adenoviral Cre into transgenic mice carrying respective Flox alleles of P53 and PTEN. Breast cancer PDXs were maintained by injecting 1–2 mm size tumour pieces into fat pad-cleared mammary glands of SCID Beige mice. The development of PDX lines was conducted under the Institutional Review Board-approved protocols, and was documented in previous studies 39 , 40 . The current study used already-established PDXs that had been de-identified, and therefore, has been granted protocol exemption by the Institutional Review Board of Baylor College of Medicine for not involving human subjects. PDXs used in this study: HCI11 (ER + , FGFR + ), HCI3 (ER + ), BCM3143 (Her2 + ) and triple-negative subtype MC1, BCM2147, BCM4272 and BCM4195, which were obtained from two independent sources 39 , 40 . Cell lines. None of the cell lines used in this study is listed in the database of commonly misidentified cell lines maintained by ICLAC and NCBI Biosample. 4T1, 4TO7 and 67NR cell lines were purchased from Karmanos Cancer Institute. No cell line authentication was performed. Cell lines were subjected to bi-monthly tests for mycoplasma contamination. Lentivirus transduction of tumour cells. The expression vector for a firefly luciferase–GFP fusion protein (FLUC–PWIPZ) was packaged into lentivirus in 293T cells with pMD2 and pSpAX2 packaging system. Lentiviral stocks were filtered and concentrated by ultracentrifugation. To label P53N-C tumour cells, tumours were digested into single-cell suspensions by collagenase 3 treatment, and incubated with FLUC–PWIPZ lentivirus and Polybrene for 24 h under mammosphere assay conditions to preserve tumour characteristics. At 48 h culture period, successfully labelled cells were isolated by FACS sorting of GFP-positive cells and injected into mammary glands of BALB/C mice for tumour development. Tumours that retained GFP and luciferase signal (about 40% of tumours) were collected and comprised the tumour stock used in experiments. 4T1 cells were transduced with G-CSF-specific shRNA, Raptor-specific shRNA or a scramble-control nonspecific shRNA using the lentiviral GIPZ vector system (Thermo Scientific Open Biosystems) that allows puromycin selection to obtain a pure transduced cell population. Knockdown efficiencies were determined by gene-specific real-time qPCR and by ELISA (R&D Systems) for G-CSF or by quantitative western blotting for Raptor (Odyssey System, rabbit anti-raptor, Cell Signaling Technology). P53N-A model-derived tumour cells were transfected using Xtreme Gene HP DNA Transfection Reagent (Version 08, Roche) with expression plasmid for a constitutively active mTOR mutant, pcDNA3–AU1–mTOR-S2215Y, which was a gift from F. Tamanoi, Tarbiat Modares University, Iran (Addgene plasmid no. 26037). In vivo treatments. 1–2-mm-size tumour pieces were implanted orthotopically, in the fourth mammary gland on the left side of the animal. When tumour size reached an average of 80 mm 3 , treatments were initiated as indicated in figure legends. In all experiments, the initial implantation was conducted on animals at the age of 5–6 weeks. 
BGJ398 was dissolved at 10 mg ml −1 in dimethylsulfoxide (DMSO), and mixed with PEG300/5% glucose (3:1) for administration by oral gavage (15 mg kg −1 , daily, 5 d per week). Rapamycin was administered by intraperitoneal (i.p.) injection (100 μg in 200 μl of 10% PEG400, 10% TWEEN80 and 4% ethanol; dosage 5 mg kg −1 , three times per week). Anti-Ly6G treatments consisted of two i.p. injections per week of 100 μg anti-Ly6G (BioXCell) in PBS. Anti-G-CSF (R&D Systems) was i.p.-injected daily, five times per week, at a dose of 10 μg per injection. Control animals received an equal amount of isotype-control antibody (BioXCell). Recombinant human G-CSF (Novoprotein, specific activity: 6 × 10 7 IU mg −1 ) was injected subcutaneously in the flank at 1.5 μg per injection in 150 μl PBS every 2 days for the duration indicated in the figures. Gr1 + cells of tumour-bearing mice (MDSCs), or for controls, from tumour-free mice, were enriched using a magnetic cell separation kit (Biotinylated anti-mouse Gr1, BD Pharmingen, Biotin Selection Kit, Stem Cell Technologies) yielding >95% purity. For in vivo transfer, cells were washed in PBS and immediately injected through the retro-orbital route. Two consecutive injections of 4 × 10 6 cells spaced 5 d apart were applied. Tumour and tissue analysis. Tumour length ( L ) and width ( W ) were measured with a calliper. Tumour volumes were calculated using the formula L × W 2 × π/6. MDSCs in tumour sections were identified by immunohistochemistry with anti-S100A8 (clone 335806, R&D Systems). Immunofluorescence staining of 5 μm tumour sections was carried out as previously described 60 . To assess tumour vascularity, tumour sections were stained with CD31 antibody, followed by DAPI staining of nuclei. Images of stained sections were analysed for number of S100A8 + cells, number of DAPI + nuclei, and area covered by CD31 with the help of ImageJ and Photoshop. The images shown in Fig. 1a, f and Supplementary Fig. 6 are representative of at least five randomly picked fields from multiple tumours. Lung metastasis was quantified by bioluminescence imaging (luciferase-labelled tumour cells) or enumeration of visible lesions: on euthanization, animals were perfused with PBS, and lungs were excised and immediately subjected to bioluminescence measurement with IVIS Lumina II equipment. Then lungs were fixed in 4% PFA and visible lesions were counted with the aid of a magnifying stereoscope. Lung metastasis nodules were counted by two people who were blinded with regard to a sample’s experimental-group origin. Severity of metastasis was scored on the basis of these metastasis nodule counts. Paraffin-embedded lung samples were also submitted to haematoxylin and eosin (H&E) staining to reveal the size of lung metastases. For quantification, 10 stacked H&E-stained sections per lung, spaced 200 μm to cover a representative portion of the whole lung, were evaluated. Sections were scanned and total lung area and metastasis-covered area were measured using Photoshop. Figures 1k and 3h, j show the average level of lung metastasis burdens in their corresponding group (group sizes indicated in the corresponding figure legends). Blood was drawn and collected in EDTA-coated tubes. To separate plasma from blood cells, a 15 min centrifugation at 1,500 g , 4 °C, was performed. Plasma was stored at −80 °C before analysis by Bioplex (23-cytokines-kit, Biorad) or ELISA (mouse G-CSF DuoSet, R&D Systems). Blood cells were subjected to red blood cell lysis, followed by FACS staining or other analyses.
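The calliper formula given in the tumour and tissue analysis section above (V = L × W² × π/6, the ellipsoid approximation) translates directly into code; the example call is illustrative.

    # Ellipsoid approximation of tumour volume from calliper measurements,
    # V = L * W^2 * pi / 6, as described above.
    from math import pi

    def tumour_volume_mm3(length_mm: float, width_mm: float) -> float:
        """Tumour volume (mm^3) from length and width (mm)."""
        return length_mm * width_mm ** 2 * pi / 6

    print(tumour_volume_mm3(10.0, 8.0))  # a 10 x 8 mm tumour is ~335 mm^3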
Flow cytometry of blood and tumour samples. To characterize and quantify MDSCs by FACS, cells were incubated with FcR blocker (eBioscience) and then stained with Ly6C–PE–CF594 (BD Biosciences), Ly6G–PerCPCy5.5 (BD Biosciences), CD11b–APC (eBioscience) and/or anti-GR1 (RB6, BD Biosciences). MDSCs were identified as GR1 + CD11b + cells, granulocytic MDSCs as Ly6G high Ly6C low or − CD11b + , and monocytic MDSCs as Ly6C high Ly6G low CD11b + . To determine absolute cell numbers, counting beads (BD Biosciences) were added before FACS acquisition. T cells were analysed with an antibody combination of CD3–PerCPCy5.5, CD4–APC, CD8–FITC (all eBioscience) and PD1–PE–CF594 (BD Biosciences). See also Supplementary Table 2 for antibody information. Tumour-infiltrating MDSCs and T cells were quantified by enzymatic digestion of the dissected tumours (mouse tumour cell dissociation kit, Miltenyi Biotec), followed by FACS staining with the above antibody combinations. Staining for the common leukocyte marker CD45 was also included to facilitate the distinction of leukocytes from tumour cells and other stromal cells. Reverse-phase protein array. RPPA assays were carried out as described previously 61 with minor modifications. Protein lysates were prepared from tissue samples with Tissue Protein Extraction Reagent (TPER; Pierce) supplemented with 450 mM NaCl and a cocktail of protease and phosphatase inhibitors (Roche Life Science). Protein lysates at 0.5 mg ml −1 of total protein were denatured in SDS sample buffer (Life Technologies) containing 2.5% 2-mercaptoethanol at 100 °C for 8 min. The Aushon 2470 Arrayer (Aushon BioSystems) with a 40-pin (185 μm) configuration was used to spot lysates onto nitrocellulose-coated slides (Grace Bio-labs) using an array format of 960 (experimental and control) lysates per slide, with each sample spotted as technical triplicates (2,880 spots per slide). Slides were prepared for antibody labelling by blocking for 1 h with I-Block reagent (Applied Biosystems) followed by a 15-min incubation with Re-Blot reagent (Dako). Antibody labelling was performed at room temperature with an automated slide stainer Autolink 48 (Dako) using specific primary antibodies and appropriate biotinylated secondary antibodies (Vector). An amplified fluorescent detection signal was achieved with Vectastain-ABC Streptavidin–Biotin Complex (Vector, PK-6100) followed by TSA-plus Biotin Amp Reagent diluted 1:250 (Perkin Elmer, NEL749B001KT) and a 1:50 dilution of LI-COR IRDye 680 Streptavidin (Odyssey). The total protein content of each spotted lysate was assessed by fluorescent staining with Sypro Ruby Protein Blot Stain for a selected subset of slides (Molecular Probes). Fluorescence-labelled slides were scanned on a GenePix AL4200 scanner, and the images were analysed with GenePix Pro 7.0 (Molecular Devices). For normalization, the negative-control intensity was subtracted from the raw image intensity of each spot, and the result was divided by the total protein value. Tumours with different genetic backgrounds were analysed separately.
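The spot-level normalization just described (negative-control subtraction followed by division by the total-protein signal) can be sketched as follows; the array shapes and example values are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def normalize_rppa(raw: np.ndarray, negative_control: np.ndarray,
                   total_protein: np.ndarray) -> np.ndarray:
    """Background-subtract each spot intensity and scale by total protein.

    raw, negative_control and total_protein are arrays of matched spot
    intensities (for example, medians of the technical triplicates).
    """
    return (raw - negative_control) / total_protein

# Illustrative values for three samples:
raw = np.array([1200.0, 950.0, 1600.0])
neg = np.array([100.0, 100.0, 100.0])
tp  = np.array([0.8, 1.0, 1.2])   # Sypro Ruby total-protein signal
print(normalize_rppa(raw, neg, tp))
```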
Western blotting. Tumour lysates described in the RPPA section above were also subjected to gel electrophoresis using 25 μg protein per lane on pre-cast 4–12% gradient minigels (Novex, Life Technologies), followed by transfer onto nitrocellulose membranes (Novex, Life Technologies), blocking with 5% dry milk in TBST for 1 h, shaking with the supplier-recommended concentration of primary antibodies (all from Cell Signaling Technology) at room temperature for 4 h or at 4 °C overnight, and development with secondary antibodies labelled for detection with the Odyssey system. A labelled molecular weight marker was also loaded on the gels. Protein bands were quantified using Odyssey features, with scan times set to exclude overexposure. For the representative images shown in Fig. 2b and Supplementary Fig. 2 , at least three independent experiments were performed with similar images obtained. In vitro T-cell activation assay. T cells of naive BALB/C mice were enriched from spleen by negative selection on magnetic beads (removal of TER119 + , B220 + and CD11b + cells) and labelled with CFSE (1 μM, Molecular Probes). T cells were cultured alone or admixed with MDSCs (at 1:0.33, 1:1 and 1:3 ratios) in 96-well plates. T cells were activated with anti-CD3 (coating of wells at 5 μg ml −1 overnight, 4 °C) and IL-2 (huIL-2, 5 ng ml −1 ). After 5 days of culture, cells were collected and CFSE dilution was determined by FACS. Samples were also stained for Ly6G to allow the exclusion of non-T cells (MDSCs) from the analysis. Cell culture supernatants were also collected and analysed by Bioplex assay for IFN-γ. In vitro drug treatment. 4T1 cells were purchased from the Barbara Ann Karmanos Cancer Center. 2208L cells were derived from P53N-C tumours by continuous culture for more than one year. They were shown to express high E-cadherin levels, suggesting maintenance of luminal-subtype breast cancer characteristics, and were apparently free of stromal components (only the p53N genotype was detected) at the time of use. Routine mycoplasma inspection was performed bi-monthly to confirm the absence of contamination. No STR profiling or other authentication procedures have recently been performed. For drug treatment experiments, cells were seeded at 100,000 cells per well in 12-well format, or 10,000 cells per well in 96-well format, the day before the experiment. After one wash, drug treatments were carried out in FBS-free DMEM (high glucose) medium for 8 h or 24 h as indicated in the figures. BGJ398 (Novartis), Torin 1 and lapatinib stocks were prepared in DMSO at ×1,000 of the final concentration. Rapamycin was dissolved at ×1,000 of the final concentration in ethanol. Control cells received corresponding amounts of vehicle alone. Cell supernatants were collected and, after a brief spin, analysed on G-CSF ELISA plates (mouse G-CSF DuoSet, R&D Systems). In cell survival assays, WST-1 reagent (Roche) was added at the end of the drug treatment period following the supplier's instructions, and attenuance D 450 nm was measured. Tumour initiation assay. P53N-C tumours were extracted and dissociated into single-cell suspension. FACS was employed to remove lineage-positive cells and dead cells. The remaining cancer cells were serially diluted to 50,000, 10,000, 2,000 and 400 cells ml −1 , and 100 μl of cell suspension was transplanted into cleared mammary fat pads of recipient mice; tumour-initiating cell frequencies can then be estimated from the take rates, as sketched below.
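The text does not specify how tumour-initiating cell frequencies were derived from the serial-dilution take rates; a common approach is maximum-likelihood estimation under the single-hit Poisson model (as in ELDA-style limiting-dilution analyses). A minimal sketch under that assumption, with illustrative take-rate data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# (cells injected per fat pad, number of injections, number of tumours formed)
# -- illustrative numbers only; the doses follow the dilutions in the text
# (100 ul of 50,000 / 10,000 / 2,000 / 400 cells/ml = 5,000 ... 40 cells).
data = [(5000, 6, 6), (1000, 6, 5), (200, 6, 3), (40, 6, 1)]

def neg_log_lik(log_f: float) -> float:
    """Negative log-likelihood of the single-hit Poisson model."""
    f = np.exp(log_f)
    ll = 0.0
    for dose, n, k in data:
        p = 1.0 - np.exp(-f * dose)          # probability that a graft takes
        ll += k * np.log(p) + (n - k) * np.log(1.0 - p)
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(-12, -2), method="bounded")
print(f"estimated tumour-initiating cell frequency: 1/{1 / np.exp(res.x):.0f}")
```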
Mammosphere assay. Tumour cells (20,000 cells per well) were cultured in a low-adhesion 24-well plate in DMEM/F12 medium with murine bFGF (20 ng ml −1 , Life Technologies), murine EGF (20 ng ml −1 , Life Technologies) and B27 supplement (GIBCO). For co-culture, 80,000 Gr1 + MDSCs were added. As indicated, in some instances MDSCs and tumour cells were co-cultured in Boyden chambers, with MDSCs placed on inserts with 0.4-μm-pore-size membranes. Co-culture periods were 5 h for RNA expression studies, 24 h for FACS of cancer stem cell surface markers, and 6–12 d to determine mammosphere-forming units. As indicated in the figures, specific control groups received Gr1 + cells of tumour-free mice (NNs, normal neutrophils) instead of Gr1 + MDSCs. These control cells were isolated from bone marrow with the same magnetic sorting procedure used for MDSCs (biotinylated anti-mouse Gr1, BD Pharmingen; Biotin Selection Kit, Stem Cell Technologies). Mammospheres were enumerated by manual counting of low-magnification images of the cultures (GelCount). The images in Supplementary Fig. 7a are representative of over 20 images obtained from different experiments. For flow cytometry, mammospheres were dissociated by trypsin–EDTA digestion. Total tumour cell counts were obtained by FACS of the GFP-tagged tumour cells. FACS of G-CSF, phospho-S6K and breast cancer stem cell markers. Single-cell suspensions of tumour cells were incubated on ice with FcR blocker (eBioscience) in staining buffer (1% FBS in PBS), followed by cell surface marker staining with the following antibodies: CD24–PE, Epcam–APC, Sca1–PE, CD29–PECy7, CD49f–eFluor450, c-Kit–APC and CD45–eFluor450, as indicated in the figures. When intracellular staining was carried out, cells were fixed and permeabilized using the reagents of eBioscience's FOXP3 staining kit. Intracellular staining was performed in the presence of 2% normal mouse serum, 2% normal goat serum and 2% FBS to reduce nonspecific binding. Antibodies used were G-CSF–APC (eBioscience) and rabbit anti-pS6K (T389, Cell Signaling), rabbit anti-pS6 (S235/236, Cell Signaling) or rabbit anti-pStat3 (S727, Cell Signaling), followed by goat anti-rabbit–eFluor488. For analyses of CSCs in tumour tissues, we first performed tumour dissociation using gentleMACS dissociators and the tumour dissociation kit manufactured by Miltenyi Biotec. The cell suspension was then subjected to the staining described above. Owing to technical variations, tumours collected together and analysed at the same time were considered an experimental set, and systematic differences between sets were removed by normalization (setting the mean frequency of 'No treatment' samples to one within each set). Co-culture of human bone marrow cells with human breast cancer PDX cells. Cell suspensions of PDX tumour cells were prepared with the Miltenyi Biotec tumour dissociation kit as described above. To obtain tumour-cell-conditioned media, 1 × 10 6 tumour cells ml −1 were cultured for 3 days under mammosphere conditions (3D condition: low-adhesion plate, mammosphere medium described above) or on regular culture plates (2D condition: DMEM–high glucose with 10% FBS). Cell supernatant was collected, cleared by centrifugation and stored at −20 °C until use. Unprocessed human bone marrow was obtained from healthy volunteers (Poietics, consent agreement with vendor, Lonza). Red blood cells were removed by lysis.
In treatments with tumour-conditioned media, 1 × 10 6 bone marrow cells per well of a 24-well plate were cultured in IMDM medium (IMDM, 10% heat-inactivated FBS, 0.01 M HEPES, 55 μM 2-mercaptoethanol, 1% antibiotic/antimycotic). To keep culture conditions consistent between groups, controls received 7.5% of 3D plus 7.5% of 2D non-conditioned tumour cell culture medium, and experimental groups received 7.5% of 3D tumour-cell-conditioned medium plus 7.5% of non-conditioned 2D medium, or 7.5% of 2D tumour-cell-conditioned medium plus 7.5% of non-conditioned 3D medium, respectively. Similar experiments with mouse bone marrow cells and the mouse breast cancer cell line 4T1 were carried out under the same conditions. G-CSF-neutralizing antibody was added as indicated in the figures, at 6 μg ml −1 (approximately five times the ED 50 based on information provided by the supplier). Recombinant human G-CSF (Novoprotein) was used at 40 ng ml −1 (specific activity: 6 × 10 7 IU mg −1 ). After 3 days of culture, cells were analysed by FACS. Cells with the phenotype CD11b + CD33 + CD15 + CD16 − HLA-DR −/low (human) or CD11b + Ly6G + (mouse) and granulocytic characteristics in the FSC–SSC profile were enumerated and assigned as granulocytic myeloid-derived suppressor cells. Human bone marrow cells were also directly co-cultured with PDX tumour cells at a 4:1 ratio under mammosphere conditions. At the end of the 8-day culture period, the CD24 and CD44 profile of tumour cells was determined by FACS. Tumour cells and bone marrow cells were discriminated on the basis of FSC and SSC properties and CD11b. PDX cell co-culture with MDSCs. PDX MC1 cell suspensions were prepared as described above, cleared of mouse stromal cells using a mouse cell removal kit (Miltenyi Biotec), mixed with purified MDSCs (Gr1 + ) from the blood of MC1-tumour-bearing mice and cultured under mammosphere conditions. The human cancer stem cell phenotype CD24 − CD44 + was determined by flow cytometry with anti-human CD24–PECy7 (eBioscience) and anti-human CD44–PE (Biolegend); a mouse MDSC marker staining was included to facilitate the exclusion of this population by gating during data analysis. Notch pathway studies. The γ-secretase inhibitor semagacestat (Selleck Biochem) was dissolved in DMSO as a 10 mM stock and used at 10 μM final concentration. Controls received the same amount of DMSO. For assays of Notch pathway activation, the Notch reporter plasmid pCBFRE–luc, a gift from N. Gaiano, Johns Hopkins University School of Medicine, USA (Addgene plasmid no. 26897), was used. The Notch reporter was co-transfected with a constitutively expressed ( EF1α promoter-driven) Renilla luciferase plasmid. Firefly luciferase (Notch reporter) and Renilla luciferase bioluminescence activities were measured consecutively with an in vivo imaging system (IVIS) by adding the respective substrates ( D -luciferin, Gold Biotechnology, and RediJect Coelenterazine h, PerkinElmer), and background measurements were taken before and between reporter analyses. The Notch reporter signal was normalized to the signal from the constitutive plasmid. Dominant-negative forms of RBPJ (DN-RBPJ) and MAML (DN-MAML) were used to inhibit the intracellular Notch pathway, either by co-transfection in the Notch reporter assays or by lentiviral transduction followed by GFP-based flow cytometric sorting to remove the non-transduced cell population.
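A minimal sketch of the reporter normalization described above (background-subtracted firefly signal divided by background-subtracted Renilla signal); the photon-count values are illustrative, not measured data:

```python
def notch_reporter_ratio(firefly: float, renilla: float,
                         bg_firefly: float, bg_renilla: float) -> float:
    """Background-subtracted firefly (Notch reporter) signal normalized to
    the constitutive Renilla signal, as described for the IVIS readout."""
    return (firefly - bg_firefly) / (renilla - bg_renilla)

# Illustrative photon-count readings:
control = notch_reporter_ratio(5.2e5, 2.1e6, 1.0e4, 1.2e4)
treated = notch_reporter_ratio(1.4e5, 2.0e6, 1.0e4, 1.2e4)  # e.g. + semagacestat
print(f"fold change vs control: {treated / control:.2f}")
```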
Bioinformatics analysis. The TCGA data set was obtained from the TCGA Data Portal in December 2012. Data for different platforms were matched on the basis of patient identifiers. Patients with incomplete records of DNA copy number, mRNA expression or RPPA data were excluded. The EMC-MSK data set was derived as described in the previous study 62 . The METABRIC data set 38 was described elsewhere and obtained from Cancer Research UK, Cambridge Research Institute. The FGFR signature was derived from a previous study by comparing MMTV–WNT–iFGFR tumours with MMTV–WNT tumours 27 . The genes significantly altered by inducible activation of FGFR1 were determined using cutoffs of fold change (FC) > 2 and P < 0.05 (by two-sided Student's t -tests). The signature was then applied to the gene expression profiles of p53-null tumours. Specifically, a single-value score was computed using the formula ∑ G ↑ − ∑ G ↓ , where G ↑ represents the expression values of the upregulated genes and G ↓ represents the expression values of the downregulated genes. The G-CSF signature was derived on the basis of a previous study using the same approach as for the FGFR signature. In Fig. 3b , we used GSE19222 to analyse the effect of an FGFR inhibitor, TKI258, on the expression of multiple cytokines. In Supplementary Fig. 4e , we used GSE12777 to gauge the correlation between the G-CSF signature and TCR genes in cell lines. Gene Set Enrichment Analysis (GSEA) software was downloaded from its official website. Gene signatures reflecting the action of pp242, rapamycin and G-CSF were derived from previous studies 34 . A continuous score of the G-CSF signature was computed using a similar approach to that described above for the FGFR signature. GSEA was applied to the TCGA data set using genes up- or downregulated by pp242 and rapamycin as gene sets 36 , 37 , and G-CSF signature scores as the continuous phenotype indices. P values of the enrichment were computed using the built-in, random-simulation-based algorithms of GSEA. Kaplan–Meier curve generation and log-rank tests ( Fig. 4d ), multivariate fitting of Cox proportional hazards models ( Supplementary Fig. 4F ), gene expression comparison across different molecular subtypes ( Fig. 4c ), and other data-mining procedures and statistical assessments were performed using R statistical software. RT-qPCR. RNA was collected and processed in TRIZOL, reverse transcribed to cDNA with Applied Biosystems' High Capacity Reverse Transcription Kit, and analysed by real-time qPCR on an ABI7500 Fast Real-time PCR machine using SsoFast EvaGreen qPCR reagent (Biorad). Results were normalized to β-actin and/or GFP (tumour cell label), and relative values were calculated as 2 −(Ct,gene − Ct,reference) . Gene-specific primers used are listed in Supplementary Table 3 .
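The two formulas given above (the single-value signature score and the qPCR relative value) translate directly into code; a minimal Python sketch with illustrative gene names and Ct values:

```python
def signature_score(expr: dict, up_genes: list, down_genes: list) -> float:
    """Single-value signature score: sum of upregulated-gene expression
    minus sum of downregulated-gene expression, as defined in the text."""
    return sum(expr[g] for g in up_genes) - sum(expr[g] for g in down_genes)

def relative_expression(ct_gene: float, ct_reference: float) -> float:
    """RT-qPCR relative value: 2^-(Ct_gene - Ct_reference)."""
    return 2.0 ** (-(ct_gene - ct_reference))

# Illustrative use (gene names and values are placeholders):
expr = {"GENE_UP1": 2.3, "GENE_UP2": 1.1, "GENE_DN1": 0.7}
print(signature_score(expr, ["GENE_UP1", "GENE_UP2"], ["GENE_DN1"]))  # 2.7
print(relative_expression(ct_gene=24.0, ct_reference=18.0))           # 2^-6 = 0.015625
```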
Statistics and reproducibility. All results are presented in the form of mean ± s.e.m. unless otherwise specified. Sample sizes for in vivo experiments are noted in the corresponding figures or figure legends. Individual animals and independently prepared or treated primary tissue samples (for example, PDXs and primary human bone marrow) were considered biological replicates. Different wells in one experiment were considered technical replicates. In each experiment, the sample size was determined on the basis of our prior knowledge of the variability of the experimental output. Specifically, in animal experiments, a sample size of 4–8 mice per group allows us to detect a 50% difference in tumour size with a power of 60–85% at α = 0.05. Animals that suffered non-cancer-related disease conditions (for example, pathogen infection) were diagnosed by the Center of Comparative Medicine at Baylor College of Medicine on the basis of routine inspection and were excluded from experiments as per the recommendation of veterinarians based on pre-established protocols. Otherwise, animals were included in the analyses. For the experiments shown in Figs 2g, h and 3g , randomization was performed by randomly assigning animals to two separate groups. After assignment, tumour sizes of the two resulting groups were compared to confirm that there was no statistically significant difference (by two-sided Student's t -test). No randomization was performed in other experiments. The researchers were not blinded to allocation during experiments and outcome assessment, except for the experiment in Fig. 3j , in which the results were assessed by a researcher blinded to group allocation. For in vitro experiments, including FACS analyses, in most cases two to three independent experiments were performed, each with three to six technical replicates. Specific information is included in the corresponding figure legends. No specific tests were conducted to test the assumption of normal distribution. Differences between the means of experimental groups were analysed using two-tailed unpaired Student's t -tests unless otherwise noted in the respective legends. F -tests were performed to compare variation within different groups, and t -tests were performed with or without the assumption of equal variation accordingly. In a few experiments (see Fig. 7e, f ), we computed P values in the framework of two-way ANOVA to achieve more stable estimates of biological variation by collectively analysing multiple biological conditions or perturbations. Specifically, in Fig. 7e , we treated the different genetic perturbations (control, DN-Rbpj and DN-Maml) as one factor, and the different co-culture conditions (control, +MDSCs and +NNs) as the second factor. In Fig. 7f , we used the different co-culture conditions (with or without MDSCs) as one factor, and the different PDX models (HCI11 and MC1, left panel) or different treatments (with or without Notch inhibitor, right panel) as the second factor. Two-way ANOVA tested the null hypotheses that there is no significant difference among different levels of the same factor and that there is no interaction between the two factors. The analyses were performed using the built-in two-way ANOVA tool in Excel. Detailed output, including the results of post hoc tests, is provided in Supplementary Table 4 . Accession numbers and data sets. Referenced accessions: previously published microarray data that were reanalysed here are available from GEO under accession codes GSE7400 34 , GSE19222 32 and GSE12777 63 . The EMC-MSK data set 62 comprises GSE2603 , GSE5327 , GSE2034 and GSE12276 . The METABRIC data set 38 is deposited at the European Genome-Phenome Archive, hosted by the European Bioinformatics Institute, under accession number EGAS00000000083 . Accession codes. Primary accessions: Gene Expression Omnibus GSE12276 , GSE2034 , GSE2603 , GSE5327 . Referenced accessions: Gene Expression Omnibus GSE12777 , GSE19222 , GSE7400 . Change history (20 May 2016): In the version of this Article originally published, in the fourth affiliation, 'Los Angeles' should have read 'Louisiana'. This has been corrected in all online versions of the Article.
Not every breast cancer tumor follows the same path to grow. Some tumors have the assistance of myeloid-derived suppressor cells (MDSCs), a diverse type of immune cell involved in the suppression of the body's response against tumors. How breast cancer cells recruit MDSCs is not completely understood, but in a paper released today in Nature Cell Biology, Baylor College of Medicine researchers report a new mechanism that helps cancer cells engage MDSCs. "There are alternative paths a tumor may take without the MDSCs, but those cancer cells that take the mTOR path of activity tend to have more MDSCs through the production of granulocyte-colony stimulating factor (G-CSF), which drives the accumulation of MDSCs," said corresponding author Dr. Xiang Zhang, a McNair Scholar and assistant professor of molecular and cellular biology at Baylor College of Medicine. Knowing how cancer cells and MDSCs interact with each other helps researchers understand the events that may lead to tumor growth and metastasis and identify potential therapeutic targets. For instance, "determining that a patient's tumor is using the mTOR pathway would indicate that the cancer cells are more likely to depend on MDSCs for progression," said Zhang, who also is with the Lester and Sue Smith Breast Center at Baylor. "This information suggests that, in this case, available therapies for mTOR combined with therapies for MDSCs represent potential therapeutic strategies." Tumors that do not use the mTOR signaling pathway would not be expected to respond as well to the same therapies. The discovery of Zhang and colleagues is much in line with the concept of personalized medicine. "People talk about the specific mutations one patient's tumor has that are not in another patient's tumor. The same type of tumors having different mutations may warrant different treatments; that is personalized medicine," explained Zhang. "We are trying to come from a different angle. We are trying to enrich this concept by saying that not only tumor-intrinsic characteristics are different from patient to patient, but, related to that, there is also diversity in terms of the immune components. Different tumors may evolve via different characteristics of the tumor and the immune response." MDSCs are just one type of aberrant immune cell associated with the tumor. "In addition, there are other immune cells associated with the tumor - monocytes, macrophages, different subsets of T cells - that can either attack or help the tumor. All those cells may vary from patient to patient, and we don't really understand that yet," said Zhang. In addition, MDSCs also play a role in non-cancer situations. For instance, in chronic inflammation, these cells try to suppress the inflammation; in this case, they play a pro-health role. So, "simply eliminating all MDSCs to treat cancer may likely result in negative side effects, such as autoimmune disease. That's why it's necessary to further characterize the diversity, to find the specific subsets of MDSCs that are tumor specific," said Zhang.
10.1038/ncb3355
Chemistry
New catalyst for fuel cells a potential substitute for platinum
Doris Grumelli, Benjamin Wurster, Sebastian Stepanow and Klaus Kern. "Bio-inspired nanocatalysts for the oxygen reduction reaction." Nature Communications, 5 December 2013 DOI: 10.1038/ncomms3904 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms3904
https://phys.org/news/2013-12-catalyst-fuel-cells-potential-substitute.html
Abstract Electrochemical conversions at fuel cell electrodes are complex processes. In particular, the oxygen reduction reaction has a substantial overpotential limiting the electrical power output efficiency. Effective and inexpensive catalytic interfaces are therefore essential for increased performance. Taking inspiration from enzymes, earth-abundant metal centres embedded in organic environments present remarkable catalytic active sites. Here we show that these enzyme-inspired centres can be effectively mimicked in two-dimensional metal-organic coordination networks self-assembled on electrode surfaces. Networks consisting of trimesic acid and bis-pyridyl-bispyrimidine coordinating to single iron and manganese atoms on Au(111) effectively catalyse the oxygen reduction and reveal distinctive catalytic activity in alkaline media. These results demonstrate the potential of surface-engineered metal-organic networks for electrocatalytic conversions. Specifically designed coordination complexes at surfaces, inspired by enzyme cofactors, represent a new class of nanocatalysts with promising applications in electrocatalysis. Introduction Energy and climate change have become central preoccupations, attracting the common interest of science, engineering and society. In this important field, electrochemistry provides a bridge for the efficient interconversion of chemical to electrical energy, that is, fuel cells 1 , 2 , 3 . Fuel cells based on the oxidation of H 2 and the reduction of O 2 require electrocatalytic surfaces, since the electrode processes are not simple reactions and show a strong dependence on the nature of the electrode material. For instance, a direct relationship between the kinetic rates of the oxygen reduction reaction (ORR) and the surface electrode structure was found for gold single crystals 4 , 5 . The ORR is a complex reaction, and an effective and inexpensive catalyst that can compete with or exceed the efficiency of platinum remains an ongoing challenge. The physical and economic requirements of low overpotential, long-term stability, pH working conditions, non-toxic and earth-abundant elements, and so on impose severe boundaries on the usable material combinations. Nature provides an alternative and efficient solution to this problem: the O 2 in our atmosphere is biogenic, the result of a catalytic process in the photosynthetic machinery of green plants 6 . With increasing atmospheric oxygen levels, higher life-forms with oxygen-metabolizing enzymes evolved. These naturally occurring oxygen activation catalysts, which have been utilized in biological processes for several hundred million years, are considered viable substitutes for precious metals in ORR catalysts for fuel cells 7 , 8 , 9 . Considering both biological and classical inorganic catalysts, one finds a close relation between structure and function. In the case of the enzymatic ORR catalysts, nature uses, first, molecules containing electron-donating functional groups that prevent the formation and release of detrimental partially reduced oxygen by-products and intermediates 10 . Second, bulky protein chains sufficiently separate the ORR inorganic active sites, preventing site overlap and catalytic deactivation 9 . The active sites, mostly composed of first-row transition metal atoms and clusters, share a common and important feature with classical inorganic catalysts: the presence of coordinatively unsaturated sites, which play a prominent role in heterogeneous catalytic transformations 11 , 12 .
Thus, new catalytic materials can be designed, inspired by both natural and classical ORR catalysts, by using organic molecules with specific molecular backbones (adequately separating the active sites) and selected functional groups (electron withdrawing or donating) to coordinate transition metal atoms or clusters directly on surfaces. In this respect, two-dimensional metal-organic coordination networks (2D-MOCNs), that is, organic molecules and metal centres self-assembled on surfaces under well-controlled conditions, constitute a promising route to fabricate functional low-dimensional bio-inspired architectures 13 . 2D-MOCNs have been created using organic ligands with different functional groups (carboxylic acid, pyridyl, cyano and hydroxyl end groups) and metal centres (Mn, Fe, Co, Ni and Cu) on various noble metal substrates such as Au, Ag and Cu, as well as on highly ordered pyrolytic graphite 13 , 14 , 15 , 16 . An example of a 2D-MOCN is presented in Fig. 1 , showing a scanning tunnelling microscopy (STM) image of biphenyl-dicarboxylic acid self-assembled with Fe atoms on Cu(100) 17 . The biphenyl-dicarboxylic acid ligand coordinates with its two terminal carboxylate groups to dimeric Fe centres, thus forming an extended and periodic 2D coordination polymer. The network presents open cavities where the underlying substrate is exposed. Each network node consists of two coordinatively unsaturated Fe centres, with each atom being threefold coordinated to carboxylate oxygen atoms (see Fig. 1 ). The network is chemisorbed on the copper surface, but the top binding position of the Fe centres remains open. Moreover, the hybridization of the metal centres with the metal surface is significantly reduced, yielding atomic-like electronic configurations for the Fe ions, as found for isolated metal–organic complexes 18 . Figure 1: Example of 2D-MOCNs. Scanning tunnelling microscopy image of Fe-biphenyl-dicarboxylic acid on Cu(100) self-assembled under UHV conditions. The main features of 2D-MOCNs are indicated: metal centre, organic ligand and cavity. The expansion illustrates the Fe coordination environment 17 . Engineering of the synthetic ligands in combination with a judicious choice of the metal centres and substrates allows complete control over the network structure, the separation and coordination geometry of the metal centres, and the size of the generated cavities 17 , 19 . To date, the most explored application of surface-supported 2D networks is the accommodation of guest molecules inside the cavities 20 , 21 . Initial experiments on the reactivity of the metal centres towards O 2 adsorbed from the gas phase 22 indicate the high potential of 2D-MOCNs as tailor-made heterogeneous electrocatalysts. In the present work, we show that rationally designed 2D-MOCNs exhibit substantial electrocatalytic activity for the ORR in alkaline media. The electrochemical behaviour differs from bare Au(111), and the ORR mechanism depends strongly on the choice of the metal centre, while the stability of the network is determined by the organic molecule. The networks containing Fe atoms exhibit a (2+2)e − pathway for the complete reduction to H 2 O, while the coordinated Mn atoms show evidence for a direct 4e − route to convert O 2 to H 2 O. Results Formation of metal–organic networks on Au(111) The networks consist of benzene-1,3,5-tricarboxylic acid (TMA, Fig. 2a ) and Fe or Mn atoms, or 5,5′-bis(4-pyridyl)(2,2′-bipyrimidine) (PBP, Fig. 2b ) and Fe atoms, self-assembled on Au(111).
Both carboxylic and pyridyl ligands coordinating Fe and Mn atoms were selected to mimic both biological and inorganic ORR catalysts 22 , 23 , 24 , 25 . The aromatic rings ensure the planarity of the coordination structure on the surface and define the separation of the reactive metal centres. STM was employed to determine the structural details of the architectures, to confirm the network formation and to estimate the coverage before the electrochemical measurements. Sample preparation and STM characterization were conducted under ultra-high vacuum (UHV) conditions. A sample transfer system connecting the UHV chamber and the electrochemical cell permits full atmospheric control of the sample environment throughout the experiments ( Supplementary Fig. S1 ). Figure 2: Chemical structure and STM topography of the organic ligands on Au(111). ( a ) STM image (12 × 12 nm 2 ) of the TMA honeycomb network and ( b ) 8 × 8 nm 2 STM image of PBP on Au(111). 2D networks of TMA were previously investigated on different single crystals by STM in both UHV and liquid environments 26 , 27 , 28 , 29 , 30 . On Au(111), TMA is physisorbed with the aromatic ring oriented parallel to the surface. The carboxyl groups form intermolecular hydrogen bonds, resulting in a porous network well known as the 'honeycomb' structure; see Fig. 2a . The honeycomb pattern is composed of six bright triangular features representing individual TMA molecules 26 , 27 , 28 , 29 , 30 . After TMA deposition, either Fe or Mn atoms are incorporated into the network (see Methods). During this process, the carboxyl moieties of TMA are expected to deprotonate, favouring the O−metal interaction 31 . An STM image of TMA−Fe on Au(111) is shown in Fig. 3a . The TMA−Mn structure is nearly indistinguishable from TMA−Fe ( Supplementary Fig. S2 ). A high-resolution STM image and the proposed model are presented in Fig. 3b . Clearly, the motif of the original TMA network changes after the incorporation of metal atoms. On the basis of corresponding systems studied previously, a model was adopted; the confirmation of the stoichiometry and structure is, however, not straightforward. The triangular features are interpreted as flat-lying TMA molecules. Particular regions between the molecules appear brighter and are assigned to the positions of the metal atoms, thereby creating coordination chains in which each segment contains metal dimers coordinating to four molecules. TMA−Fe networks on other substrates show different arrangements 31 , 32 , 33 , 34 . Here the metal dimers are bridged by two carboxylate groups, with each metal atom coordinating to three oxygen atoms. Adjacent coordination chains are potentially interlinked by carboxyl hydrogen bonds, which require only partial deprotonation of TMA. Further information, including unit-cell and network dimensions, is presented in Supplementary Fig. S3 and Supplementary Table S1 . Figure 3: STM topography and model of TMA-Fe and PBP-Fe. ( a ) STM image (12 × 12 nm 2 ) of the TMA−Fe network on Au(111). ( b ) High-resolution (3.75 × 3.75 nm 2 ) STM image of TMA−Fe with the model superposed. ( c ) STM image (8 × 8 nm 2 ) of the PBP−Fe network on Au(111). ( d ) High-resolution (3.5 × 3.5 nm 2 ) STM image of PBP−Fe with the model superposed. ( e ) Large-scale (74 × 74 nm 2 ) STM image of PBP−Fe after EC experiments. ( f ) Zoom of e : 8 × 8 nm 2 STM image of PBP−Fe after EC experiments. Colour legend: N (green), C (black), O (red), H (white). H is omitted in the PBP scheme.
2D metal-organic networks of PBP have been reported on Ag and Cu surfaces but not on Au(111) 35 . An STM image of the molecular network of PBP on Au(111) is shown in Fig. 2b . On Au(111), PBP adsorbs in a flat geometry with the aromatic rings parallel to the surface. A lamellar structure is observed, with the molecules oriented parallel and aligned in rows. The pyrimidine groups are adjacent to each other and the pyridyl end groups form hydrogen bonds, resulting in a skewed structure. The tilt accounts for a closer proximity and better alignment of the N-functional groups. Similar structures were observed with the pyridyl benzoic acid molecule assembled on Ag(111) 36 and with PBP on Ag(100) 35 . On incorporation of Fe, a network structure forms that is comparable to the networks formed on Cu and Ag surfaces using Cu adatoms 35 . Figure 3c–f shows STM images of PBP−Fe on Au(111) directly after preparation in UHV (c, d) and after the electrochemical (EC) experiments (e and f; see Discussion below). A high-resolution STM image and the proposed model are presented in Fig. 3d . In this network, each molecule coordinates to four Fe adatoms via the pyrimidine and pyridyl groups, fully saturating the molecule's binding sites. Hence, each Fe adatom binds to three nitrogen atoms of the two central pyrimidine units of a PBP molecule and to the pyridyl moiety of a neighbouring PBP ligand. Network parameters are presented in Supplementary Fig. S3 and Supplementary Table S1 . Electrocatalytic properties of the networks The presence of precisely defined and coordinatively unsaturated metal atoms (Fe or Mn) opens the possibility to explore the electrocatalytic properties of these nanostructures. Here we explore the ability of the networks to reduce O 2 in alkaline media. On Au(111), the reduction of O 2 occurs via a 2e − pathway with the final product H 2 O 2 ( HO 2 − in basic media) according to the following mechanism 4 , 5 : O 2 + H 2 O + 2e − → HO 2 − + OH − (1). Figure 4a shows the polarization curve for bare Au(111) (grey line) in O 2 -saturated 0.1 M NaOH solution. The weak shoulder at −0.20 V with a current density of around 0.25 mA cm −2 corresponds to the reduction of O 2 to H 2 O 2 (equation (1)). The polarization curves for TMA−Fe and PBP-Fe acquired under the same conditions are presented in Fig. 4a (pink and turquoise lines). Multiple potential sweeps can be performed without major changes, demonstrating reversibility ( Supplementary Fig. S4 ). The TMA-Fe and PBP-Fe polarization curves differ significantly from bare Au(111). First, the shoulder at −0.2 V shows an onset shifted ~0.05 V to lower overpotentials (see also inset in Fig. 4a ) and a tripled current density (0.75 mA cm −2 ) compared with bare Au(111). Second, and much more striking, is the presence of a second peak at about −0.80 V with a large current density (2.5 and 3 mA cm −2 for TMA-Fe and PBP-Fe, respectively). Small differences are observed between the TMA-Fe and PBP-Fe polarization curves. First, PBP-Fe has a somewhat larger catalytic current, which can be related to the higher density of Fe ions ( Supplementary Table S1 ). Second, the position of the peak at about −0.8 V is shifted to more negative potentials for PBP-Fe. This could be a result of the distinct coordination environment. By analogy with other Fe-containing systems, the second shoulder can be assigned to the electroreduction of H 2 O 2 to H 2 O ( OH − in basic media) 37 according to the following reaction: HO 2 − + H 2 O + 2e − → 3OH − (2). Figure 4: ORR polarization curves.
Polarization curves in O 2 -saturated 0.1 M NaOH solution at 0.05 V s −1 for: ( a ) bare Au(111) (grey solid line), TMA−Fe (pink solid line), PBP−Fe (turquoise solid line) and TMA−Fe after the addition of 10 mM H 2 O 2 (pink-dashed line); ( b ) bare Au(111) (grey line), TMA−Mn (orange) and TMA−Mn after the addition of 10 mM H 2 O 2 (orange-dashed line). The insets in the graphs show the different onset potentials for the shoulder at −0.2 V. To test this hypothesis, H 2 O 2 was added to the solution. The polarization curve for TMA-Fe in the presence of 10 mM H 2 O 2 in O 2 -saturated 0.1 M NaOH solution is shown in Fig. 4a (pink-dashed line). Polarization curves for PBP-Fe in the presence of 10 mM H 2 O 2 in O 2 -saturated 0.1 M NaOH give a similar result ( Supplementary Fig. S5 ). The H 2 O 2 electroreduction on TMA-Fe presents two well-defined waves, at −0.2 and −0.8 V, in the employed potential window, thus confirming the hypothesis. The increase in the current density on adding H 2 O 2 can be traced back to an increase of the surface concentration of O 2 owing to disproportionation of H 2 O 2 in alkaline media 38 . On bare Au(111), a similar pattern is generated ( Supplementary Fig. S6 ) if the potential window is extended at least to −1.3 V, because the second wave (H 2 O 2 reduction) is shifted to more negative potentials (−1.2 V). This is a significant result, as it means that the TMA−Fe and PBP-Fe networks also catalyse the decomposition of H 2 O 2 , lowering the overpotential by 0.4 V, which is of high importance in biotechnological applications 39 . The ability of Fe to reduce O 2 is widely explored in the literature, and several mechanisms have been identified yielding H 2 O or H 2 O 2 as the final product 37 , 40 , 41 , 42 . For the TMA−Fe and PBP-Fe networks, we propose that the first wave at −0.2 V in the polarization curve is related to the reduction of O 2 towards H 2 O 2 mediated by both exposed Au surface areas and oxygen atoms present in the MOCNs. In fact, the polarization curve for pure TMA or PBP on Au(111) ( Supplementary Fig. S5a, b ) shows a small enhancement of the current density compared with bare Au(111). Under UHV conditions, the Fe centres in the network assume an oxidation state close to Fe(II) 18 , 43 , and Fe(II) can readily bind O 2 . However, we do not expect the Fe atoms to participate in the reaction because, in the potential window of the first wave between −0.20 and −0.40 V, we assume that trivalent metal centres are present in the network. It is well established that Fe(III) is not favoured as an active site for O 2 reduction 41 . Analysing the second wave, we consider that Fe(II) species become predominant at potentials more negative than −0.60/−0.70 V. At this potential, Fe(II) can bind either O 2 or H 2 O 2 to form OHOH adducts. The strong bonding of H 2 O 2 to Fe(II) prevents it from leaving as a two-electron reduction product and provides a path for final reduction to H 2 O 41 . The presence of Fe in the TMA and PBP networks explains the second wave, and a mechanism combining equations (1) and (2) is proposed. Blank experiments using the same amount of Fe (0.07 monolayer) as contained in the networks, deposited on Au(111) at room temperature, show a similar behaviour to the TMA−Fe and PBP-Fe networks but with substantially reduced catalytic activity ( Supplementary Fig. S5a, b ). The Fe is probably present as small clusters at the elbow sites of the Au(111) herringbone reconstruction.
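As an aside for readers comparing these onset potentials with platinum benchmarks that are usually quoted versus RHE: the potentials here are referenced to Ag/AgCl (3 M KCl; see Methods). A hedged conversion sketch, using the textbook offset of about +0.210 V vs SHE for this reference and pH ≈ 13 for 0.1 M NaOH (these constants are standard values, not taken from the paper):

```python
def ag_agcl_to_rhe(e_vs_agcl: float, ph: float = 13.0,
                   e_ref_vs_she: float = 0.210) -> float:
    """Convert a potential measured vs Ag/AgCl (3 M KCl) to the RHE scale:
    E(RHE) = E(Ag/AgCl) + E_ref(vs SHE) + 0.059 * pH (Nernstian shift)."""
    return e_vs_agcl + e_ref_vs_she + 0.059 * ph

# The first ORR wave at about -0.20 V vs Ag/AgCl in 0.1 M NaOH (pH ~ 13):
print(f"{ag_agcl_to_rhe(-0.20):.2f} V vs RHE")  # ~0.78 V
```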
The key role of the unsaturated metal centres in the electrocatalytic reduction of oxygen becomes evident when going from Fe- to Mn-based MOCNs. Figure 4b shows the polarization curve for TMA–Mn (orange solid line) in O 2 -saturated 0.1 M NaOH solution. A distinct behaviour is observed in comparison with bare Au(111) and the TMA-Fe and PBP-Fe networks. The first wave is shifted to a lower potential of −0.15 V, with an onset potential lowered by 0.1 V (see inset in Fig. 4b ), presents a high current density (1 mA cm −2 ) and is also better defined. Moreover, the wave at −0.8 V found in both the PBP-Fe and TMA−Fe ORR polarization curves is not present for the TMA−Mn network. A possible reason could be that the reduction of small amounts of generated H 2 O 2 is obscured by the onset of the hydrogen evolution reaction. However, adding 10 mM H 2 O 2 to the O 2 -saturated 0.1 M NaOH solution does not produce any change in the shape of the polarization curve (orange-dashed line in Fig. 4b ), indicating that no H 2 O 2 electroreduction occurs. Also here we find an increase in the current density on adding H 2 O 2 , owing to the disproportionation of H 2 O 2 in alkaline media, as found for the Fe-containing networks 38 . Hence, we conclude that the TMA−Mn network reduces O 2 directly to H 2 O through a 4e − pathway according to the following equation: O 2 + 2H 2 O + 4e − → 4OH − (3). It is important to point out that Mn oxides catalyse O 2 reduction mainly via a (2+2)e − pathway 44 , while Mn coordination compounds 45 directly reduce O 2 to H 2 O via a 4e − mechanism. Similar to Fe on Au(111), the polarization curves for Mn clusters (0.07 monolayers) on Au(111) show a similar profile to TMA−Mn but with a clearly lower catalytic current density ( Supplementary Fig. S5c ). This result emphasizes the importance of the complexation of the metal atoms and hints at the tremendous potential of this approach. On the one hand, the chemical activity of the metal centres is determined by both the nature of the metal ion and its coordination shell. On the other hand, the ligation separates the unsaturated metal atoms, preventing catalytic deactivation 9 . Thus, the specific design of ligands allows tuning of the catalytic activity of metal adatoms for a desired chemical conversion. Although repeated EC curves show the reversibility and stability of the system ( Supplementary Fig. S4 ), STM measurements of the TMA networks after the EC experiments were difficult to perform, probably because of residual adsorbates from the electrolyte solution even after rinsing and drying the sample. Imaging of the PBP-Fe networks proved to be less delicate. A large-scale STM image of PBP-Fe after the EC measurements is shown in Fig. 3e . The image is representative of the system after the EC experiment. First, long-range 2D-network domains are still present after the EC experiment. Second, a nearly identical structure was found after exposure to the liquid environment compared with the UHV-prepared network (see the high-resolution image in Fig. 3f ). In addition, some agglomerates are present, which are associated with residual adsorbates from the electrolyte solution. Discussion We have demonstrated that 2D-MOCNs of TMA−Fe, PBP−Fe and TMA−Mn on Au(111) catalyse the ORR and, moreover, that the final product and mechanism can be influenced by the choice of the metal centre. While the Fe structures exhibit a (2+2)e − mechanism, the TMA−Mn networks can directly reduce O 2 through a 4e − pathway.
To obtain a deeper understanding of the electron transfer characteristics and to overcome kinetic limitations, a flow-cell system will be implemented in future experiments. This work provides a proof of concept that surface-engineered metal–organic complexes and networks that display structural resemblance to enzyme active sites have a high potential for heterogeneous catalytic chemical conversions. The possibility to create novel and highly stable functional 2D coordination complexes at surfaces, using specifically designed organic molecules and transition metal centres and taking inspiration from nature, opens up a new route for the design of a new class of nanocatalyst materials with promising applications in electrocatalysis. Methods Sample preparation and STM data acquisition Network preparation and STM measurements were conducted in a home-built UHV chamber (base pressure ~5 × 10 −10 mbar). The Au(111) single-crystal substrate (MaTeck) was cleaned by repeated cycles of sputtering with Ar + ions and annealing at 825 K. TMA was purchased from Sigma-Aldrich (purity 97%) and PBP was kindly provided by Professor Mario Ruben 35 . The molecular powders were sublimed from a quartz crucible at 465 and 470 K for TMA and PBP, respectively, onto the surface held at room temperature. For the TMA networks, atomic iron and manganese (purity 99.9%) were evaporated using an electron-beam-heated evaporator. Subsequent to metal evaporation, the substrate was annealed to 525 K, resulting in a full monolayer of either TMA−Fe or TMA−Mn. In contrast, pure TMA networks cannot be annealed above 353 K because of desorption of the molecules. For the PBP networks, atomic iron was evaporated onto the surface held at 413 K, resulting in full monolayer network coverage. The STM topographs were acquired at 298 K. WSxM software was used for image analysis 46 . Transfer of the sample from UHV to the electrochemical cell A home-built sample transfer system between the UHV and EC environments was implemented. Supplementary Fig. S1 shows a scheme of the experimental set-up. After UHV preparation of the sample and verification by STM, the sample was brought to the transfer chamber and a 1-bar argon atmosphere was established. For the STM characterization of the surfaces after the EC experiments, the sample was removed from the electrochemical cell under potential control (0.15 V), rinsed with water, dried in an Ar atmosphere and heated to remove most of the residual adsorbates originating from the electrolyte solution. Electrochemical experiments ORR experiments were carried out in a conventional three-electrode electrochemical cell. The 0.1 M NaOH solution was prepared using NaOH pellets from Sigma-Aldrich (99.99% trace metals basis) and Milli-Q water (18.2 MΩ cm). Oxygen and argon gases were of purity grade 5.0. A silver/silver chloride (3 M KCl) electrode and a Pt or Au coiled wire were used as the reference and counter electrodes, respectively. All potentials are referred to the Ag/AgCl (3 M KCl) reference electrode. Cyclic voltammetry experiments were carried out using an Autolab potentiostat (Ecochemie Inc., model PGSTAT 302N). Additional information How to cite this article: Grumelli, D. et al. Bio-inspired nanocatalysts for the oxygen reduction reaction. Nat. Commun. 4:2904 doi: 10.1038/ncomms3904 (2013).
Fuel cells represent an important component of the energy transition, as they supply electrical energy without first having to create heat and steam from fossil fuels. Instead, they create the energy directly from a reaction of hydrogen and oxygen to form water. For that reason, they can create energy more efficiently than coal-fired or gas-fired power plants. Today's fuel cells require, however, costly platinum as a catalyst for this reaction, which restricts their more widespread use. A team at the Max Planck Institute for Solid State Research in Stuttgart has been inspired by nature to develop an alternative catalyst. It consists of organic molecules as well as iron or manganese on a metallic substrate. These materials are less costly and more easily available than platinum. Humans and animals obtain energy from the same reaction as fuel cells: they breathe in oxygen and bind hydrogen with it in their cells to form water. In this chemical conversion, energy is released, which the organism uses to live. Therefore, the idea to search in nature for a catalyst as a substitute for expensive platinum is logical. This noble metal drives a specific partial reaction during the energy conversion in a fuel cell: the so-called reduction of oxygen. During this reaction, oxygen picks up either two or four electrons, depending on whether it reacts directly with hydrogen or via an intermediate hydrogen peroxide molecule to form water. Natural oxygen-reducing enzymes contain metals like iron and manganese, which are easily obtained through nutritional sources. Organic molecules associated with these enzymes hold on tightly to the atoms of these metals, so that the oxygen can dock there and be reduced. Klaus Kern and his staff member Doris Grumelli from the Max Planck Institute for Solid State Research have now evaporated iron and manganese atoms together with organic molecules onto a gold substrate. In doing this, they established that the organic molecules and the metal atoms become ordered in patterns that strongly resemble the functional centres of enzymes. Networks formed in which individual iron or manganese atoms are surrounded by several organic molecules, like the intersecting points in a lattice fence. The iron atoms (blue) and the organic molecules (green, black) form a lattice pattern on the gold substrate. Credit: Grumelli et al., Nat. Comm. 2013 In order to be able to test how the catalyst functions in the networks, the researchers had to develop a transport system with which they can move the samples from vacuum into liquids. This is because the new surface structures were formed under a very high vacuum, while the tests took place outside the vacuum chamber in an electrochemical cell. It turned out that the catalytic activity depended on the kind of metallic centre, while, on the other hand, the stability of the structure depended on the type of organic molecules that form the network. Iron atoms led to a two-step reaction via the intermediate hydrogen peroxide molecule, while manganese atoms produced a direct reaction of oxygen to water. The latter reaction would be of interest for fuel cells, as experts expect higher efficiency in converting the chemical energy to electrical energy through the direct reaction. "However, the other variant could also have applications," says Grumelli, "even serving to modulate or interrupt the reaction." This can play a role in biosensors, for example.
In any event, the group has succeeded in making a new class of nanocatalysts that are cost-effective to manufacture and whose raw materials are plentiful. Doris Grumelli is already working on a new variant of these kinds of structures: with the help of special organic molecules, each containing a metal atom, and the use of additional metal atoms, she hopes to create a surface structure that simultaneously contains two types of metal atoms. "Such structures could serve as models for biological research," says the scientist.
10.1038/ncomms3904
Physics
Quantum teleportation: Transfer of flying quantum bits at the touch of a button
Takeda, S. et al. Deterministic quantum teleportation of photonic quantum bits by a hybrid technique, Nature, 15 August 2013. DOI: 10.1038/nature12366 Journal information: Nature
http://dx.doi.org/10.1038/nature12366
https://phys.org/news/2013-08-quantum-teleportation-bits-button.html
Abstract Quantum teleportation 1 allows for the transfer of arbitrary unknown quantum states from a sender to a spatially distant receiver, provided that the two parties share an entangled state and can communicate classically. It is the essence of many sophisticated protocols for quantum communication and computation 2 , 3 , 4 , 5 . Photons are an optimal choice for carrying information in the form of ‘flying qubits’, but the teleportation of photonic quantum bits 6 , 7 , 8 , 9 , 10 , 11 (qubits) has been limited by experimental inefficiencies and restrictions. Main disadvantages include the fundamentally probabilistic nature of linear-optics Bell measurements 12 , as well as the need either to destroy the teleported qubit or attenuate the input qubit when the detectors do not resolve photon numbers 13 . Here we experimentally realize fully deterministic quantum teleportation of photonic qubits without post-selection. The key step is to make use of a hybrid technique involving continuous-variable teleportation 14 , 15 , 16 of a discrete-variable, photonic qubit. When the receiver’s feedforward gain is optimally tuned, the continuous-variable teleporter acts as a pure loss channel 17 , 18 , and the input dual-rail-encoded qubit, based on a single photon, represents a quantum error detection code against photon loss 19 and hence remains completely intact for most teleportation events. This allows for a faithful qubit transfer even with imperfect continuous-variable entangled states: for four qubits the overall transfer fidelities range from 0.79 to 0.82 and all of them exceed the classical limit of teleportation. Furthermore, even for a relatively low level of the entanglement, qubits are teleported much more efficiently than in previous experiments, albeit post-selectively (taking into account only the qubit subspaces), and with a fidelity comparable to the previously reported values. Main Since originally being proposed 1 , the concept of quantum teleportation has attracted a lot of attention and has become one of the central elements of advanced and practical realizations of quantum information protocols. It is essential for long-distance quantum communication by means of quantum repeaters 2 , and it has been shown to be a useful tool for realizing universal quantum logic gates in a measurement-based fashion 3 . Many proposals and models for quantum computation rely on quantum teleportation, such as the efficient linear-optics quantum computing scheme of ref. 4 and the ‘one-way’ quantum computer using cluster states 5 . Although much progress has been made in demonstrating quantum teleportation of photonic qubits 6 , 7 , 8 , 9 , 10 , 11 , most of these schemes share one fundamental restriction: an unambiguous two-qubit Bell-state measurement (BSM), as is needed to teleport a qubit using two-qubit entanglement, is always probabilistic when linear optics is used, even if photon-number-resolving detectors are available 12 , 13 . There are two experiments avoiding this constraint, but in these either a qubit can no longer be teleported when it is delivered independently from an external resource 7 or an extra nonlinear element leads to extremely low measurement efficiencies, of the order of 10 −10 (ref. 8 ). A further experimental limitation, rendering these schemes fairly inefficient, is the probabilistic nature of the entangled resource states 13 . 
Efficient, near-deterministic quantum teleportation, however, is of great benefit in quantum communication, where it can be used to reduce the storage times of quantum memories in a quantum repeater, and it is a necessity in teleportation-based quantum computation. An additional drawback of the previous experiments, owing to the lack of photon-number-resolving detectors, was the need either to destroy the teleported qubit 20 or to attenuate the input qubit 10 , thus further decreasing the success rate of teleportation. We overcome all the above limitations by taking a different approach: continuous-variable quantum teleportation of a photonic qubit. The strength of continuous-variable teleportation lies in the on-demand availability of the quadrature-entangled states and the completeness of a BSM in the quadrature bases using linear optics and homodyne detections 15 . So far, these tools have been used to unconditionally teleport continuous-variable quantum states such as coherent states 16 , 21 . However, it has not yet been possible to apply them to qubits 18 , 22 , because typical pulsed-laser-based qubits (such as those in refs 6–11 ) have a broad frequency bandwidth that is incompatible with the original continuous-wave-based continuous-variable teleporter, which works only on narrow sidebands 16 , 21 . We overcome this incompatibility by using very recent, advanced technology: a broadband continuous-variable teleporter 23 and a narrowband time-bin qubit compatible with that teleporter 24 . Importantly, this qubit uses two temporal modes to represent a dual-rail-encoded qubit 13 , | ψ 〉 = α |0, 1〉 + β |1, 0〉, where |0, 1〉 and |1, 0〉 refer to the temporal modes of the photon (expressed in the two-mode photon-number basis), with | α | 2 + | β | 2 = 1. Therefore, teleportation of both modes of the qubit is accomplished by means of a single continuous-variable teleporter acting subsequently on the temporal modes of the time-bin qubits ( Fig. 1 ). Figure 1: Experimental set-up. A time-bin qubit is heralded by detecting one half of an entangled photon pair produced by an optical parametric oscillator (OPO). The continuous-variable teleporter ( g , feedforward gain) always transfers this qubit, albeit with non-unit fidelity. The teleported qubit is finally characterized by single or dual homodyne measurement to verify the success of teleportation. See Methods Summary for details. APD, avalanche photodiode; EOM, electro-optic modulator; HD, homodyne detector; LO- x and LO- p , local oscillators to measure x and p quadratures, respectively. Remarkably, the main weakness of continuous-variable teleportation, namely the intrinsic imperfection of the finitely squeezed, entangled states, can be circumvented to a great extent in the present 'hybrid' setting when the input to the continuous-variable teleporter is a dual-rail qubit. The entangled state of the teleporter is a two-mode squeezed, quadrature-entangled state, √(1 − g opt 2 ) ∑ n g opt n |n, n〉, here written in the number basis with g opt ≡ tanh( r ), where r is the squeezing parameter. Because infinite squeezing ( r → ∞) requires infinite energy, maximally entangled states are physically unattainable; thus, the teleportation fidelity is generally limited by r . Following the standard continuous-variable quantum teleportation protocol with unit gain for the receiver's feedforward displacement 15 yields a largely distorted output qubit with additional thermal photons. In contrast, non-unit gain conditions are useful in some cases 25 , 26 .
In particular, a single-mode continuous-variable teleporter with gain g opt creates no additional photons, because it is equivalent to a pure attenuation channel from which an intensity fraction of 1 − g opt ² is lost to the environment 17 , 18 . Moreover, the dual-rail-encoded qubit represents a quantum error detection code against such amplitude damping, where either a photon-loss error occurs, erasing the qubit, or a symmetric damping leaves the input qubit state completely intact 19 . These two facts together mean that the dual-rail continuous-variable teleporter at optimal gain, g opt , transforms the initial qubit state as follows: | ψ 〉〈 ψ | → g opt ² | ψ 〉〈 ψ | + (1 − g opt ²) |0, 0〉〈0, 0| (1) Most importantly, no additional photons are created and the quantum information encoded in | ψ 〉 remains undisturbed regardless of the squeezing level. The only effect of the teleporter is the extra two-mode vacuum term, whose fraction becomes arbitrarily small for sufficiently large squeezing, g opt → 1. This technique allows us to teleport arbitrary qubit states more faithfully by suppressing additional photons, thereby realizing unconditional teleportation with a moderate level of squeezing. Equation (1) also shows that a fidelity of unity is obtainable for any non-zero squeezing level, g opt > 0, provided that the signal qubit subspace is post-selected, that is, the non-occurrence of a photon-loss error is detected with a probability approaching zero for g opt → 0. We note that the remaining vacuum contribution could be made arbitrarily small without post-selection of the final states, by instead immediately discarding all quadrature results of the BSM that are too far from the phase-space origin 22 , 27 . To demonstrate successful qubit quantum teleportation, we prepare four distinct qubit states: |0, 1〉, |1, 0〉, | ψ 1 〉 and | ψ 2 〉, the latter two being superpositions of |0, 1〉 and |1, 0〉. This set, including even and uneven superpositions of |0, 1〉 and |1, 0〉 with both real and imaginary phases, represents a fair sample of qubit states on the Bloch sphere. In theory, our teleporter acts on any qubit state in the same way ( Supplementary Discussion ). The experimental density matrix of the input state, | ψ 1 〉, is shown in Fig. 2a . This input state is not a pure qubit state, but rather a mixed state with a 25 ± 1% vacuum contribution, a 69 ± 1% qubit contribution and a 6 ± 1% multiphoton contribution. Because the continuous-variable teleporter transfers input states of arbitrary dimension, all of these components are teleported and constitute the final, mixed output state. We note that in our first analysis we do not discard any of these contributions from the input or the output states, thus ensuring that none of the quantum states that enter or leave our teleporter is pre-selected or post-selected, respectively. Figure 2: Experimental density matrices. By means of homodyne tomography, two-mode density matrices are reconstructed both for the input and the output states in photon-number bases 24 : ρ̂ = Σ klmn ρ klmn |k, l〉〈m, n|. The bars show the real or imaginary parts of each matrix element ρ klmn . Blue, red and green bars correspond to the vacuum, qubit and multiphoton components, respectively. a , Input state, | ψ 1 〉. b – d , Output states for different values of r and g . First we present the output state of unit-gain teleportation with r = 1.01 ± 0.03 ( Fig. 2b ). All the matrix elements obtained are in good agreement with theory: the qubit contribution decreases, whereas the contribution of the multiphoton terms grows owing to the finite squeezing.
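Before examining the reconstructed density matrices in more detail, equation (1) lends itself to a short worked example. The following sketch, an idealized model that ignores the experimental imperfections discussed later, applies the map of equation (1) to a sample qubit and confirms both statements made above: the post-selected qubit fidelity is unity for any non-zero gain, while the deterministic qubit fraction shrinks as g opt ².

```python
import numpy as np

# Two-mode number basis truncated at one photon: |0,0>, |0,1>, |1,0>, |1,1>
VAC, KET01, KET10 = np.eye(4)[0], np.eye(4)[1], np.eye(4)[2]

def teleport_eq1(psi, g_opt):
    """Ideal gain-tuned teleporter, equation (1):
    |psi><psi| -> g^2 |psi><psi| + (1 - g^2) |0,0><0,0|."""
    rho_psi = np.outer(psi, psi.conj())
    rho_vac = np.outer(VAC, VAC)
    return g_opt**2 * rho_psi + (1 - g_opt**2) * rho_vac

def qubit_fidelity(rho_out, psi):
    """Post-select the {|0,1>, |1,0>} subspace, renormalize, compare to |psi>."""
    P = np.outer(KET01, KET01) + np.outer(KET10, KET10)   # subspace projector
    rho_q = P @ rho_out @ P
    rho_q = rho_q / np.trace(rho_q)                        # renormalize
    return np.real(psi.conj() @ rho_q @ psi)

psi = (KET01 + 1j * KET10) / np.sqrt(2)                    # an example qubit
for g in (0.77, 0.61, 0.30):
    rho = teleport_eq1(psi, g)
    print(f"g_opt = {g}: qubit fraction = {g**2:.2f}, "
          f"F_qubit = {qubit_fidelity(rho, psi):.3f}")
# F_qubit stays 1.000 for every non-zero gain; only the success
# probability (the qubit fraction g^2) shrinks as the squeezing drops.
```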
The off-diagonal elements of the qubit (|0, 1〉〈1, 0|, |1, 0〉〈0, 1|) retain the original phase information of the input superposition between |0, 1〉 and |1, 0〉, demonstrating that the non-classical feature of the qubits is preserved during the teleportation process. These off-diagonal elements, however, decay a little more rapidly than do the diagonal elements (|0, 1〉〈0, 1|, |1, 0〉〈1, 0|), illustrating that the quantum superposition of the qubit is the feature most susceptible to error in an experimental situation. Next we turn down the gain, g , and observe the new output state. Figure 2c shows the output state at g = 0.79 (close to g opt = 0.77). Comparing Fig. 2c with Fig. 2b , it can be seen that the qubit components are almost undisturbed, whereas the vacuum term grows and the occurrence of extra multiphoton components is suppressed. Thus, here the input–output relation is similar to the pure-attenuation model with a loss fraction of 1 − g ². The bar graph in Fig. 3 shows the g dependence of the qubit and multiphoton components in the output state, clearly demonstrating that gain tuning reduces the creation of additional photons in continuous-variable teleportation. Figure 3: Experimental results of teleportation including gain tuning. The horizontal axis, showing g , uses a logarithmic scale. Orange and green bars respectively represent qubit and multiphoton components of the teleported states (the left-hand vertical axis). Red diamonds and blue circles with error bars (1 s.d.) correspond to F qubit and F state , respectively (the right-hand vertical axis). Theoretical fidelity curves ( Supplementary Information ) are also plotted, in the same colours. All observed F qubit values significantly exceed the classical limit of 2/3. For g = 0.79, F state > 1 − η /3 and, thus, unconditional teleportation is demonstrated. The performance of the teleporter can be assessed by means of the fidelity. In our deterministic scheme, we must take into account the vacuum and multiphoton contributions, which was not the case in previous non-deterministic teleportation experiments using post-selection. The overall fidelity between the input state, ρ̂ in , and the output state, ρ̂ out , is 28 F state = [Tr √(√ρ̂ in ρ̂ out √ρ̂ in )]². When ρ̂ in has a qubit fraction of η , the classical bound on F state is F thr ≡ 1 − η /3, which is the best fidelity achievable without entanglement ( Supplementary Discussion ). Therefore, F state > F thr is a success criterion for unconditional quantum teleportation. Alternatively, we may also assess our teleporter by calculating the fidelity after post-selecting the qubit components alone: F qubit = 〈 ψ | ρ̂ qubit | ψ 〉, where | ψ 〉 is the ideal qubit state and ρ̂ qubit is obtained by extracting from the output density matrix the qubit subspace spanned by {|0, 1〉, |1, 0〉} and then renormalizing it. We note that F qubit > 2/3 is the success criterion of post-selective teleportation with a pure input qubit and a mixed output qubit 29 . As shown in Fig. 3 , the g dependence of these two fidelities is in good agreement with the theoretical predictions. The maximal fidelities are obtained at g = 0.79. Most importantly, here, we satisfy not only the usual qubit-subspace teleportation criterion, F qubit = 0.875 ± 0.015 > 2/3, but also the fully non-post-selected, Fock-space criterion, F state = 0.817 ± 0.012 > F thr = 0.769 ± 0.004, thus demonstrating deterministic, unconditional quantum teleportation of a photonic qubit.
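These two criteria can be checked arithmetically from the quantities defined above. A minimal sketch, using the standard Uhlmann form for F state and the reported input qubit fraction η = 0.69 (scipy's matrix square root stands in for the analytic expression):

```python
import numpy as np
from scipy.linalg import sqrtm

def uhlmann_fidelity(rho_in, rho_out):
    """F = ( Tr sqrt( sqrt(rho_in) @ rho_out @ sqrt(rho_in) ) )**2."""
    s = sqrtm(rho_in)
    return float(np.real(np.trace(sqrtm(s @ rho_out @ s))) ** 2)

# Sanity check: for a pure input, F reduces to <psi|rho_out|psi>
psi = np.array([1, 1j]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
rho_mixed = 0.8 * rho_pure + 0.2 * np.eye(2) / 2
print(f"F = {uhlmann_fidelity(rho_pure, rho_mixed):.3f}")   # 0.900

# Classical (no-entanglement) bound for an input with qubit fraction eta
eta = 0.69                       # qubit fraction of |psi_1> reported above
F_thr = 1 - eta / 3
print(f"F_thr = {F_thr:.3f}")    # ~0.770, consistent with 0.769 +/- 0.004

# Reported fidelities at g = 0.79 against both criteria
print(0.817 > F_thr)             # True -> unconditional (Fock-space) criterion
print(0.875 > 2 / 3)             # True -> qubit-subspace criterion
```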
The Fock-space criterion is fulfilled not only for the input qubit, | ψ 1 〉, but also for the other three qubit states, |0, 1〉, |1, 0〉 and | ψ 2 〉, with the same experimental r and g values; the observed F state values are 0.800 ± 0.006, 0.789 ± 0.006 and 0.796 ± 0.011, respectively (theoretically, F state and F qubit are independent of the qubit; see Supplementary Discussion and Supplementary Data ). We note that, although the pure-attenuation model predicts F qubit = 1 and a complete suppression of multiphoton terms at gain g opt , our results slightly deviate from that situation owing to experimental imperfections, such as extra loss and phase fluctuations of the squeezing. Finally, Fig. 2d shows the output state for the lower squeezing level, r = 0.71 ± 0.02, and g = 0.63 ( g opt = 0.61). Here, although the vacuum component becomes more dominant, the qubit components retain almost the same form as in Fig. 2c . Under these circumstances, the success of teleportation is only post-selective ( F qubit = 0.879 ± 0.015 > 2/3, F state < F thr ), owing to the insufficient squeezing resource. However, the overall success probability for transferring the qubits (43 ± 3%, the ratio of the output to the input qubit components) is still much higher than in previous experiments (of the order of 1%), even for this relatively low squeezing level. This shows the great advantage of our hybrid approach over the standard approaches. In conclusion, we experimentally realized unconditional quantum teleportation of four distinct photonic qubit states, exceeding the fidelity limits of classical teleportation in a deterministic fashion. In our scheme, once the input qubit states are prepared, there is no need to preprocess or post-select them, and the teleported states freely emerge at the output of our teleporter. Methods Summary Our experimental set-up is shown in Fig. 1 . The time-bin qubit is conditionally created at a rate of ∼5,000 s⁻¹ using a continuous-wave laser 24 (wavelength, 860 nm), by extending the technique of ref. 30 . Each time bin has a frequency bandwidth of 6.2 MHz around the laser frequency. Our continuous-variable teleporter 23 operates continuously with a bandwidth of 12 MHz around the laser frequency, which is sufficiently wide to cover the qubit bandwidth, ultimately enabling us to teleport qubits in a deterministic fashion. In our teleporter, two single-mode squeezed states (each with an ideal, pure squeezing parameter r ) from two optical parametric oscillators are suitably mixed at a 50:50 beam splitter to generate the quadrature-entangled beams. This entanglement source is permanently available with no need for any probabilistic heralding mechanism. At the sending station of the teleporter, the input qubit is first combined with one half of the entangled beams at a 50:50 beam splitter. A complete continuous-variable BSM is then performed by measuring the two output modes of the beam splitter through two homodyne detections of two orthogonal quadratures. These homodyne signals are classically communicated to the receiving station, where they are multiplied by a gain factor ( g ) and fed forwards by means of a displacement operation on the other half of the entangled beams. Time synchronization of this final displacement is achieved by introducing an optical delay to the corresponding entangled beam. Finally, the output state is characterized by single or dual homodyne measurement 24 .
For every state, 100,000 sets of quadrature values are recorded, and the corresponding two-mode density matrix is reconstructed using the maximum-likelihood technique, without compensating for finite measurement efficiencies.
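In the experiment this maximum-likelihood reconstruction runs on homodyne quadrature data; as a compact illustration of the same iterative principle, the following sketch implements the standard RρR fixed-point iteration for a single qubit measured in the three Pauli bases. The outcome frequencies and iteration budget are invented for the example.

```python
import numpy as np

# Six POVM elements: Pauli-basis projectors scaled by 1/3 (three settings)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
povm = [(I2 + sign * s) / 2 / 3 for s in (sx, sy, sz) for sign in (+1, -1)]

# Hypothetical outcome frequencies (equal shots per basis), normalized to 1
freqs = np.array([0.9, 0.1, 0.5, 0.5, 0.6, 0.4]) / 3

rho = I2 / 2                         # start from the maximally mixed state
for _ in range(500):                 # R rho R fixed-point iteration
    probs = [np.real(np.trace(E @ rho)) for E in povm]
    R = sum(f / p * E for f, p, E in zip(freqs, probs, povm))
    rho = R @ rho @ R
    rho = rho / np.trace(rho)        # renormalize after each step

print(np.round(rho, 3))              # maximum-likelihood density matrix
```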
By means of the quantum-mechanical entanglement of spatially separated light fields, researchers in Tokyo and Mainz have managed to teleport photonic qubits with extreme reliability. This means that a decisive breakthrough has been achieved some 15 years after the first experiments in the field of optical teleportation. The success of the experiment conducted in Tokyo is attributable to the use of a hybrid technique in which two conceptually different and previously incompatible approaches were combined. "Discrete digital optical quantum information can now be transmitted continuously – at the touch of a button, if you will," explained Professor Peter van Loock of Johannes Gutenberg University Mainz (JGU). As a theoretical physicist, van Loock advised the experimental physicists in the research team headed by Professor Akira Furusawa of the University of Tokyo on how they could most efficiently perform the teleportation experiment to ultimately verify the success of quantum teleportation. Their findings have now been published in the prestigious specialist journal Nature. Quantum teleportation involves the transfer of arbitrary quantum states from a sender, dubbed Alice, to a spatially distant receiver, named Bob. This requires that Alice and Bob initially share an entangled quantum state across the space in question, e.g., in the form of entangled photons. Quantum teleportation is of fundamental importance to the processing of quantum information (quantum computing) and quantum communication. Photons are especially valued as ideal information carriers for quantum communication since they can be used to transmit signals at the speed of light. A photon can represent a quantum bit or qubit analogous to a binary digit (bit) in standard classical information processing. Such photons are known as 'flying quantum bits'. The first attempts to teleport single photons or light particles were made by the Austrian physicist Anton Zeilinger. Various other related experiments have been performed in the meantime. However, teleportation of photonic quantum bits using conventional methods proved to have its limitations because of experimental deficiencies and difficulties with fundamental principles. What makes the experiment in Tokyo so different is the use of a hybrid technique. With its help, a completely deterministic and highly reliable quantum teleportation of photonic qubits has been achieved. The accuracy of the transfer was 79 to 82 percent for four different qubits. In addition, the qubits were teleported much more efficiently than in previous experiments, even at a low degree of entanglement. Entanglement 'on demand' using squeezed light The concept of entanglement was first formulated by Erwin Schrödinger and involves a situation in which two quantum systems, such as two light particles for example, are in a joint state, so that their behavior is mutually dependent to a greater extent than is normally (classically) possible. In the Tokyo experiment, continuous entanglement was achieved by means of entangling many photons with many other photons. This meant that the complete amplitudes and phases of two light fields were quantum correlated. Previous experiments only had a single photon entangled with another single photon – a less efficient solution. 
"The entanglement of photons functioned very well in the Tokyo experiment – practically at the press of a button, as soon as the laser was switched on," said van Loock, Professor for Theory of Quantum Optics and Quantum Information at Mainz University. This continuous entanglement was accomplished with the aid of so-called 'squeezed light', which takes the form of an ellipse in the phase space of the light field. Once entanglement has been achieved, a third light field can be attached to the transmitter. From there, in principle, any state and any number of states can be transmitted to the receiver. "In our experiment, there were precisely four sufficiently representative test states that were transferred from Alice to Bob using entanglement. Thanks to continuous entanglement, it was possible to transmit the photonic qubits in a deterministic fashion to Bob, in other words, in each run," added van Loock. Earlier attempts to achieve optical teleportation were performed differently and, before now, the concepts used have proved to be incompatible. Although in theory it had already been assumed that the two different strategies, from the discrete and the continuous world, needed to be combined, it represents a technological breakthrough that this has actually now been experimentally demonstrated with the help of the hybrid technique. "The two separate worlds, the discrete and the continuous, are starting to converge," concluded van Loock.
10.1038/nature12366